41% glyphosate-cancer increase claim under fire: Did authors of new meta-study deliberately manipulate data or just botch their analysis?

Could exposure to glyphosate—an herbicide often paired with genetically engineered corn, soybeans, cotton, and other crops—be causing cancer? That question has become the central contention advanced by critics of agricultural biotechnology.

Glyphosate is the active ingredient of Monsanto’s Roundup and is now off patent. It’s the world’s most widely sold herbicide, used by farmers and home gardeners since 1974. Multiple studies and assessments from every major regulatory agency around the world have concluded that glyphosate poses no significant health risks, either to the public from trace residues in our food or to workers exposed through farm work or manufacturing.

[Editor’s note: Read the Genetic Literacy Project summary analysis of the research on the potential dangers posed by glyphosate]

However, a new analysis posted online on February 10 in the journal Mutation Research, “Exposure to Glyphosate-Based Herbicides and Risk for Non-Hodgkin Lymphoma: A Meta-Analysis and Supporting Evidence,” raises a cautionary flag, challenging the consensus that glyphosate is safe. The paper includes no new data. It is what is called a “meta-analysis” because it crunches data from multiple studies. Large meta-analyses are often considered more reliable than individual studies.

Paper notes what researchers say are the relevant human studies

What are we to make of this study? The authors’ research purports to show that human exposure to glyphosate increases the risk of non-Hodgkin’s lymphoma, or NHL, by an eye-popping 41%. The first author is Luoping Zhang, adjunct professor of toxicology at the University of California, Berkeley. I will refer to the study as the Zhang paper.

The final version of the paper is not yet published. Nevertheless, the available preprint has already attracted considerable media attention and stirred much Internet debate. An article citing the study by Carey Gillam, a prominent anti-glyphosate activist and lobbyist and former Reuters reporter [see footnote below detailing Gillam’s controversial history of reporting on this issue], appeared in The Guardian on February 14 under the headline: “Exposure to weed killing products increases risk of cancer by 41%. Evidence ‘supports link’ between exposures to glyphosate herbicides and increased risk for non-Hodgkin lymphoma.” For days afterward, the Gillam article was the most widely circulated analysis of the study. It heightened concerns among some people that the global consensus finding glyphosate safe might somehow have missed key evidence, deliberately or unintentionally.

The world’s most studied herbicide

Before examining the actual Mutation Research paper, some background is essential.

The question of whether the world’s most widely used weed killer causes cancer has been the focus of intense controversy since March 2015, when the International Agency for Research on Cancer (IARC) issued a report classifying glyphosate as a “probable carcinogen.” IARC is a controversial agency; it stands alone in the global regulatory community in how it operates. IARC does not evaluate risk in the way other health agencies do, taking into account actual real-world exposure to a substance or agent, in terms of the intensity and duration of that exposure. Rather, IARC chooses to evaluate “hazard”—that is, whether a substance or agent could possibly cause cancer under some conditions, no matter how far removed from everyday experience. Under this much-relaxed standard, of the approximately 1,000 agents that IARC has classified with respect to carcinogenicity, only one has been judged by the agency to be “probably not carcinogenic.”

The IARC’s list of “known” (group 1), “probable” (group 2A), and “possible” (group 2B) carcinogens includes: sunshine; mobile phones; alcoholic beverages, including organic wine; wood dust; coffee; outdoor pollution; working as a hairdresser; wood smoke; working night shifts; hot tea; red meat; and the herbicide glyphosate. Only the glyphosate designation has led to worldwide protests by advocacy groups and concern by government agencies in some countries, particularly in Europe.

[Editor’s note: Read the Genetic Literacy Project’s profile of IARC]

IARC’s conclusion regarding glyphosate’s potential danger to workers conflicts with the assessment of every other health or regulatory agency that has reviewed the safety of the herbicide.  That includes the Environmental Protection Agency, European Food Safety Authority, Food and Agriculture Organization in a joint study with the World Health Organization, European Chemicals Agency, Health Canada, German Federal Institute for Risk Assessment, and others. These agencies have all concluded that, at the levels to which farmers and the general population are exposed, glyphosate does not pose a cancer risk.

The EPA’s Carcinogenicity Peer Review Committee specifically rejected IARC’s claims that epidemiological studies raise questions of a likely NHL cancer link, concluding:

The epidemiological evidence at this time does not support a causal relationship between glyphosate exposure and solid tumors. There is also no evidence to support a causal relationship between glyphosate exposure and the following non-solid tumors: leukemia, multiple myeloma, or Hodgkin lymphoma. The epidemiological evidence at this time is inconclusive for a causal or clear associative relationship between glyphosate and NHL [non-Hodgkins lymphoma].

In the wake of the IARC finding, the World Health Organization again reviewed and dismissed IARC’s findings. Glyphosate’s toxicity was so low, the committee wrote, that it was not necessary to establish an ARfD—an acute reference dose often used to regulate risk. It also reviewed glyphosate’s impact on workers, noting that the only “high quality” study found no evidence of a cancer link.

In a separate review, a joint panel from WHO and the Food and Agriculture Organization of the United Nations issued a review of glyphosate in May 2016, concluding it poses no cancer risks as encountered in food and does not impact our genes.

Following the release of the IARC report, a major epidemiological study was published based on data collected by the Agricultural Health Study (AHS). The AHS gathered data on roughly 45,000 people who had handled glyphosate, beginning in the mid-1990s. The authors of the paper (epidemiologist Gabriella Andreotti and colleagues, “Glyphosate Use and Cancer Incidence in the Agricultural Health Study,” 2018) concluded:

In this large, prospective cohort study, no association was apparent between glyphosate and any solid tumors or lymphoid malignancies overall, including NHL and its subtypes. There was some evidence of increased risk of AML among the highest exposed group that requires confirmation.

It should be noted that IARC’s conclusion was based on animal evidence (studies conducted in rats and mice), rather than on human, epidemiological evidence, which the IARC considered to be “limited.” IARC also has been criticized for selecting the few “positive” results from rodent studies that seemed to show an increased tumor yield in exposed animals, while ignoring exculpatory results that showed decreasing tumor yield in exposed animals.

Despite many questions surrounding the IARC glyphosate report, its conclusion has stirred widespread concern among the public and has been taken up by environmental activists. In addition, there are currently roughly 9,300 pending lawsuits in US courts brought by plaintiffs who claim that their cancer was caused by exposure to Roundup, which contains glyphosate as its active ingredient.

In the first case to go to trial, a California school groundskeeper, Dewayne Johnson, sued Monsanto (the manufacturer of Roundup), claiming that his terminal NHL was caused by his exposure to Roundup in the course of his work. Last August, he was awarded $39 million in compensatory damages and $250 million in punitive damages. The punitive damages were later reduced to $39 million. The jurors said in interviews that they were heavily influenced by the IARC hazard designation.


Does the meta-analysis undermine the global consensus that glyphosate poses no serious health danger?

Let’s now turn to the claims in the Mutation Research paper.

The background summarized above is relevant, because the Zhang paper refers to “considerable controversy” surrounding the question of the carcinogenicity of glyphosate. The authors describe a situation in which opinion is evenly divided, which is not accurate. They appear to deliberately ignore the overwhelming consensus among health agencies regarding the safety of glyphosate. The EPA, Health Canada, along with the European Food Safety Authority (EFSA) and the German BfR, addressed and dismissed IARC’s cancer finding after its release in 2015.

The Zhang paper also makes no mention of the serious scientific and ethical questions raised about IARC’s assessment. There is evidence, documented in an extensive multi-part investigation by Reuters’ Kate Kelland (see here, here), that IARC had initially concluded that the weight of evidence showed glyphosate posed no serious carcinogenic threat, but that this conclusion was changed days before the report’s release. Christopher Portier, the Invited Specialist for the IARC panel that evaluated glyphosate, began working with law firms suing Monsanto within weeks of the announcement of IARC’s classification of glyphosate as a probable human carcinogen. IARC has denied Reuters’ findings.

Epidemiologist Geoffrey Kabat

I should note that this is a very long paper—three or four times the length of the average paper in epidemiology. The unusual length is due, in large part, to the fact that the authors include many secondary analyses by which they attempt to bolster the case they are making. However, the secondary analyses serve to obscure much more important issues, which the authors avoid addressing.

The bulk of the paper is devoted to a meta-analysis of the small number of epidemiologic studies that examined the association of glyphosate exposure and risk of developing NHL.

In the main analysis presented in the paper, Zhang and the other researchers combine the results from the large Agricultural Health Study cohort with the results of five case-control studies. The result was a summary relative risk of 1.41 (95% confidence interval 1.13-1.75).  This means that, compared to those who were not exposed to glyphosate, those exposed to the compound had a 41% higher likelihood of developing NHL.

For context, although the 41% figure looks large and alarming, this statistic represents a very modest increase in risk. NHL is a rare disease. In the US, roughly 20 new cases are diagnosed per 100,000 men and women each year. If the 41% figure is substantiated (which is not clear from the evidence in this paper), that would mean that 8 additional new NHL cases would be expected each year for every 100,000 exposed to glyphosate. But the 41% figure, as we will see below, is almost certainly too high, based on the best human evidence. Yet, the authors highlighted this questionable statistic in the abstract, more than likely knowing that it would be picked up by journalists and activists, and would likely instill fear in the public.
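To make the arithmetic explicit, here is a back-of-the-envelope calculation, a minimal sketch in Python that uses the rounded figures quoted above rather than the paper’s exact inputs:

```python
# Back-of-the-envelope excess-risk calculation (illustrative only; uses the
# rounded figures quoted in the text, not the paper's exact inputs).
baseline_incidence = 20    # new NHL cases per 100,000 people per year (US, approximate)
relative_risk = 1.41       # summary relative risk reported in the Zhang meta-analysis

exposed_incidence = baseline_incidence * relative_risk   # ~28.2 per 100,000
excess_cases = exposed_incidence - baseline_incidence    # ~8.2 per 100,000

print(f"Expected incidence among the exposed: {exposed_incidence:.1f} per 100,000 per year")
print(f"Excess cases attributable to exposure: {excess_cases:.1f} per 100,000 per year")
```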

That’s exactly what has happened.

Questionable data result in questionable conclusions (junk in, junk out)

Meta-analysis is a statistical technique used to combine a number of relatively small studies in order to obtain a more stable, and therefore more credible estimate of an association. A meta-analysis produces a summary relative risk (RR), which is a weighted average of the RRs from the individual studies.

The cardinal requirement for conducting a valid meta-analysis is that the individual studies are similar enough in their methods, study design, and data quality to justify combining them to obtain an overall summary measure of risk. In other words, the results of a meta-analysis are only as good as the individual studies that go into it.
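To make the mechanics concrete, the sketch below shows one standard way of computing such a weighted average: a fixed-effect, inverse-variance-weighted meta-analysis on the log relative-risk scale. The helper function and the study inputs are illustrative assumptions of mine; only the first entry corresponds to the AHS estimate discussed later in this article, while the case-control entries are hypothetical placeholders, not the actual inputs used by Zhang et al.

```python
import math

def pool_relative_risks(studies):
    """Fixed-effect (inverse-variance) pooling of relative risks.

    Each study is a (rr, ci_low, ci_high) tuple; weights are derived from the
    width of the 95% confidence interval on the log scale, so larger (more
    precise) studies dominate the summary estimate.
    """
    weighted_sum = total_weight = 0.0
    for rr, lo, hi in studies:
        log_rr = math.log(rr)
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE of log RR from the 95% CI
        weight = 1.0 / se**2
        weighted_sum += weight * log_rr
        total_weight += weight
    pooled_log = weighted_sum / total_weight
    pooled_se = math.sqrt(1.0 / total_weight)
    return (math.exp(pooled_log),
            math.exp(pooled_log - 1.96 * pooled_se),
            math.exp(pooled_log + 1.96 * pooled_se))

# Illustrative inputs: one large, precise cohort study plus small, imprecise
# case-control studies. The first tuple is the AHS estimate cited in the text;
# the case-control values are hypothetical placeholders.
example_studies = [
    (1.12, 0.83, 1.51),   # large cohort: narrow CI, large weight
    (1.85, 0.55, 6.20),   # small case-control: wide CI, small weight (placeholder)
    (2.10, 0.60, 7.30),   # small case-control (placeholder)
]
print(pool_relative_risks(example_studies))
```

Because the weights are inversely proportional to the squared standard error, a single large, precisely estimated study can dominate the pooled result, a point that becomes important below.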

In their primary meta-analysis, Zhang and co-authors combine 6 studies. Five of these are case-control studies; one is a cohort study. In a case-control study, researchers identify cases of the disease of interest (through hospitals, cancer registries, etc.) and select a comparison group that is generally similar to the case group but is free of the disease under study. Cases and controls are then interviewed about their personal habits and past exposures. This method has the strength of enabling one to enroll large numbers of cases, even when a disease is rare, as in the case of NHL.

However, a major weakness of the case-control study design is that one obtains information about exposures of interest from cases, after they have already developed the disease. Cases may respond to questions about their exposures differently from controls. Specifically, cases may be more apt to ruminate about what caused their illness, and this may lead them to emphasize their exposures, whereas the controls do not have the same motivation. This is referred to as “recall bias” and can lead to a spurious association.

An additional problem, which is pertinent to the issue at hand, is that population-based case-control studies are not suitable for studying environmental or occupational exposures, due to the small percentage of people exposed to any particular agent.

Cohort studies start by enrolling a study population (a cohort) that can be assessed at the outset in terms of their health and exposure history and then followed for a number of years in order to identify new cases of disease that develop during follow-up.   Cohort studies usually take more time and are more expensive to conduct than case-control studies.

Furthermore, the cohort needs to be large enough, and followed for an adequate duration, to yield enough cases of a rare disease to evaluate the association of interest. The principal advantage of a cohort study over a case-control study is that in the former, the researcher obtains information about the exposures of interest prior to the development of disease. Thus, recall bias is not an issue in cohort studies. An additional advantage is that cohorts can be made up of people working in a particular occupation, which increases the prevalence of occupational exposures of potential interest (e.g., pesticides in a cohort of farmers).

Why the data do not support the conclusions of the Zhang paper

In this case, the researchers perform many subsidiary analyses to test whether the estimate of a 41% increase in risk for those exposed to glyphosate stands up under different assumptions. But much of their lengthy discussion is beside the point and serves only to distract the reader from the key question regarding their analysis: Are the different studies sufficiently comparable in the quality of their data and the calculation of risk to justify combining them?

A look at Table 4 of the Zhang paper, which reports the results of the individual studies combined in the meta-analysis, helps answer this question.
Detailed exposure information was available in the AHS enabling the researchers to classify the study population of ~54,000 pesticide applicators into quartiles of exposure. The risk estimate selected by Zhang et al. (from among many results in the 2018 paper by Andreotti et al.) for farmers in the quartile with the highest exposure in AHS, compared to farmers unexposed to glyphosate, is 1.12 (95% CI 0.83-1.51), indicating no increased risk for those with the highest cumulative exposure. (We will return to the crucial choice of this estimate below.) Owing to the large size of the AHS, the confidence limits are fairly narrow. In fact, 440 of the 575 NHL cases in the AHS study were exposed to glyphosate.

If we look at the case-control studies, the risk estimates for 4 of the 5 studies were elevated, ranging from 1.85 to 2.36, while the remaining study showed no elevation in risk. The confidence intervals are much broader, reflecting both the smaller size of the case-control studies and the smaller number of cases who were exposed to glyphosate. The number of glyphosate-exposed cases across all of the case-control studies is only 136, out of a total of 2,836 NHL cases.

As mentioned earlier, a key point that is not well understood, even by some epidemiologists, is that population-based case-control studies of occupational exposures have relatively small—often, very small—numbers of cases and controls who are exposed to the agent of interest. This not only means lower statistical power to detect an effect, but it also tends to produce estimates that are highly unstable (that is, small changes in how one categorizes exposure can result in large differences in the estimates).

Authors misuse the term “exposure”

A related point that is clear from Table 4 is that “exposure” does not mean the same thing in the different studies. Whereas in the AHS the highest quartile of exposure is contrasted with “no glyphosate exposure” when estimating risk, in three of the case-control studies the exposure contrast is simply “ever” vs. “never exposed to glyphosate.”

In two other case-control studies, the definition of exposure is “greater than 10 days/year” (vs. “no exposure to any pesticide”) and “greater than 2 days/year” (vs. “no exposure to glyphosate”). Thus, the exposure classification in the case-control studies is much cruder than in the AHS, and one would not expect such crude, dichotomous, comparisons to show a higher risk than the sharper contrast used in the AHS between the highest and lowest exposure groups.

The AHS examined glyphosate exposure in relation to the risk of 20 different cancers, among them different types of lymphohematopoietic cancers, including NHL. In their analysis, the researchers adjusted for exposure to other pesticides, as well as for important confounding factors, such as smoking and body weight. In contrast, the case-control studies focus on a single type of cancer, NHL, which actually has different subtypes. And they were unable to adjust for many confounding factors.

Owing to the size of the AHS and the large number of cohort members who were exposed to glyphosate, and the long follow-up of the cohort, this study provides much finer-grained information about the health effects of exposure than the case-control studies with the weaknesses described above.

Given the differences in data quality and methods between the case-control studies and the cohort study, it is highly questionable to combine them. The authors devote a lot of space to discussing potential weaknesses of the AHS to explain why it might have failed to detect a positive association with glyphosate exposure. Much of this discussion is beside the point.  They devote much less space to describing the real deficiencies of the case-control studies.

One further point needs to be made

When conducting a meta-analysis, one is often faced with the choice of which risk estimate to use from a given study, which may present a number of different risk estimates. The updated analysis of the AHS by Andreotti et al. (2018) presented a large number of risk estimates resulting from different analyses of lymphohematopoietic malignancies, including NHL. These include results for 5-year, 10-year, 15-year, and 20-year lag periods. Zhang et al. chose to use the 20-year lag result for inclusion in the meta-analysis (RR = 1.12, 95% CI 0.83-1.51).

In fact, the unlagged and the 5-year, 10-year, and 15-year lagged RRs for the highest quartile are all below 1.00 (0.87, 0.87, 0.83, and 0.94, respectively). There is no particular justification for picking the 20-year lagged result, as Zhang and her team do. They could just as reasonably have picked the 10-year lag analysis, which gave RR = 0.83 (95% CI 0.62-1.10).

It is also interesting and perhaps revealing to note that the 20-year lagged RR was the largest of the five risk estimates presented in the paper and the only one above 1.00. If Zhang et al. had picked the 10-year lagged RR for inclusion in the meta-analysis, the overall result would likely not have been statistically significant, since, even with the selection of the largest RR, the lower confidence limit of the summary RR is barely above the threshold for statistical significance (lower bound = 1.13). (The data from the AHS account for more than 50% of the total data in the meta-analysis, so using an RR below 1.0 would exert a strong downward pull on the summary RR.)
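To illustrate that downward pull, the snippet below continues the hypothetical sketch introduced earlier (reusing pool_relative_risks and the placeholder case-control entries) and simply swaps the 20-year-lagged AHS estimate for the 10-year-lagged one. Because the case-control inputs are placeholders, this does not reproduce the paper’s numbers; it only shows the direction of the effect that a large, precisely estimated study exerts on the summary RR.

```python
# Continuation of the illustrative sketch above (requires pool_relative_risks
# and example_studies from that snippet). The case-control entries remain
# hypothetical placeholders; only the AHS estimates are taken from the text.
with_20yr_lag = [(1.12, 0.83, 1.51)] + example_studies[1:]   # 20-year-lagged AHS RR
with_10yr_lag = [(0.83, 0.62, 1.10)] + example_studies[1:]   # 10-year-lagged AHS RR

print("Pooled RR using the 20-year-lagged AHS estimate:", pool_relative_risks(with_20yr_lag))
print("Pooled RR using the 10-year-lagged AHS estimate:", pool_relative_risks(with_10yr_lag))
```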

Recapitulating the key points

  1. Zhang and the other researchers set out to combine the results of studies of drastically different quality. Yet they never question the appropriateness of conducting a meta-analysis, which, in this case, is the weighted average of one high-quality cohort study with five case-control studies of much poorer quality.
  2. Confronted with the choice of which risk estimate to select from the AHS, the researchers chose the highest of the 5 RRs reported in Andreotti et al. (2018), thus ensuring that the resulting summary RR would reach statistical significance.
  3. In order to give their paper the appearance of academic rigor, the authors conducted a huge number of secondary analyses to convince us that the 41 percent increase in risk is a solid result that is not affected by varying different aspects of their analysis. But these “sensitivity analyses” and subtle statistical considerations are presented instead of addressing more basic issues that determined the results of the meta-analysis. For example, if the authors were truly interested in the validity of their meta-analysis, they would have acknowledged the weaknesses of the case-control studies. Furthermore, they would have presented an analysis showing the effect of using each of the 5 different risk estimates reported in the AHS study, not just the highest one. Such an analysis would likely have shown that using most of the RRs reported in the AHS yielded a result that was not statistically significant. Of course, this would have been much less newsworthy and would have made their paper much less likely to be published.
  4. The authors highlighted the 41% result, which they almost certainly realized would grab headlines and inspire fear.

What might appear, to those unfamiliar with the data or the ways in which data can be manipulated, to be a dispassionate academic study is anything but. One can’t escape the impression that the authors deliberately included a selected and unrepresentative result from the highly respected AHS in their meta-analysis and used the far inferior case-control studies to jack up the summary relative risk to obtain a statistically significant finding. The authors appear to have judged that few lay people, including journalists, and even many scientists, would be likely to notice the sleight of hand amid the large number of secondary analyses and lengthy obfuscatory discussions.

It is ironic that the authors cite the classic paper by John Ioannidis, “Why Most Published Research Findings Are False” (2005), since, by making the cardinal errors pointed out above, they have produced a result that no amount of secondary analyses and statistical fine points can redeem.

One final observation. This paper underwent peer review, most likely with at least two outside reviewers as well as the editor(s) at the journal evaluating it. We must ask how such a misleading and tendentious paper could have passed the peer review process.

Note on The Guardian article on this meta-study by US Right to Know staffer Carey Gillam:

1. I’ve written previously about Carey Gillam, the author of The Guardian article that reignited public interest in these studies. Gillam is a former Reuters reporter who now works as the ‘research director’ for US Right to Know, an organic industry-funded anti-biotechnology group that has been in the headlines for its repeated attacks against agricultural biotechnology, university scientists and science communicators. She recently wrote a book claiming to document the alleged dangers of glyphosate.

 

Geoffrey Kabat is a cancer epidemiologist and the author of Hyping Health Risks: Environmental Hazards in Daily Life and the Science of Epidemiology and Getting Risk Right: Understanding the Science of Elusive Health Risks. He is a GLP board member. Follow him at @geokabat. Disclosure: I have no financial involvement with Monsanto/Bayer or any other conflict-of-interest related to this topic.

Genetic Literacy Project Disclosure: Neither the GLP nor any of its staff has any financial involvement with Monsanto/Bayer or any other conflict of interest related to this topic.
