Viewpoint: Should you be concerned when you read that a chemical in your food has been linked to cancer? Here’s an epidemiologist’s checklist to detect over-hyped scares

During the outbreak of the SARS-CoV-2 pandemic, epidemiology assumed an outsized role in the public consciousness. It was pervasive in the news, as scientists struggled to elucidate both how the virus was being transmitted and what its likely short- and long-term health effects were.

Its high profile during the crisis harks back to epidemiology’s “heroic” period over the prior century and a half, when disease scourges were a part of everyday life. The rise of epidemiology began in the 1850s, led by John Snow, widely considered the ‘father of epidemiology,’ who demonstrated that contaminated water, not “miasma,” was the cause of cholera.

John Snow. Credit: History Extra

Triumph after triumph followed: identifying cigarette smoking as the cause of lung cancer and many other diseases; the eradication of smallpox; pinpointing the handful of risk factors contributing to heart disease; and unraveling the role of human papillomavirus as the cause of cervical cancer. These breakthroughs were made possible by the use of the epidemiologic method to address public health crises of global dimensions.

Slipping influence and the politicization of science?

In recent decades, epidemiology has become more specialized, with researchers focusing on the social determinants of health, “life-course epidemiology,” and the contribution of specific genetic variants to the risk of particular diseases, as well as the study of “gene-environment interactions.”

The facet of epidemiology, however, that gets the most attention nowadays is what has been referred to as “risk factor epidemiology.” Typically, studies of this kind isolate a specific factor – a component of diet or exposure to a chemical (often at trace levels)—and examine its association with a specific disease. When the results of a study are reported in the media, we get the message that “factor X is associated with increased risk of disease Y.” Unlike the breakthrough findings that established the critical importance of epidemiology, this type of study speaks directly to the public via the media, as if the results have direct relevance to our health. As a consequence, it has become a staple of our daily news feed. 

For example, a 2021 study by researchers at Washington University and Harvard found that, compared to individuals who drank sugar-sweetened beverages less than once a week, those who consumed these beverages two or more times per day experienced a doubling of the risk of early-onset colorectal cancer. This result is of potential interest because colorectal cancer, which is mainly a disease of older age, has been occurring more frequently in younger people, i.e., below age 40. But for many reasons, the result of the study is far from telling us that scientists have uncovered a cause—let alone an important cause—of this new phenomenon.

Human beings are so constructed that when we read such a news story, we immediately take the highlighted finding as if it were true and worry that something we had considered relatively harmless—drinking sweetened beverages— could be adversely affecting our health or the health of people close to us. People who have an interest in publicizing this type of result (scientist authors, journal editors, and journalists) know that stories like this provoke a visceral reaction. The logic behind this kind of reporting could be distilled down to the injunction, “Start worrying—details to follow.” 

Here is what one of many news reports made of the story.

Seems scary, based on the headline and the highlighted result. In fact, however, epidemiologists are well-aware that results like the one pertaining to early-onset colorectal cancer are merely hints that need to be confirmed or rejected with larger and more in-depth studies, probing this and many other factors in the context of what is known about the causes of colorectal cancer, which is the third most common cancer in men and women. But the paper and the headlines it evoked don’t convey that nuance.

Few people are in a position to fully realize how many questions, qualifications, and complexities lie behind the bald statement of results like this. Below, I list some of the most important questions that need to be asked when we encounter the results of a study that appears to be drawing our attention to a novel factor that may be affecting our health. Use of a checklist like this can provide a needed corrective to the many preliminary results that get into circulation, creating a fog of unvetted and often misleading information.

  • What is the finding/main result?
  • How strong is the result? How consistent is the result within the study? 
  • Does the result pertain to the whole study population, or is it only present in a subgroup?
  • How many different factors did the researchers examine? (Is the finding the result of “multiple comparisons”/“data dredging”?)
  • How good are the data from the study? How well did the researchers measure the study factor?
  • Could the result be due to confounding, i.e., could some other factor that is correlated with the study factor be the true cause?
  • Is there evidence of a dose-response relationship?
  • Do the authors report the “absolute risk,” as well as the relative risk? (“Absolute risk” conveys the population impact of the factor, i.e., how many cases of the disease would be prevented if the risk factor were eliminated). Reporting only the relative risk often makes the factor look more important than if one reported the absolute risk. 
  • How does the result accord with the results of other studies? 
  • How does the result comport with what we know about other established risk factors?
  • Do the authors acknowledge the limitations of the data?
  • Do they put their results in perspective?
  • Keep in mind that association does not prove causation.
  • Although authors often point to a biological mechanism that could explain their result, we should keep in mind that one can always find a possible biological mechanism to explain a given association.
  • Is this result something that we should pay any attention to?

Space limitations prevent us from examining all of the claims in the sweetened beverage paper. But let’s look at a few key points. First, as can be seen in the table below, the number of cases in the different levels of soda intake is quite small, especially at the two higher intake levels (14 and 16 cases), and, as a result, the risk estimates are unstable and imprecise.  

Second, a 2-fold relative risk sounds impressive, but we should note that a doubling of the risk is actually a modest increase. 

Third, and more important, if we look at how many cases of early-onset colorectal cancer can be attributed to a relatively high intake of sugary beverages, we see that the incidence rate among those drinking these beverages less than once per week was 8.4 cases per 100,000 person-years (45 cases/536,446 person-years), while among those drinking 2 or more sugar-sweetened drinks per day the rate was 11.6 cases per 100,000 person-years (16 cases/138,469 person-years).

In other words, if drinking sugary beverages does in fact increase the risk of early-onset colorectal cancer, it results in roughly 3 extra cases per 100,000 person-years. This is the “absolute risk” or “risk difference,” which indicates the population impact of the hypothesized risk factor.
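For readers who want to check this arithmetic, here is a minimal sketch in Python using the case counts and person-years quoted above. Note that the crude rate ratio it computes differs from the paper’s adjusted estimate of a doubling, and the wide confidence interval (a standard log-scale approximation) shows how imprecise estimates based on only 16 and 45 cases are.

```python
import math

# Crude incidence rates from the case counts and person-years quoted above
cases_low, py_low = 45, 536_446     # <1 sugar-sweetened drink per week
cases_high, py_high = 16, 138_469   # 2+ sugar-sweetened drinks per day

rate_low = cases_low / py_low * 100_000     # ~8.4 per 100,000 person-years
rate_high = cases_high / py_high * 100_000  # ~11.6 per 100,000 person-years
risk_difference = rate_high - rate_low      # ~3.2 extra cases per 100,000 person-years

# Approximate 95% CI for the crude rate ratio (log-scale method);
# the small case counts make this interval very wide.
rate_ratio = rate_high / rate_low
se_log_rr = math.sqrt(1 / cases_high + 1 / cases_low)
ci_low = rate_ratio * math.exp(-1.96 * se_log_rr)
ci_high = rate_ratio * math.exp(1.96 * se_log_rr)

print(f"rates: {rate_low:.1f} vs {rate_high:.1f} per 100,000 person-years")
print(f"risk difference: {risk_difference:.1f} per 100,000 person-years")
print(f"crude rate ratio: {rate_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```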

Finally, the role of intake of sugar-sweetened beverages in the development of colorectal cancer generally (not just in younger people) has been investigated extensively, and it has generally not been found to be associated with increased risk. 

At the very least, our examination of these few, but important points, raises doubts about the hypothesis that the intake of sugar-sweetened beverages accounts for the observed increase in colorectal cancer among younger people.

In spite of the very weak evidence and the many questions regarding the results, the authors concluded, “In this large prospective US cohort study of younger women, higher SSB consumption in adulthood and adolescence was associated with a substantially higher [emphasis added] risk of EO-CRC,” and that, “Our findings add unique epidemiologic evidence [emphasis added – GK] that SSB intake may partly contribute to the rapid increase of CRC in younger adults.” 

When assessing weak and uncertain associations that might potentially be relevant to the risk of a serious disease, epidemiologists are practiced at the art of acknowledging the limitations of their study, while, at the same time, underscoring the potential importance of their findings. 

The very length of the checklist tells us just how carefully the results of a single study need to be scrutinized and placed in the context of what is known about a disease in question.

We also need to remind ourselves that well-done studies whose results don’t lend themselves to sensational headlines don’t tend to get media attention. “If it bleeds, it leads.” 

The triumphs of epidemiology and public health I mentioned at the beginning of this essay spoke for themselves. They didn’t need to be embellished by exaggerating their potential significance, by appealing to public concerns about harmful components of our diet, or by providing environmental activists with ambiguous evidence that exposure to trace amounts of some chemical is having discernible health effects.

Over the years, a number of prominent epidemiologists have criticized risk factor epidemiology and “approaches that abstract single elements … from the complexity of the life and times of people and relate these to a single health outcome.” (See also here and here). But such critiques have done little to dampen the appeal of focusing on small, questionable effects – a focus that one epidemiologist wrote “marginalizes us as a field.”

Once we realize how big the gulf is between the punchline of an epidemiologic study and the many links in the causal chain that need to be filled in, we can begin to take a more skeptical attitude toward results that are a staple of our media ecosystem and that, for the most part, provide little by way of actionable information to improve our health and well-being. 

Geoffrey Kabat is a cancer epidemiologist and the author of Hyping Health Risks: Environmental Hazards in Daily Life and the Science of Epidemiology and Getting Risk Right: Understanding the Science of Elusive Health Risks. He can be found on Twitter @GeoKabat

This article previously ran on the GLP June 16, 2023.

Can coffee drinking prevent COVID infection?

Coffee has undergone a dramatic rehabilitation since it was designated as a “possible” bladder carcinogen by the International Agency for Research on Cancer (IARC) in 1991.

In the past twenty years, large, prospective epidemiologic studies have pointed to an impressive but still only suggestive reduction in the risk of developing numerous types of cancer associated with drinking coffee. After a brief spell as something to be shunned, coffee now appears to have potential health benefits.

Now, two recent papers suggest that coffee may provide protection against COVID-19 infection. It’s creating headlines across the internet. How solid is the evidence?

Researchers in Taiwan reviewed biochemical evidence that compounds present in coffee inhibit infection with SARS-CoV-2. They also reported the results of a small study indicating that drinking 1-2 cups of coffee per day “is sufficient to inhibit infection of multiple variants of SARS-CoV-2 entry, suggesting coffee could be a dietary strategy to prevent SARS-CoV-2 infection.”

The Taiwan paper published this year cites a paper by researchers at Northwestern University, which used data from the UK Biobank’s prospective epidemiologic study of roughly half a million participants (aged 37 to 73) to examine the association of dietary intake with the risk of COVID infection.   

The authors reasoned that factors including coffee, tea, red meat, fish, vegetables, and fruits, might influence immune function and, thereby, affect the risk of COVID infection and its sequelae. 

Information on lifestyle and dietary behaviors, as well as physical measurements and blood samples, was obtained at enrollment into the study (2006-2010). Individual COVID-19 exposure was estimated using the UK’s monthly positive case rate for specific geo-located populations. (It should be noted that only about 10 percent of study participants were tested for COVID-19 during the study timeframe.)

Few associations were seen with dietary factors. However, coffee intake showed a modest inverse association with COVID infection — that is, compared to people who did not drink coffee daily, those consuming 1, 2-3, or >4 cups per day had roughly a 10 percent reduction in risk of COVID infection.

Summarizing their results in the conclusion section of the abstract, the authors wrote:

In the UK Biobank, consumption of coffee, vegetables, and being breastfed as a baby were favorably associated with incident COVID-19; intake of processed meat was adversely associated.

Two things jump out immediately at any epidemiologist looking at the results for coffee. First, the reduction in risk associated with coffee drinking is small — on the order of 10 percent. As such, this result could be due to problems recalling one’s diet or to confounding by some other factor. Second, and more striking, there is no suggestion that drinking more cups of coffee further enhances the protection against infection.

Such a graded pattern, in which greater exposure produces a larger effect, is what is referred to as a dose-response relationship. While the absence of a dose-response relationship does not rule out the possibility of a causal association, its presence increases one’s confidence that an association may be causal.
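To make the contrast concrete, here is a small illustrative sketch in Python. The relative risks are hypothetical numbers of my own, except that the flat pattern mirrors the roughly 10 percent reduction reported at every coffee intake level.

```python
# Hypothetical relative risks by coffee intake category (illustration only).
# A causal protective effect would typically show risk falling step by step
# with dose (a gradient), not the same reduction at every level.
dose_response = {"0 cups": 1.00, "1 cup": 0.95, "2-3 cups": 0.85, "4+ cups": 0.70}
flat_pattern  = {"0 cups": 1.00, "1 cup": 0.90, "2-3 cups": 0.90, "4+ cups": 0.90}

def has_gradient(relative_risks: dict) -> bool:
    """Crude check for a dose-response gradient: RR falls at each dose step."""
    values = list(relative_risks.values())
    return all(earlier > later for earlier, later in zip(values, values[1:]))

print(has_gradient(dose_response))  # True: a consistent gradient
print(has_gradient(flat_pattern))   # False: no gradient beyond the first step
```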

At the very least one would expect the authors to comment on these obvious features of their data. To gloss over the lack of a dose-response relationship is disingenuous. 

Moreover, in the abstract and in the discussion section of the paper, where one would expect them to comment on these noteworthy aspects of the coffee data, the authors skip over these issues. They give an obligatory nod to the “limitations” of their study, but manage to avoid stating the obvious conclusion — that their data do not offer clear support for an association of coffee-drinking or other dietary factors with a risk of COVID infection.

In their discussion, they recapitulate,

Habitual consumption of 1 or more cups of coffee per day was associated with about a 10% decrease in the risk of COVID-19 compared to less than 1 cup/day.

Rather than commenting on what this finding might mean, they launch into a discussion of the general background that suggests the possibility of an inverse (i.e., protective) association:

Coffee is not only a source of caffeine, but contributes dozens of other constituents; including many implicated in immunity. Among many populations, coffee is the major contributor to total phenol intake, phenolic acids in particular. Coffee, caffeine, and polyphenols have antioxidant and anti-inflammatory properties.

The authors go on at length in this vein. 

There is a second, surprising omission in this study. In addition to over-stating — and under-analyzing — the association of coffee with risk of COVID infection, for some reason, the authors ignore several obvious exposures/behaviors with well-studied effects on health and potential effects on immune status, namely: smoking, alcohol consumption, physical activity, and body mass index (BMI). We know that they have information on these factors since they controlled for some of them in their analyses. 

These glaring issues should have been flagged in the peer review process, and the authors should have been required to address them before the paper was accepted for publication. 

This paper reminds us of several realities surrounding research into lifestyle, dietary, and environmental factors that affect — or might affect — our health.

There is enormous interest in these sorts of topics among the public at large, and therefore among journalists, and even scientists themselves. These are legitimate and potentially important questions for scientists/epidemiologists to address.

First, we have to keep in mind that many reasonable and highly appealing hypotheses fail to stand up when examined with high-quality data.

It is also important to keep in mind that early studies are often “quick and dirty,” that is, crude, and are often based on small populations. Often, they were not designed to address the question under investigation. Additionally, researchers can give undue weight to a borderline result that appears to support their hypothesis. 

Finally, even when the findings are extremely weak, there is often a tendency to “leave the door open” or even give the most positive/optimistic gloss on the finding, rather than coming out and saying that the data from our study do not provide particularly strong support for the notion/hypothesis — in this case, that coffee drinking reduces the risk of COVID infection.

That is what I believe happened in this study. The authors of the UK Biobank paper report their results, carefully eschewing precision, trying to have it both ways. They imply that their data on coffee convey a meaningful signal, while avoiding subjecting the data to rigorous examination. They try to paper over the yawning gap in the evidence by referring to biochemical effects of compounds present in coffee, and, at the same time, conceding that their findings “warrant independent confirmation.”

If a “medicine” as cheap and as easily available as coffee were in fact capable of reducing or eliminating the risk of infection with SARS-CoV-2, think of the enormous implications for ending, or curtailing, an evolving pandemic, which to date has caused an estimated 7 million deaths worldwide. Unfortunately, much more powerful and rigorous studies are needed before this is anything more than an appealing hypothesis.

Geoffrey Kabat is a cancer epidemiologist and the author of Hyping Health Risks: Environmental Hazards in Daily Life and the Science of Epidemiology and Getting Risk Right: Understanding the Science of Elusive Health Risks. Find Geoffrey on X @GeoKabat

Genomic scars: How centuries of surviving antisemitism have shaped Jewish genetics

Antisemitism persists, like a stubbornly metastatic cancer.

On January 6, 2021, at the attack on the U.S. Capitol, insurrectionist Robert Keith Packer wore a hoodie emblazoned with “Camp Auschwitz.” Below it read “work brings freedom,” the statement that met prisoners arriving at the death camp.

Synagogues are still threatened, with security beefed up around the United States. Certain celebrities are overtly antisemitic or post social media links to antisemitic films, seemingly too ignorant to recognize meaning or context. It is subtle, too: “inclusivity” this time of year does not seem to extend to the diversity of religions, especially to those who choose not to identify with one.

The Anti-Defamation League defines antisemitism as “hatred toward Jews.” The organization changed “anti-Semitism” to one word with a lowercase “s” because the capital “S,” coined by a German historian in 1781, referred to a group of related languages originating in the Middle East, not ancestry or religion. But Google and spellcheck have yet to catch up. I have to keep overruling the change as I write this.

It is unfortunate that I need to republish this article, updated slightly. The anniversary of Kristallnacht should bring chills to any group that is marginalized or the target of hate.

On November 9 and 10, 1938, Storm Troopers, Hitler Youth, and civilians rampaged through Nazi Germany. They shattered windows of more than a thousand synagogues, Jewish homes, and more than 7,000 businesses, and arrested some 30,000 Jews, cramming them into boxcars like doomed cattle and transporting them to concentration camps.

History books and the media chronicle the hatred and misplaced sense of superiority that fuels destruction of a people, like the remembrances of Kristallnacht. But evidence also lies in our genomes. That’s the case for the Ashkenazi Jews, whose ancestry traces back to Eastern Europe, not so very long ago. I am 100% Ashkenazi, my ancestors from what is now Ukraine.

A series of population bottlenecks

Nazi Germany’s failed “final solution” left marks in our DNA, from genes to genomes. A striking example is the elevated frequency of mutations in the gene behind Canavan disease in the U.S., which traces back to two of the 250 or so souls who survived the massacre in the Vilna ghetto in Lithuania by running into the forest, in September 1943. I cover that story in my gene therapy book.

Genome-wide signals of past antisemitism come from farther back, especially from the near-extinction of the Jews during the Crusades. A population bottleneck echoes today in the genetic similarity among modern Ashkenazim.

A bottleneck is a term in population genetics describing the drastic reduction of a group, followed by restoration of numbers from a few individuals. The long-term effect is to narrow the gene pool, amplifying the gene variants that persist. Modern cheetahs in Africa illustrate a classic bottleneck: their near genetic uniformity reflects population narrowing during the ice age that ended about 11,700 years ago, and poaching has further diminished their genetic diversity.
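Here is a toy simulation in Python, my illustration rather than anything from the papers discussed here, of what a bottleneck does to a gene pool: after the population regrows, the number of distinct variants cannot exceed the number carried by the founders, and chance loses a few more.

```python
import random

random.seed(1)  # reproducible illustration

# An idealized gene pool: 10,000 individuals, each carrying a distinct allele
population = list(range(10_000))

# A severe bottleneck leaves ~350 survivors (echoing the late-medieval
# Ashkenazi estimate discussed below); the population then regrows.
founders = random.sample(population, 350)
rebuilt = [random.choice(founders) for _ in range(10_000)]

print(len(set(population)))  # 10000 distinct alleles before the bottleneck
print(len(set(rebuilt)))     # at most 350 afterward; drift drops a few more
```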

The changeable nature of the DNA molecule enables researchers to estimate the dates of key genetic events in the past. Differences in DNA sequence among modern human genomes provide a backward-ticking “molecular clock,” possible because genes mutate at known and measurable rates. Different genes mutate at different rates, depending on nuances of the DNA base sequence that affect the likelihood of a replication error replacing one DNA base with another. Mutation accumulation takes time – hence, the molecular clock of evolution.
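As a rough sketch of the molecular-clock arithmetic (the mutation rate, generation time, and divergence value below are illustrative assumptions, not figures from the studies discussed here): dividing the fraction of differing sites between two lineages by twice the per-generation mutation rate estimates the number of generations since they shared an ancestor.

```python
MUTATION_RATE = 1.2e-8  # assumed mutations per site per generation (a common human estimate)
GENERATION_YEARS = 25   # assumed average generation time

def generations_since_split(per_site_differences: float) -> float:
    """Molecular clock: mutations accumulate along both diverging lineages,
    hence the factor of 2 in the denominator."""
    return per_site_differences / (2 * MUTATION_RATE)

diff = 6e-6  # assumed fraction of sites differing between two lineages
gens = generations_since_split(diff)
print(f"~{gens:.0f} generations ago (~{gens * GENERATION_YEARS:.0f} years)")
# ~250 generations ago (~6250 years)
```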

The Ashkenazim have survived an undulation of population bottlenecks, a choreography of hatred that has serially strangled the diversity of our gene pool since our origins in the Levant (Egypt, Cyprus, Iraq, Palestine, Syria, Jordan, Lebanon, and Turkey) during Biblical times. The so-called Jewish Genetic Diseases are a legacy of repeated episodes of persecution. Any of these dozen or so conditions, from A (Alport syndrome) to Z (Zellweger syndrome), appears when an individual inherits two copies of the same recessive mutation from parents who shared a recent ancestor, like a great-grandparent.

After the Pittsburgh massacre, I searched for further information on the genomic scars of antisemitism, and quickly found a compelling report in PLOS Genetics. “The time and place of European admixture in Ashkenazi Jewish history,” from Shai Carmi of the Hebrew University and his colleagues, is from 2017. They used computational tools to expose how a severe population crash 30 or so generations ago, followed by an infusion of Eastern European DNA, brewed the Ashkenazi gene pool of today.

The devastating effect of the Crusades

The Crusades, 1095-1291. Credit: Jewish Virtual Library

From 1095 through 1291, Jews, regarded as killers of Jesus Christ, faced choices: convert to Christianity, be killed, or commit suicide. Entire communities of Jews vanished. On a spring day in 1171 in Blois, France, for example, the entire, albeit small, Jewish part of the community was burned at the stake.

By the thirteenth century the small-scale murders of Jews had metastasized into mass killings. The fourteenth century saw wholesale evictions of Jews from Western Europe, and by the fifteenth century, very few were left in Germany and France. Many survivors dragged themselves in pitiful migrations to Eastern Europe.

Historical records are scant. Genetic evidence, also incomplete, compares modern human genomes, or their parts, against a backdrop of geographic origins and known migration patterns. The result is a glimpse of what might have happened, the narrative that fits the genetic evidence.

In a paper from 2014 in Nature Communications, Dr. Carmi and his team reported the sequencing of the genomes of 128 modern Ashkenazi individuals. The investigators found clues to the past in extensive DNA markers and sequences that are identical and linked on the same copy of a chromosome – a sign of shared ancestry. Such “identical-by-descent” chromosome regions reflect a population bottleneck 30 or so generations ago so severe that the most likely explanation is that at one point during the late-medieval period, only about 350 Ashkenazim existed!

Credit: Shai Carmi

Instead of analyzing entire genomes, the experiments described in the 2017 report focused on parts that vary. The researchers compared 252,358 single-DNA-base sites (SNPs) in the genomes of 2,540 Ashkenazim, 543 Europeans, and 293 Middle Easterners.

The computational analysis points to interbreeding (“admixture”) between populations from the Middle East and Southern Europe about 30 to 35 generations ago. But then came a severe bottleneck, at the time of the Crusades, followed by an infusion of genomes from Eastern Europe. The Ashkenazi gene pool formed from the Jews chased out of western Europe during the Middle Ages. We’re descended from the rejects of a supposedly civilized humanity.

And so the genomes of today’s Ashkenazim share six times as much DNA sequence with Eastern Europeans as with Southern Europeans, making our color-coded maps on Ancestry.com so alike that they extensively overlap or even become superimposable.

Ricki Lewis has a PhD in genetics and is a science writer and author of several human genetics books. She is an adjunct professor for the Alden March Bioethics Institute at Albany Medical College and a member of ELEVATEGenetics Acceptable Thresholds Committee with the non-profit Center For Genomic Interpretation. Follow her at her website or Twitter @rickilewis

This is a revised version of a piece that appeared on the GLP on October 2, 2020.

Sudan connection: Are Ethiopian Jews descendants of the ancient Israelites?

The conventional theory among historians today attributes the origin of the Ethiopian Jews to a separatist movement that branched out of Christianity and adopted Judaism between the fourteenth and sixteenth centuries (e.g. Quirin, 1992a, 1992b; Shelemay, 1989; Kaplan, 1995). The theory essentially holds the Ethiopian Jews to be the descendants of indigenous non-Jewish Ethiopians, and their belief in ancient Jewish descent to be just a matter of myth and legend. Proponents of the theory have been praised for being “thought-provoking” (Waldron, 1993) and for “demythologizing” (Gerhart, 1993) the history of the group. Consequently, scholars, and historians in particular, have been steered to ignore the compelling evidence for the ancient origins of the group.

I will present the historical evidence which, with the support of crucial genetic findings, strongly suggests that today’s Ethiopian Jews are the descendants of an ancient Jewish population. This study reinforces recent reviews of the DNA studies of the Ethiopian Jews (Entine, 2007) that have already pointed to major flaws in the traditional historical perspective. Furthermore, the latest research suggests a strong historical affiliation between the Ethiopian Jews and Northern Sudan that is little discussed in the literature. The paper analyzes the history of the Jews of Ethiopia in the context of their peripheral geography in the Lake Tana area and the Semien.

The Beta Israel

Until they were forced to leave Ethiopia in the 1980s, Ethiopian Jews lived in small villages scattered in the northwestern region of the Ethiopian plateau around Lake Tana and in the Semien mountains area. They traditionally referred to themselves as the Beta Israel, and were referred to by other Ethiopians as Falasha, meaning “strangers” in the indigenous Semitic language Ge’ez. Thus, the term Beta Israel will be used throughout this article to label the community.

The community has venerated the Old Testament of the Ethiopian Bible and its religious language has been Ge’ez. Today, the Beta Israel show closest resemblance in external cultural characteristics to their surrounding Habash, i.e. the ethnic category that encompasses the Amhara and Tigray-Tigrinya populations. And although both the Habash-Christians and the Beta Israel claim royal descent from the time of King Solomon and Queen Sheba, an important difference exists (Entine, 2007, p.148-9). While the Christians claim descent from King Menelik—the offspring of Solomon and Sheba in Ethiopia—the Beta Israel claim descent from first-generation Israelites from the tribe of Dan who some believe accompanied Menelik as guards of honor.

To start with, the geographical definition of Ethiopia in historical sources must be addressed for it has distorted major studies on the history of the region. It wasn’t until recently that scholars realized that the name Ethiopia, in ancient and medieval sources, denoted the Nile valley civilization of Kush, also known today as ancient Nubia, in what is today Northern Sudan. On the other hand, the geographical area that encompasses the modern country of Ethiopia had in the past housed the ancient kingdom of Aksum, which developed in the northern parts of the plateau, and was sometimes referred to as Abyssinia. It is also worth mentioning that all of the Biblical, and a significant portion of the ancient, references to Ethiopia, or Kush, predate the establishment of Aksum in the first century CE.

As I have argued in a former paper (Omer, 2009a), analyzing the history of the Beta Israel within the boundaries of the contemporary country of Ethiopia is a problematic approach, because the political boundaries of the modern-day countries of Sudan and Ethiopia were only defined towards the early twentieth century. However, even after the boundaries were specified, the Beta Israel settlements remained at the periphery, far from the interior of today’s Ethiopia and close to the western border region with Northern Sudan.

The political boundaries between the two states had remained, for the longest part of history, fluid and undefined in many areas. It was mostly the twentieth-century borderline that defined the contemporary identity of the Beta Israel population as Ethiopian, and distinguished them from the populations of the flat plains of the Sudan, to the west. In other words, the Beta Israel have always represented a periphery population that, in the context of history, can never be seen as an integral element of today’s Ethiopia.

Theories of history

Before proceeding further, I will present a brief overview of the hypothesis that the Beta Israel emerged out of Ethiopia’s Christianity. The hypothesis is best argued by Quirin (1992a) and Kaplan (1995). Quirin’s argument is based on the premise that the Beta Israel identity has “emerged out of a differential interaction with the Ethiopian state and dominant Abyssinian society” (1998, p. 1). Kaplan (1995) follows the same line of argument and concludes that “their ‘Judaism,’ far from being an ancient precursor of Ethiopian Christianity, developed relatively late and drew much of its inspiration from the Orthodox Church” (p. 157). They essentially argue that the religious substance of the Beta Israel has been adapted from the Jewish character already found in Ethiopia’s Christianity. Salamon (1999) also emphasizes the Christian roots of the Beta Israel, yet she leaves the question of the group’s actual origins open to question. She argues that they “constructed their identity in reference to their Christian neighbors, rather than to a Jewish ‘other’” (p. 4).

A lack of neutral analysis of the shared similarities between the religious traditions of the Beta Israel and those of the Ethiopian Church has been a major problem in the study of the group. Having said this, it should be noted that Ethiopian Christianity appears to be more influenced by Judaism than vice versa. Getatchew Haile (as cited in Tibebu, 1995, p. 11) states:

Only a Christianity of a nation or community that first practiced Judaism would incorporate Jewish religious practices and make the effort to convince its faithful to observe Sunday like Saturday. In short, the Jewish influence in Ethiopian Christianity seems to originate from those who received Christianity and not from those who introduced it. The Hebraic-Jewish elements were part of the indigenous Aksumite culture adopted into Ethiopian Christianity.

Also, as Kessler (2012) points out, the theory does not explain how the Beta Israel gained an intricate comprehension of Jewish material encompassing pre-rabbinical principles based on the books of Enoch and Jubilees. In essence, the sum of Christian influences in Beta Israel religious traditions, which we should take great care not to exaggerate, may reflect nothing more than the struggles of a Diaspora Jewish community to preserve whatever was left of its steadily vanishing heritage. Poverty, instability, persecution, and illiteracy across the centuries would certainly have caused the loss of any significant authentic Jewish scriptures. However, this should not deny the survival of the fundamental aspects of Jewish identity and religion among the Beta Israel.

Moreover, a basic question that Kaplan (1995) and Quirin (1998) do not address is: Why would Christians become Jews? As Teferi (2013) explains, converting to Judaism requires the abandonment of the essence of Christianity (p. 179), which makes a conversion unlikely. The only reasonable suggestion, Teferi indicates, as to why Christians would have become Jewish is that they were forced to convert (p. 180), which is also unlikely.

Worth mentioning are the accounts of the sixteenth century Christian monks, Abba Saga and Abba Sabra, which have been of major importance to proponents of the traditional Christian origin theory. Both Quirin (1992a) and Kaplan (1995) present the two monks as central figures in establishing the religious institutions of the Beta Israel. This is based on the idea that the monks must have been responsible for introducing the monasticism and the clergy system found within the Beta Israel traditions.

This aspect of the argument is problematic for three reasons. First, there is nothing concrete to confirm the historicity of the two monks. Second, a different sixteenth century report suggests that many Beta Israel practiced monasticism prior to the alleged arrival of the two monks (Teferi, 2013, p. 185). The report refers to Jews from Abyssinia (or Aksum) and “their books, their priests and their monks” (as cited in Teferi, 2013, p. 185; Norris, 1978). Third, as stated by Kessler (2012), “Quirin tends to confuse the basic tenets of the Beta Israeli Falashas with the rites and forms of their practices” and “does not appreciate that the religion ‘as it has come down to us’ [as cited in Quirin, 1992a, p. 68]. In essence this means a belief in the oneness of the Almighty, based on the teachings of the Torah (Orit), and a faith in the coming of the Messiah,” which “more than ritual differences – important though they are – distinguish Judaism from Christianity and could not conceivably have been invented by the rebel monks” (Preface to Third, Revised, Edition section).

The Beta Israel and Kush (Northern Sudan)

In order to trace the origin of the Beta Israel, we must start in Northern Sudan, where the oldest evidence for Jews in the Horn of Africa points. This is important not only because of the geographical proximity of the Beta Israel to Kush, but also because of the immense evidence that links the two.

The history of the Beta Israel in the context of their neighboring ancient land of Kush, in what is today Northern Sudan, has surprisingly not been the subject of a serious investigation. Kessler (2012) is one of very few contemporary scholars who have attempted to elaborate on the connection between the Beta Israel and Kush, and he has recognized the tendency among scholars “to underrate the significance of the impact of ancient Egypt and of Nubia or Meroë” (Preface to Third, Revised, Edition section) on the development of the African Horn region.

References to Kush appear in the Bible as well as in extra-Biblical narratives and traditions. The Bible directly mentions the presence of Jews in Kush; the book of Zephaniah states (note that Ethiopia in ancient sources referred to Northern Sudan, not to the modern country of Ethiopia): “From beyond the rivers of Ethiopia [Kush], My worshipers, My dispersed ones, Will bring My offerings” (3:10, New American Bible). Psalm 87:4 and Isaiah 11:11 make the same indication.

Also, the Bible identifies a number of important Biblical characters as Kushite. In Numbers (12:1), the wife of Moses is said to have been a Kushite. Zipporah, Moses’ wife known by name, is commonly described in Biblical traditions as being of Kushite ancestry. In Ezekiel the Tragedian’s Exagoge (60-65), Zipporah tells Moses about her motherland:

Stranger, this land is called Libya [Africa]. It is inhabited by tribes of various peoples, Ethiopians [Kushites], black men. One man is the ruler of the land: he is both king and general. He rules the state, judges the people, and is priest. This man is my father {Jethro} and theirs.

The Midrash Book of Jasher (Hapler, 1921) provides a detailed account of Moses’ journey to the southern kingdom of Kush, including how he gained the admiration of its people (p. 132):

Twenty-seven years old was Moses when he began to reign over Cush [Kush], and forty years did he reign. And the Lord made Moses find grace and favor in the sight of the children of Cush, and the children of Cush loved him exceedingly.

What makes Kush central to the study of the Beta Israel is that it defined the group’s identity until medieval times. Rabbi David ben Solomon ibn Zimra, the sixteenth century Chief Rabbi of Egypt, whose acknowledgment of the Beta Israel as rightful Jews was later of significant importance in their recognition by world Jewry, states: “those [Jews] who came from the Land of Cush are without doubt of the tribe of Dan…” (as cited in Lenhoff & Weaver, 2007, p. 303).

The identification of the Beta Israel with Kush is best illustrated in the writings of the ninth century Jewish scholar and traveler Eldad Ha-Dani. Eldad identifies himself as a citizen of an independent Jewish state “beyond the rivers of Cush” (Hapler, 1921, p. 49). He also identified himself as being of Israelite origin from the tribe of Dan, and thus his last name, Ha-Dani. Eldad’s geographical affiliation and identification with the tribe of Dan strongly corroborate what is known of the Beta Israel (Omer, 2009b; Epstein, 1891; Schindler & Ribner, 1997, p. 2; Teferi, 2013, p. 188-9; Schloessinger, 2009, p. 1-9).

Moreover, Eldad’s description of the ethnic groups and geography of the African Horn region appears to be fairly accurate and reinforces the historicity of his accounts (Borchardt, 1923-1924). In the course of his narrative, he elaborates on the migration story of his Israelite ancestors—the tribe of Dan—from the time when they left the “land of Israel” (Hapler, 1921, p. 52), passed through Egypt, and finally settled down in “the land of Cush” (p. 53). He states: “The inhabitants of Cush did not prevent the children of Dan from dwelling with them, for we [the children of Dan] took the land by force” (p. 53). The fact that Eldad identifies his people with Kush, and not with Aksum, demonstrates the very strong historical identity bond that ties the Beta Israel to Northern Sudan. And although scholars commonly view the affiliation with the tribe of Dan in the context of other world myths about the ten lost tribes of Israel (e.g. Segal, 1999; Schwartz, 2007, p. ix), some scholars have proposed an actual Samaritan origin for the Beta Israel on the basis of a variety of religious and linguistic evidence (Shahîd, 1995, p. 94-5; Leonhard, 2006, p. 39-42). Hence, tribal Israelite descent amongst the Beta Israel is not unlikely.

Another important Jewish traveler, the twelfth century Benjamin of Tudela, writes about independent Israelite cities in mountains in Eastern Africa—in a clear reference to the Beta Israel settlements in the Semien mountains—from which the inhabitants “go down to the plain-country called Lubia [or Nubia]” (Benjamin as cited in Kaplan, 1995, p. 50). Though Benjamin does not refer to Kush, he uses the more medieval name of Northern Sudan, Nubia.

A third, no less significant source is the fifteenth century scholar Obadiah ben Abraham of Bertinoro, who discusses trade relations between the Beta Israel and Kush: “They believe themselves to be descendants of the Tribe of Dan, and they say that the pepper and other spices which the Cushites sell come from their land” (as cited in Abrahams & Montefiore, 1889, p. 195).

Furthermore, archeological findings indicate strong communication ties and travel between Aksum and Kush, starting from the early stages of the Aksumite kingdom (Fattovich, 1975, 1982; see also Phillipson, 1998, p. 24). As mentioned previously, the civilization of Kush predates that of Aksum by more than fifteen hundred years. While Kush was already a flourishing kingdom by 1700 BC, and rose as a Mediterranean empire in the eighth century BC, Aksum did not emerge as a recognizable kingdom until the first century CE. Although the archeology of pre-Aksum Ethiopia reflects a dominant South Arabian culture, the later archeology of Aksum shows stronger and direct cultural influences from Kush (Pirenne, 1967; Fattovich, 1982, 1975).

Additionally, during the Meroitic period, when Kush was centered at Meroe (270 BC to 400 CE), the Kushites conducted extensive building activities east of the Nile. Starting from around the first century, constructions included numerous hafirs and other water sources (Welsby, 1998, p. 128), which indicates the intensification of human movement between the kingdom of Kush and the then-emerging kingdom of Aksum. It should also be noted, in this regard, that Aksum’s only direct land access to the Mediterranean was through the Sudan.

Kessler (2012) has identified a number of traditional Beta Israel crafts and production practices that are historically associated with the people of Kush: domestically made pottery, cotton weaving, basketry, and leatherwork. More important is the traditional role of the Beta Israel as ironsmiths, which corresponds with Meroe’s distinctive role in ancient history in the discovery and manufacture of iron. Being rich in ore, the Kushites employed iron in all the different industries of the kingdom, including agriculture (Asante, 2012, p. 96).

Given all these indications, it becomes likely that the Jews who entered the kingdom of Aksum, which did not mature until the first century CE, came from Kush. Even though such speculations still fall short of explaining the exceptional affiliation between the Beta Israel and Northern Sudan in medieval references, they pose crucial inquiries regarding the origin of this group.

Finally, in the context of our search for the historical, and possibly genealogical, connections between the Beta Israel and Northern Sudan, an important point regarding the physical features of the group must be made. Contrary to what is commonly assumed, and as I stressed formerly in an essay (Omer, 2012, p. 1), the Beta Israel do not really look like their surrounding non-Jewish Ethiopians. As a Northern Sudanese myself, I have been able to notice that most of the Beta Israel look closer in physical appearance to the people of Northern Sudan than they do to the non-Jewish Ethiopians. In other words, although a non-Jewish Ethiopian can easily be distinguished from a Northern Sudanese by looks, such a distinction can rarely be drawn between an Ethiopian Jew and a Northern Sudanese.

On the other hand, and despite the difficulty people from other places are expected to have in distinguishing specifically East African physical features, the Western scholar Leslau (1979), who visited Ethiopia in the 1940s, was able to notice some “facial traits” (p. xii) among the Beta Israel, which he mistakenly associated with the stereotypical look of the “Oriental Jew.”

The Beta Israel and Aksum

The traditional Christian origin theory argues that the contemporary Beta Israel developed separately from the ancient Jews of Aksum. Quirin (1992b) states: “The ‘Falasha’ emerged as an identifiable, named group during the period from the fourteenth to the sixteenth century” (p. 203); in agreement, Kaplan argues: “Nothing in the written sources can be interpreted as reliable evidence for the survival of a distinct well-defined Jewish community in Ethiopia for the period from the seventh to the 14th century” (1995, p. 55).

However, the historical evidence strongly contradicts Quirin’s and Kaplan’s conclusion. To start with, it has been confirmed with certainty that Judaism in Aksum predates the introduction of Christianity in the fourth century. Kaplan (1995) admits that “the linguistic evidence would seem to clearly indicate that Jewish influences in Ethiopia were, at least in part, both early, i.e., Aksumite, and direct” (p. 19). Linguistic evidence found in translations of Biblical material into Ge’ez has already shown that a Jewish society had entered Aksum sometime between the first and fourth centuries (Kaplan, 1995, p. 13-20). As a result, the influence of Judaism on the Ethiopian Orthodox church has been overwhelming and has no counterpart in the contemporary Christian world. Traditions including circumcision on the eighth day after birth (Ullendorff, 1956), the historical upholding of the Saturday Sabbath (Ullendorff, 1968, p. 109-13), the architectural division system of the Ethiopian church that mimics Solomon’s Temple (Ullendorff, 1968, p. 87-97), as well as a diversity of other features, testify to a powerful former Jewish culture.

Additional evidence comes from the sixth century reign of Kaleb, the fervently Christian king of Aksum who adopted vigorous policies to convert the non-Christian inhabitants of the kingdom. In the early decades of the century, he restored Christianity to South Arabia by defeating its Jewish king. Unfortunately, there are no sources that elaborate on his domestic policies towards the Jews; however, a few but significant sources provide crucial indications of the origins of the Beta Israel settlements in the Lake Tana area and the Semien. Some of these sources mention that Kaleb had two sons, one named Gabra Masqal and another named Beta Israel (Kaplan, 1995, p. 39; Getatchew Haile, 1982). Beta Israel is said to have unsuccessfully attempted to block Gabra Masqal’s path to the throne. In this account, it seems that the name Beta Israel was used as propaganda to symbolize the disobedience of the Jews to conversion. (Of course, the idea that Kaleb himself was a Beta Israel is also subject to speculation.)

Also during the sixth century, the Alexandrian traveler Cosmas Indicopleustes reported on military conflicts between an Aksumite king (probably Kaleb) and enemies in the “Semenai [Semien]” (McCrindle, 1897, p. 67); this may indicate that Jewish resistance movements were already clustered in the indicated area. Another reference from medieval Abyssinia speaks about events taking place around the sixth century and mentions the name “Falash[a]” (as cited in Kaplan, 1995, p. 39; also see Varenbergh, 1915-16).

By the mid-sixth century, Aksum had begun to decline and struggled to control its northern and peripheral territories. By the late decades of the century, the frontier regions to the north and west of Lake Tana were mostly independent and may already have been inhabited by Jews. From the seventh to the fourteenth centuries, these areas remained isolated from Aksum, which seemingly explains the decline in references to Jews in Aksumite sources.

Eldad’s reference to his independent Jewish state “beyond the rivers of Cush” (Hapler, 1921, p. 53) during the ninth century seems to point to the same Lake Tana area of the Beta Israel. Writing in the twelfth century, Benjamin of Tudela appears to corroborate Eldad’s account by mentioning independent Israelite cities in Eastern Africa: “And there are high mountains there and there are Israelites there and the Gentile yoke is not upon them” (as cited in Kaplan, 1995, p. 50). It appears that by “mountains” Benjamin is referring to the Semien.

Finally, the Jewish queen Judith, who is deeply rooted in Ethiopian history and traditions, who is described as coming from a region “west” of Aksum (Trimingham, 1952, p. 52), and who ruled Aksum sometime in the late ninth or tenth century, is yet another strong indication of the presence of Jews in this region prior to the fourteenth century. The Christian Zagwe dynasty that succeeded Judith to the throne and ruled until the late thirteenth century is widely described as being of Jewish roots (Briggs, 2009, p. 18). Judith’s possible Jewish background is further reinforced by the fact that the dynasty governed from Lasta, which has been a vigorously Jewish area.

Given the various references for Jewish presence in the region across the centuries, there appears to be no persuasive reason to assume that the Beta Israel have emerged as recently as the fourteenth to sixteenth centuries. It is also very unreasonable to suggest that all such historical references to Jewish presence in the designated regions, which greatly correspond with the historical and cultural context of the contemporary Beta Israel, are coincidental.

Genetic evidence

In recent years, DNA studies have shown the conventional historical theory endorsed by Kaplan (1995) and Quirin (1992b) to be highly unreliable. An in-depth analysis of genetic research by Entine (2007, p. 149) states:

The Falasha may have been a rump group that remained true to its historical roots when the Ethiopian king converted to Christianity in the fifth century. For centuries, the Black Jews maintained separate traditions from their Christian countrymen. While most Ethiopians ate raw meat, drank heavily, and rarely washed, the Falasha cooked their meat and were scrupulously sober and relentlessly hygienic.

Although the precise relationship between the ancient Jews of Aksum and the contemporary Beta Israel community has not been clearly understood by geneticists, studies have already confirmed some historical continuity within the group (Entine, interview, July 7, 2013). Thus, the widely accepted theory among geneticists, as proposed by Entine (2007) and supported by subsequent studies (Saey, 2010; Ostrer, 2012), suggests that the group was formed as early as the fourth to sixth centuries. The fact that studies found the Beta Israel to be genetically so divergent from other Jewish communities (e.g. Lucotte & Smets, 1999) may suggest that the group was initiated by Jewish settlers who converted a majority of local people to Judaism more than two thousand years ago (Begley, 2012). Accordingly, Entine (2013) concludes, “That would mean that Ethiopian Jewry predates Ashkenazi Jewry” (interview, July 7); however, this does not necessarily suggest that Beta Israel descent can be specifically traced to the Fertile Crescent.

Moreover, despite the fact that the genetic distance between the Beta Israel and other contemporary Jewish groups is large, studies have already cited cases of Ethiopian Jews with genetic markers common in other Jewish populations (Hammer et al., 2000).

The wide range of historical evidence available on the Beta Israel points to the survival of an ancient Jewish heritage within the group. Furthermore, sources suggest the establishment of the Beta Israel settlements in their contemporary Lake Tana locations to be much earlier than the fourteenth to sixteenth centuries period some historians have suggested. In addition, the African ethnicity of the Beta Israel appears to be more complicated than just Ethiopian, and seems to reflect a strong Northern Sudanese element as corroborated by the wide range of mentioned historical observations and the peripheral geography of the population.

We need more historical investigation on the origin of the Beta Israel that is not dictated by concepts of the conventional historical theory. Recent genetic studies have already confirmed an ancient heritage within the contemporary Beta Israel population. Hence, the approach suggested by the traditional historical theory in simplifying the origins of the group to local and former-Christian converts is neither geographically persuasive nor convincing from a historical point of view.

Ibrahim M. Omer is a Research Assistant at California State University Monterey Bay, Visual & Public Arts Department, Museum Studies Program.

View a complete list of REFERENCES for this article.

This article previously appeared on the GLP Jul 22, 2013.

Viewpoint: Anti-GMO activists, from Organic Consumers Association to Joe Mercola to Vandana Shiva, formed an alliance. Why this is good news for biotech and science supporters

Alliances and networks are the new game plan among anti-GM activists. Recently, Organic Consumers Association [read GLP profile] head Ronnie Cummins announced that his organization was joining with Joe Mercola at Mercola.com [read GLP profile], Navdanya (run by Indian philosopher and technology rejectionist Vandana Shiva [read GLP profile]), Organic and Natural Health Association and Regeneration International (itself an amalgam of hundreds of activist groups).

It’s a stunning development. All of these groups are notorious not just for anti-GMO, pro-organic activism, but for their many false claims that genetically modified (or gene-edited) crops pose health dangers. Mercola is known for his ‘health advice’ website that serves as a vehicle to sell supplements and “natural” cures based on alternative medicine that are far outside the mainstream. Shiva is more recently known for her activism and role in the economic, agricultural and political disaster that unfolded this year in Sri Lanka, which under her guidance banned agro-chemical fertilizers and many key crop pesticides in a since-abandoned plan to go all organic.

All of them have dabbled in fringe science, especially in recent years, including vaccine rejectionism — not just of COVID vaccines but childhood shots as well. The New York Times for one labeled Mercola “The Most Influential Spreader of Coronavirus Misinformation” online.

So why this alliance? According to the OCA, it’s to “defend ourselves, farmers and consumers alike, from the increasing assault and collateral damage of ‘profit-at-any cost’ genetic engineering products and practices (foods and crops, ‘gain of function’ experiments, bioweapons, gene-therapy vaccines, geo-engineering, et al.) that are promoted by the global elite as ‘solutions’ to our problems.” Specifically, the targets are (in the words of the alliance):

  • “Fake meat and dairy” or “precision fermentation” and “plant-based protein,” which OCA says is “basically unregulated and unlabeled.”
  • Synthetic biology, “greenwashed and packaged as the solution to world hunger, rural poverty and the climate crisis.”
  • The “Great Reset,” of which synthetic biology is a part, is under attack as a tool of anti-organic farming interests. The “great Resetters” include Bill Gates, Jeff Bezos, the World Economic Forum, “Silicon Valley,” Bayer, Syngenta (et al.) and “transnational food giants.”
  • Creating an alternative narrative to counteract the pro-science misinformers who OCA says are trying to “eliminate the animal agriculture and animal husbandry practices carried out by a billion small farmers and pastoralists across the globe.”

Alliance throws its support behind meat production rather than plant-based alternatives

Plant-based meats, a widely embraced and fast-growing part of the food sector, are a major target for two reasons: 1) some of the products, such as the soy-based Impossible Burger, utilize genetically engineered components (e.g., yeast-derived heme); and 2) the alliance contends that shifting to a more plant-based diet, which many nutritionists recommend and which would reduce greenhouse gas emissions, would be disastrous. Who would benefit from this transition? According to Cummins, “megalomaniac Great Resetters, power drunk and flush with cash after COVID-19 … are ignoring the fact that the world’s 70 billion farm animals are essential to rural livelihoods, human nutrition, and the health of the soil (animal manure in compost and proper grazing).”

Cummins also blames what he calls “industrial agriculture” for “food related chronic diseases,” without stating what those are, let alone providing evidence for his broad-brush claim. The World Health Organization, like other health associations, does in fact define the chronic diseases related to diet: obesity, diabetes, heart disease and certain cancers. Most of these are ascribed to a sedentary lifestyle combined with eating too much fat, including meat fats, and over-processed sugar-containing foods. How boycotting a soy-based burger would address these health issues is unclear.


Cummins correctly notes that farm animals “properly raised and nurtured” are needed to regenerate carbon, nitrogen and water. However, the key point is how they are “properly raised and nurtured.” Proponents of genetically modified livestock (which is not the same thing as “fake meat”) look to alter traits to protect animal welfare, help decrease animal waste, reduce the acreage necessary to raise animals, or increase their meat yields (traditional stock raising methods do this, too).

The pro-meat alliance also does not address rising concerns about the amount of land that meat and dairy producing animals require. Land that does not have to be turned over to animal rearing can be preserved as wildland, or used for other crops — a key issue in sustainable agriculture that these activists ignore entirely.

What about climate change issues?

When it comes to threats from global warming, Cummins and his fellow travelers appear oblivious to the current science. He claims that “50% of greenhouse gas emissions come from industrial agriculture,” which to him means all non-organic farming. That’s factually incorrect. The actual share in the United States is 11% (globally, it is about 24%). In fact, according to the US Environmental Protection Agency, no single sector is responsible for more than 27% of greenhouse gas emissions.

Much of what Cummins says to justify this coalition is boilerplate pro-organic propaganda — and is, ironically, harmful to the growing but still fragile organic industry. Like many organic advocates, Cummins promotes the belief that organic farms are necessarily small while conventional farms are mammoth (or “industrial,” in his ideological terminology). But according to 2020 USDA data, most US farms are small to medium-sized. The average US farm runs about 444 acres, while the average organic farm is about three-quarters that size, roughly 333 acres.

Ronnie Cummins. Credit: Global Earth Repair Foundation

Ironically, one of the fastest-growing segments of the farm economy in the developed world is so-called “Big Industrial Organic” farming — which is not a bad thing, as large-scale farming can offer a range of production and sustainability advantages in some circumstances. Two big corporate organic growers in California — Earthbound Farms and Grimmway Farms — dominate the U.S. market for organic produce. Earthbound grows 80 percent of the organic lettuce sold in the U.S. Proponents of large-scale organic argue that the scale of a farm has little bearing on its adherence to organic principles — which is probably true for the most part. Cummins’ broadside against ‘industrial farming’ is thus not only factually incorrect; it is an attack on the heart of organic farming in developed countries.

So, what is the hyped “Great Reset” that Cummins et al. fear so much?

According to this new initiative, Bill Gates and his like-minded lemmings are doing a lot more harm than just promoting “industrial agriculture” and genetic engineering of seeds. According to the World Economic Forum, the “Great Reset” is an economic plan to recover from the COVID-19 pandemic. But according to the new Cummins alliance, echoing right-wing conspiracy postings popular on the internet, a global elite is using the coronavirus pandemic to enforce radical social change, including forcing farmers to adopt genetically modified seeds.

That’s boilerplate conspiracy theory, where the technology-rejectionist far left joins the conspiracy-minded far right. The actual Great Reset is far more benign, and potentially of great benefit. As the WEF outlines:

This initiative will offer insights to help inform all those determining the future state of global relations, the direction of national economies, the priorities of societies, the nature of business models and the management of a global commons. Drawing from the vision and vast expertise of the leaders engaged across the Forum’s communities, the Great Reset initiative has a set of dimensions to build a new social contract that honours the dignity of every human being.

According to WEF, some “reset” examples include Tanzania recycling plastic into face shields for healthcare workers, teachers in India trying to keep education going in the face of lockdowns, and switching to renewable energy. For agriculture, the “reset” does include shifting our diets slightly away from meat-based food sources, citing the 15% of global greenhouse gas emissions that come from livestock and the problems posed by zoonotic diseases. It also looks for ways to eliminate plastics from the soil and to improve the performance and environmental record of aquaculture. None of these Great Reset initiatives is particularly controversial or radical — unless you are a right-wing or left-wing fringe conspiracist.

Who are the key players in this ideologically fringe but popular new alliance?

As for Regeneration International, it too is a confederation of organic, agroecology and natural health advocates, formed in 2014 but growing significantly in recent years as the ‘regenerative agriculture’ movement has gone mainstream and global. Its steering committee members are the fringiest of the fringe, including Cummins, Shiva, Hans Herren of the Millennium Institute, Steven Rye of Mercola, and Andre Leu of Organics International (aka IFOAM). The group now includes 360 organizations in 70 countries.


Will this alliance gambit work?

We are in tumultuous times in food and farming, with many societal and economic shifts. The war in Ukraine, supply chain disruptions and the resulting food price spikes have put the organic movement back on its heels. Shiva’s failed Sri Lankan experiment — the now-fallen government, suffused with organic fervor, banned many widely used (and effective and safe) crop chemicals almost overnight, crushing its farm economy and sparking a recession that could last for many years — has been sobering to agricultural policymakers. With governments thinking more pragmatically about food production, “organic only” political movements in Europe, Africa and even the US are on the defensive.

Recent droughts and increased concerns about climate change have also prompted many governments, even in anti-GMO Europe, to recalibrate the risks and rewards of classic genetic modification and new techniques such as CRISPR gene editing. In the European Union, scientists and even a few government officials have endorsed the safety and benefits of GM crops and food, although EU Parliament and member-state rules remain strictly anti-GM. Climate change concerns and sustainability trade-offs are expanding the conversation about which agricultural practices make the most sense in a fast-changing world. Countries including Kenya, which recently approved GM crops after a 10-year ban, and India, which just approved its first GM crop in 20 years, are among many now looking at further loosening regulations to permit the growing and import of genetically modified crops.

Considering the impact of the recent economic upheavals, this conspiracy-soaked organization seems out of step with current economic and sustainability trends; it smacks of desperation, not innovation. Why is this happening now? It may be a defensive move to prevent further erosion of the organic-only movement. It’s not yet clear how the last few years of COVID and war have affected the finances of anti-biotechnology, organic-promoting campaigners, but according to US Internal Revenue Service tax reports, many of these groups have shown steep declines in fundraising over the past few years. The Organic Consumers Association, for example, reported revenues of $2.8 million for fiscal 2020 (the most recent data available to the public), down from more than $4 million as recently as 2016, when the anti-GMO movement was more influential. Scare-mongering on food and agriculture may have hit a wall. Regeneration International reported revenues of just $197,800 in 2020 (and a negative income stream), down more than half from $407,000 the previous year.

It’s not clear that all innovation rejectionists are in retreat. Mercola.com, which operates on the conspiracy fringes and doesn’t file the IRS 990 (the standard tax disclosure form for non-profits), may even have leveraged the ‘disinformation economy’ to improve its finances. But there is no doubt this is a time of ferment, and influential conspiracy-promoting groups are rejiggering to survive. That means we can expect an even louder attack on genetic innovation in agriculture and food in the coming months.

As the failed experiment in Sri Lanka shows, this coalition appears misguided and ill-timed. With food prices spiraling upward and climate change concerns growing, the goals of the alliance come across as radical and potentially destabilizing. Yet the views of the Organic Consumers Association, Shiva, Mercola and the like are increasingly common among organic proponents. This initiative is just the latest example of the movement’s extreme views: out of step with mainstream science, particularly on climate change, and aligned with disinformation proponents. Journalists, the public and policymakers around the world are noticing.

Andrew Porterfield is a writer, editor and the agriculture editor of the Genetic Literacy Project. He has worked with numerous academic institutions, companies and non-profits in the life sciences. Follow him on Twitter @AMPorterfield

This article first appeared on the GLP October 25, 2022.

Viewpoint: Dissecting anti-science smears — Center for Media and Democracy spreads disinformation about food and science communication

At a time when democracy is threatened from many directions and the media can be either a potent remedy or part of the problem, the Madison, WI-based Center for Media and Democracy could be especially relevant. Their cause seems reasonable, and as an independent academic scientist, consumer and American, I applaud some of their efforts.

Sadly, they have targeted me and other scientists for harassment. They have posted pages that use omission, innuendo and inference to portray scientists they wish to defame in a false, negative light.

Their website about me does not mention what I actually do, the awards I have won for teaching, research and outreach excellence, my pursuit of opportunities for under-represented students, or my efforts to coach and promote faculty career progression.


Look at the manipulation: the omission, the twisting, the extrapolation. This is what the Center for Media and Democracy does to target a scientist they don’t want teaching inconvenient science.

For several years I have contacted CMD with polite requests to amend the information, something I do now and then. No response from them.

I only posted this again because CMD’s web page is being used by anti-vaccine, anti-5G, anti-GMO trolls to attempt to remove me from important conversations.

Take a look below. Here are CMD’s wild fabrications and silly extrapolations that time has proven to be false. Let’s look at this point-by-point.

Why would anyone trust them? Why would anyone donate to support this?


Kevin Folta is a professor, communications consultant and speaker. He hosts the Talking Biotech and GLP’s Science Facts and Fallacies podcasts. Views are presented independently of his roles at the University of Florida. Follow him on Twitter @kevinfolta

A version of this article was originally posted at Kevin Folta’s blog Illumination 2.0 and is reposted here with permission.

It’s not going away: We need a real, non-political debate about the best way to live with COVID or countries will fracture


In 1968, at the height of the last great influenza pandemic, at least a million people worldwide died, including 100,000 Americans. That year, A.M.M. Payne, a professor of epidemiology at Yale University, wrote:

In the conquest of Mount Everest anything less than 100% success is failure, but in most communicable diseases we are not faced with the attainment of such absolute goals, but rather with trying to reduce the problem to tolerable levels, as quickly as possible, within the limits of available resources…

That message is worth repeating because the schism between those seeking “absolute goals” and those seeking “tolerable levels” is very much evident in the current pandemic. On September 21, the BMJ reported that opinion among UK scientists is divided as to whether it is better to focus on protecting those most at risk of severe COVID or to impose lockdown on all.

One group of 40 scientists wrote a letter to the chief medical officers of the UK suggesting that they should aim to “suppress the virus across the entire population”.

In another letter, a group of 28 scientists suggested that “the large variation in risk by age and health status suggests that the harm caused by uniform policies (that apply to all persons) will outweigh the benefits”. Instead, they called for a “targeted and evidence-based approach to the COVID-19 policy response”.

A week later, science writer Stephen Buranyi wrote a piece for the Guardian arguing that the positions in the letter with 28 authors represent those of a small minority of scientists. “The overwhelming scientific consensus still lies with a general lockdown,” he claimed.

A few days later, over 60 doctors wrote another letter saying: “We are concerned due to mounting data and real world experience, that the one-track response threatens more lives and livelihoods than Covid-lives saved.”

This back and forth will undoubtedly continue for some time yet, although those involved will hopefully begin to see opposing scientific views and opinions as a gift and an opportunity to be sceptical and learn, rather than as a “rival camp”.

Scientific consensus takes time

There are issues, such as global warming, where there is scientific consensus. But consensuses take decades to form, and COVID-19 is a new disease. Uncontrolled experiments in lockdown are still ongoing, and the long-term costs and benefits are not yet known. I very much doubt that most scientists in the UK have a settled view on whether pub gardens or university campuses should be closed or not. People I talk to have a range of opinions: from those who accept that the disease is now endemic, to those who wonder if it can still be eradicated.


Some suggest that any epidemiologist who does not toe a particular line is suspect, or has not done enough modelling, and that their views should not carry much weight. They go on to dismiss the views of other scientists and non-scientist academics as irrelevant. But science is not a dogma, and views often need to be modified in the light of increasing knowledge and experience. I am a geographer, so I am used to seeing such games of academic hierarchy played above me, but I do worry when people resort to insulting their colleagues rather than admit that knowledge and circumstance have changed and reappraisal is necessary.

A grim calculus

Is the cure worse than the disease? This is the question that currently divides us, so it is worth considering how it might be answered. To determine the point at which particular policies were taking more lives than they were saving, we would have to know how many deaths from other causes would not otherwise have occurred: suicides (including child suicides), liver disease from the increase in alcohol consumption, cancers that were not diagnosed or treated. And then what value should you put on those lost or damaged lives against the economic consequences?

We do not live in a perfect world with perfect data. For children, for whom the risk of death from COVID is almost zero and the risks of long-term effects are thought to be very low, it is easier to weigh up the negative effects of not going to school or of being trapped in households with rising domestic abuse.

For university students, who are mostly young, a similar set of calculations could be made, including estimating the “cost” of having the infection now, versus the cost of having it later, possibly when the student is with their older relatives at Christmas. With older people, though, the calculus – even in a perfect world – would become increasingly complex. When you are very old and have very little time left, what risks would you be willing to take? One elderly man famously claimed: “No pleasure is worth giving up for the sake of two more years in a geriatric home in Weston-super-Mare.”

Safety, but at what cost? Credit: Solarisys/Shutterstock

A recent paper, published in Nature, suggests that even in Hong Kong, where compliance with mask-wearing has been over 98% since February, local elimination of COVID is not possible. If it is not possible there, it may not be possible anywhere.

On the brighter side, elsewhere, elderly people have been protected even when transmission rates are high and overall resources are low. In India, a recent study found that “it is plausible that stringent stay-at-home orders for older Indian adults, coupled with delivery of essentials through social welfare programs and regular community health worker interactions, contributed to lower exposure to infection within this age group in Tamil Nadu and Andhra Pradesh.”

However, minimising mortality is not the only goal. For those who don’t die, the outcome can still be prolonged and severe debility. That, too, must be taken into account. But unless you are sure that a particular measure for locking down will do more good than harm, in the round, you should not do it. In 1970, shortly before he became dean of the London School of Hygiene and Tropical Medicine, C.E. Gordon Smith wrote:

The essential prerequisite of all good public health measures is that careful estimates should be made of their advantages and disadvantages, for both the individual and the community, and that they should be implemented only when there is a significant balance of advantage. In general, this ethic has been a sound basis for decision in most past situations in the developed world although, as we contemplate the control of milder diseases, quite different considerations such as the convenience or productivity of industry are being brought into these assessments.

Current beliefs about where the balance of advantages and disadvantages lies are changing. The “rival camps” rhetoric needs to end. No individual or small group represents the view of the majority.

Danny Dorling is the Halford Mackinder Professor of Geography at the University of Oxford. Find Danny on Twitter @dannydorling

A version of this article was originally posted at the Conversation and has been reposted here with permission. The Conversation can be found on Twitter @ConversationUS

This article first appeared on the GLP on October 12, 2020.

Do our brains predict how we think? Psychedelics may help answer that question

“Everything became imbued with a sense of vitality and life and vividness. If I picked up a pebble from the beach, it would move. It would glisten and gleam and sparkle and be absolutely captivating,” says neuroscientist Anil Seth. “Somebody looking at me would see me staring at a stone for hours.”

Or what seemed like hours to Seth. A researcher at the UK’s University of Sussex, he studies how the brain helps us perceive the world within and without, and is intrigued by what psychedelics such as LSD can tell us about how the brain creates these perceptions. So a few years ago, he decided to try some, in controlled doses and with trusted people by his side. He had a notebook to keep track of his experiences. “I didn’t write very much in the notebook,” he says, laughing.

Instead, while on LSD, he reveled in a sense of well-being and marveled at the “fluidity of time and space.” He found himself staring at clouds and seeing them change into faces of people he was thinking of. If his attention drifted, the clouds morphed into animals.

Seth went on to try ayahuasca, a hallucinogenic brew made from a shrub and a vine native to South America and often used in shamanistic rituals there. This time, he had a more emotional trip that dredged up powerful memories. Both experiences strengthened Seth’s conviction that psychedelics have great potential for teaching us about the inner workings of the brain that give rise to our perceptions.

He’s not alone. Armed with fMRI scans, EEG recordings, computational models of the brain and reports from volunteers tripping on psychedelics, a small but growing number of neuroscientists are trying to take advantage of these drugs and the hallucinations they induce to better understand how the brain produces perceptions. The connections are still blurry, but the studies are beginning to provide new support for a provocative, more-than-a-century-old hypothesis: that one of the fundamental functions of the brain — and the root of everything we perceive — is to make best guesses about the causes of information impinging on our senses at any given moment. Proponents of this idea have argued that these powers of prediction enable the brain to find meaning amid noisy and ambiguous sensory information, a crucial function that helps us make sense of and navigate the world around us.

When these predictions go haywire, as they seem to under psychedelics, the perceptual aberrations provide neuroscientists with a way to probe the workings of the brain — and potentially understand what goes wrong in neuropsychological conditions, such as psychosis, that cause altered perceptions of reality.


The predictive brain

The idea that the brain is, in essence, a prediction machine traces its modern roots to the 19th-century German physicist and physician Hermann von Helmholtz. He noted that our brains have to make inferences about the possible causes of the signals we receive via our senses. He pointed in particular to our ability to perceive different things given the same sensory information (a good example of this would be the famous optical illusion that can appear either as the silhouette of two people facing each other or as the contours of a vase). Given that the sensory input isn’t changing, Helmholtz argued that what we perceive must be based on the brain’s prediction of what’s there, based on prior knowledge.

Over the past century, these ideas have continued to intrigue philosophers, neuroscientists, computer scientists and others. The modern version of the theory is called predictive processing. In this view of perception, the brain is not a passive organ that simply collates information from the senses. Rather, it’s an active coconspirator. It’s constantly predicting the causes of incoming information, whether from the world outside or from within the body. In this view of perception, “the brain is actively … creating hypotheses that are the best explanation for the sensory samples that it’s receiving,” says computational neuroscientist Karl Friston of University College London. These predictions lead to perceptions, which can remain unconscious or enter conscious awareness.

The brain is constantly trying to predict the causes of sensory inputs, and these predictions lead to perceptions. When the sensory inputs are ambiguous, the predictions can keep changing. In this case, your brain may predict that you’re seeing a vase — or two people facing each other. Credit: John Smithson

In a landmark 1999 paper that established predictive processing as a leading hypothesis of brain function, two computer scientists, Rajesh Rao and Dana Ballard (now at the University of Washington in Seattle and the University of Texas at Austin, respectively) developed a detailed model of predictive processing — specifically, addressing regions of the brain involved in recognizing objects and faces. Those regions comprise levels of a pathway that begins in the retina, moves on to the lateral geniculate nucleus of the thalamus and then to higher and higher levels of the cerebral cortex, named V1, V2, V4, IT and onward.

In Rao and Ballard’s model, each brain area that constitutes a level in such a hierarchy makes predictions about the activity of the level below: V2, for example, predicts the neural activity it expects of V1, and sends a signal down to V1 indicating this prediction. Any discrepancy between the prediction and the actual activity in V1 generates an error signal that moves up from V1 to V2, so that V2 can update its expectations of V1. So predictions flow down, from higher to lower layers, and errors move up, from lower to higher layers. In this way of thinking, the lowest layer — the one closest to the retina — makes predictions about the incoming sensory information, and the highest layers — IT and above — hypothesize about more complex features like objects and faces. Such predictions, continually updated as we move around, are what we perceive.
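
To make that flow concrete, here is a minimal toy sketch in Python (my own illustration under simplifying assumptions, not Rao and Ballard’s published model): a single higher level holds a hypothesis, issues a prediction about the level below, and revises the hypothesis using the error that travels back up.

```python
import numpy as np

# Toy predictive-coding loop (illustrative only; not Rao and Ballard's model).
# A higher level holds a hypothesis, predicts the lower level's activity,
# and revises itself using the prediction error that flows back up.

rng = np.random.default_rng(0)
true_cause = 1.0   # the hidden cause of the sensory signal
estimate = 0.0     # the higher level's initial hypothesis
rate = 0.1         # how strongly errors revise the hypothesis

for _ in range(100):
    sensory = true_cause + 0.1 * rng.standard_normal()  # noisy input at the lowest level
    prediction = estimate                               # prediction sent down
    error = sensory - prediction                        # error signal sent up
    estimate += rate * error                            # update to shrink future errors

print(f"hypothesis after settling: {estimate:.2f}")     # ends up near the true cause, 1.0
```

The perception, in this picture, is the settled hypothesis rather than the raw input.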

In the years since Rao and Ballard’s paper, neuroscientists have begun to find experimental evidence that supports such computational models. For example, the theory predicts that sensory stimuli that are expected or unsurprising should generate less neural activity in lower levels of the hierarchy (because they generate fewer error signals). And fMRI scans of neural activity in the lower layers of the visual cortex in people looking at computer-generated images bear this out.

But predictive processing can go wrong, posit behavioral and clinical neuroscientist Paul Fletcher and his student Juliet Griffin of the University of Cambridge in the UK — and when that happens, we may perceive things that aren’t real, be they aberrations of sight, sound or other senses. It’s an idea that piques the interest of those who study conditions such as schizophrenia, which is often accompanied by psychosis. “If predictive processing helps us to understand how the mind connects to external reality, I think it follows that it is a useful way of understanding situations in which the mind seems disconnected from reality,” says Fletcher. Indeed, Fletcher notes, such disconnection is essentially the definition of psychosis. (Griffin and Fletcher explored the potential connection between predictive processing and psychosis in the 2017 Annual Review of Clinical Psychology.)

Prior expectations

An important aspect of predictive processing is that each hypothesis generated by a level in the hierarchy is associated with a notion of confidence in the hypothesis, which in turn is based on prior expectations. The higher the confidence, the more a given level ignores error signals from the level below. The lower the confidence, the more a given level listens to upward-moving error signals. Could psychedelics be altering our perception of reality by messing with this process? Friston and Robin Carhart-Harris, a psychologist and neuroscientist at Imperial College London, think so. In 2019, they put forward a model called REBUS, for “relaxed beliefs under psychedelics.” According to their model, psychedelics reduce the brain’s reliance on prior beliefs about the world. “We feel them with less confidence,” says Carhart-Harris. “They are less reliable under psychedelics.”
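
In code, that confidence weighting might look like the following extension of the toy loop above. This is my own sketch with invented parameters, not the REBUS authors’ model; it only shows how shrinking the precision of prior beliefs lets sensory evidence take over.

```python
import numpy as np

# Toy precision-weighted updating (a loose illustration of the REBUS idea;
# parameters are invented). High prior precision means error signals are
# largely ignored; low prior precision, the hypothesized psychedelic state,
# lets the evidence dominate.

def settle(prior_precision, sensory_precision=1.0, prior=0.0, steps=500, seed=1):
    rng = np.random.default_rng(seed)
    estimate = prior
    for _ in range(steps):
        sensory = 1.0 + 0.2 * rng.standard_normal()  # the evidence says "1.0"
        gain = sensory_precision / (sensory_precision + prior_precision)
        # The update balances pull toward the evidence against pull toward the prior.
        estimate += 0.1 * (gain * (sensory - estimate) + (1 - gain) * (prior - estimate))
    return estimate

print(f"rigid prior:   {settle(prior_precision=10.0):.2f}")  # hugs the prior, 0.0
print(f"relaxed prior: {settle(prior_precision=0.1):.2f}")   # shifts toward the evidence
```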

If that’s what psychedelics do, one result could be an increase in cognitive flexibility. Conversely, blocking the receptors in the brain that are activated by psychedelics might do the opposite — make beliefs more rigid.

Neuroscientists often think of the brain as organized into hierarchical levels. The concept of predictive processing holds that each level makes predictions about the activity of the level below. These predictions flow down the hierarchy, and lower levels generate an error signal that indicates the difference between the predicted and actual sensory inputs. These error signals flow upward, and higher levels use them to refine their predictions. Predictions at the highest level help to create perceptions.

Some evidence for this comes from experiments with rats in which researchers gave the animals a drug that blocks the main type of receptor on the surface of neurons that responds to LSD and other classic psychedelic drugs. These receptors, called 5-HT2A serotonin receptors, are densely distributed in the regions of the cortex responsible for learning and cognition. Blocking 5-HT2A receptors, it turns out, makes rats cognitively inflexible: They are no longer able to spontaneously change from one behavior to another in order to get a reward. In the context of predictive processing, the finding suggests that the 5-HT2A-blocker made the rats’ brains more tightly constrained by prior beliefs about the world.

Conversely, when psychedelics bind to 5-HT2A receptors, they seem to make the brain less reliant on prior expectations and more reliant on actual sensory information. This could account for the vivid perceptual experiences they cause. According to the predictive processing model, a brain on psychedelics gives more weight to information entering the lower layers, which deal with concrete visual features — say, the shape and color of a flower. Constraints imposed by abstract beliefs and expectations about the flower are relaxed. “All of these higher-level constructs have been dissolved,” says Friston. “It can be a very pleasurable experience.”

If psychedelics mess with prior beliefs, that might also explain why they cause one to hallucinate a reality that’s untethered from real-world expectations. Take, for example, Seth’s experience of seeing clouds morphing into familiar faces. According to Friston, the brain’s visual system has strong prior beliefs — for instance, that clouds are up in the sky. Another prior belief would be that there are no faces up there. Normally, this would make it nearly impossible to perceive, say, Lucy in the sky (with or without diamonds). But as psychedelics take hold, higher levels of the predictive processing hierarchy begin to make otherwise untenable predictions about the world outside. These predictions become perceptions. We start hallucinating.

Of course, psychedelic hallucinations are not only visual. They can involve all types of altered perceptions. In 2017, for example, neuropsychologist Katrin Preller of the University Hospital for Psychiatry Zurich in Switzerland and colleagues found that people listening to music that they normally considered meaningless or neutral felt heightened emotions and attributed an increased sense of meaningfulness to the music while on LSD.

Friston argues that these altered perceptions extend even to our sense of self, which in the predictive processing framework is based on the brain’s internal models of all aspects of our own being. Psychedelics would, again, loosen the hold of these internal models. “You now lose a precise sense of self,” says Friston. Indeed, a survey by Carhart-Harris and colleagues suggests that a breakdown of the boundaries of the self could be one explanation for why some people on psychedelics report mystical feelings of a sense of unity with their surroundings.

Disrupted connections

If psychedelics do act on the brain to change predictive processing, it’s not clear how they do it. But in recent studies, researchers have found ways to approach these questions. One way to gauge changes occurring in brains on psychedelics is to measure something called Lempel-Ziv complexity, a tally of the number of distinct patterns that are present in, say, recordings of brain activity over the course of milliseconds using a method called magnetoencephalography (MEG). “The higher the Lempel-Ziv complexity, the more disordered over time your signal is,” says Seth.
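
For a feel of the measure, here is one simple way to count distinct patterns in a binarized signal, in the spirit of a Lempel-Ziv parse (a sketch, not the exact estimator used in the MEG studies):

```python
import random

def lempel_ziv_complexity(bits: str) -> int:
    """Greedy left-to-right Lempel-Ziv parse: split the string into successive
    phrases not seen before and count them. More phrases = more disorder."""
    phrases, count, i = set(), 0, 0
    while i < len(bits):
        j = i + 1
        # Grow the current phrase until it is new (or the string ends).
        while bits[i:j] in phrases and j <= len(bits):
            j += 1
        phrases.add(bits[i:j])
        count += 1
        i = j
    return count

# Brain recordings would first be binarized, e.g. above/below their median.
random.seed(0)
noisy = "".join(random.choice("01") for _ in range(64))
print(lempel_ziv_complexity("0" * 64))  # highly regular signal: few phrases
print(lempel_ziv_complexity(noisy))     # disordered signal: many more phrases
```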

The most famous psychedelic compound, LSD (short for lysergic acid diethylamide), was synthesized in 1938 by Swiss chemist Albert Hofmann. In 1943, after apparently absorbing some LSD through his fingertips by accident, Hofmann experienced hallucinations and had to rush home from work. Hofmann later continued his experiments and went on to document in detail the psychedelic effects of LSD. Credit: Novartis International

To determine the degree of disorder of human brains on psychedelics, Seth’s team, in collaboration with Carhart-Harris, looked at MEG data collected by researchers at the Cardiff University Brain Research Imaging Centre in Wales. The volunteers were given either LSD or psilocybin, the hallucinogenic ingredient in “magic mushrooms.” On psychedelics, their brain activity was more disordered than it was during normal waking consciousness, according to an analysis of the MEG signals that was published in 2017. Seth says that while the increase in disordered brain signals does not definitively explain people’s psychedelic experiences, it’s suggestive. “There’s a lot of mind-wandering and vagueness going on,” says Seth. “The experience is getting more disordered and the brain dynamics are getting more disordered.” But he says there’s more work to do to establish a clear connection between the two.

More recently, Seth, Carhart-Harris and colleagues took another look at the brain on psychedelics, using a statistical metric called Granger causality. This is an indication of information flow between different regions of the brain, or what neuroscientists call functional connectivity. For example, if activity in brain region A predicts activity in brain region B better than the past activity of B itself does, the Granger causality metric suggests that region A has a strong functional connection to region B and drives its activity. Again, using MEG recordings from volunteers on psychedelics, the team found that psychedelics decreased the brain’s overall functional connectivity.
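
The underlying logic fits in a few lines. In this sketch (invented toy data, not the study’s actual pipeline, which involves fitted lag orders and significance testing), region A drives region B, and A’s past measurably improves the prediction of B:

```python
import numpy as np

# Bare-bones Granger-style check: does region A's past improve the prediction
# of region B beyond B's own past? (Toy data with an invented coupling.)

rng = np.random.default_rng(42)
n = 2000
a = rng.standard_normal(n)          # activity of "region A"
b = np.zeros(n)                     # "region B", driven by A's recent past
for t in range(1, n):
    b[t] = 0.5 * b[t - 1] + 0.8 * a[t - 1] + 0.1 * rng.standard_normal()

def residual_variance(y, *predictors):
    X = np.column_stack(predictors)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ coef)

restricted = residual_variance(b[1:], b[:-1])    # predict B from its own past
full = residual_variance(b[1:], b[:-1], a[:-1])  # ...and from A's past as well
print(f"log variance ratio: {np.log(restricted / full):.2f}")
# A clearly positive value says A's past helps predict B: a strong functional
# connection from A to B in the Granger sense.
```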

One possible interpretation of these Granger and Lempel-Ziv findings is that the loss of functional organization and increase in disorder is disrupting predictive processing, says Seth. Verifying that would involve building computational models that show exactly how measures of Granger causality or Lempel-Ziv complexity change when predictive processing breaks down, and then testing to see if that’s what happens in the brains of people on psychedelics.

A woody vine that grows in the Amazon and Orinoco river basins in South America, Banisteriopsis caapi is used alone or with other plants to make ayahuasca tea — a drink with psychotropic effects long a part of rituals and shamanic healing practices. Studies have shown B. caapi to have antidepressant effects. Credit: E. W. Smith/Hallucinogenic Plants (A Golden Guide)

In the meantime, evidence that psychedelics mess with functional connectivity is mounting. In a randomized, double-blind study published in 2018, Preller and colleagues gave 24 healthy people either a placebo, LSD or LSD along with a 5-HT2A blocker that impeded the drug’s effects. The subjects were then scanned inside an MRI machine, allowing researchers to measure activity in different brain regions and assess their connectedness.

Those people on LSD alone showed widespread changes: Their brains showed an increased connectivity between lower-order brain regions responsible for processing sensory input, but decreased connectivity between brain regions that are involved in the conceptual interpretation of sensory inputs. Preller thinks this might explain the heightened sensory experiences caused by LSD. Indeed, the team also found corroborating evidence, using data from the Allen Human Brain Atlas, a detailed map of gene activity: Areas of the brain that produce the 5-HT2A receptor overlapped with the regions of altered connectivity, suggesting that LSD affects these brain regions the most.

Then Preller and colleagues did a more targeted study, using fMRI data to look for changes in functional connectivity between the thalamus and the cortex. The thalamus sits in the center of the brain and processes information from the senses before sending relevant signals up to the cortex. But information also flows in the other direction. In the predictive processing model, signals going down from the cortex to the thalamus would represent predictions, and signals flowing up to the cortex would represent errors. Researchers have long hypothesized that psychedelics may cause the thalamus to function less effectively, says Preller. This may be happening on LSD: Her study, published in 2019, showed that the flow of information from the thalamus up to certain cortical areas increased in people on LSD and the flow going in the opposite direction decreased.

These fMRI brain scans show how LSD alters communication among regions of the human brain (shown here from different angles in the four panels). Red and orange regions show stronger functional connectivity under LSD, and blue regions show reduced connectivity.

Altered brain waves

Additional hints of how psychedelics could interfere with predictive processing have come from an entirely different way of looking at brain function. In 2019, Carhart-Harris became intrigued by a paper he read about potential brain signatures of predictive coding (which is how researchers refer to the way predictive processing may be realized in the brain). He saw a way to test the hypothesis that psychedelics are messing with the brain’s prior beliefs.

The paper by computational neuroscientist Andrea Alamia of Centre de Recherche Cerveau et Cognition, CNRS, in Toulouse, France, and a colleague, involved a simplified model of predictive coding. Each level represents a population of neurons — in, say, the LGN and V1 layers of the visual system. The input to the model is a random sequence of numbers, where each number represents the intensity of a light signal. The model has two key parameters. One is the time it takes for a prediction or error to travel from level to level. The other is the time it takes for a population of neurons to return to their baseline activity after the input has waned. The team found that as the model tries to predict the intensity of the input and each level tries to update itself based on the error signal it gets when its predictions are wrong, the model produces waves of signals with a frequency of about 10 hertz, which researchers call alpha waves. These waves ripple up and down through the levels of the model.
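
The flavor of that result can be reproduced with an even simpler toy. In the sketch below (my own simplification with invented numbers, not the authors’ equations), the descending predictions and ascending errors are collapsed into a single delayed negative feedback loop; for such loops, the oscillation period is roughly four times the loop delay, which lands in the alpha band for round-trip delays of a few tens of milliseconds.

```python
import numpy as np

# Toy delayed-feedback loop (a simplification, not the published model).
# Predictions descend and errors ascend with a total round-trip delay T;
# collapsed into one variable, that is delayed negative feedback, which
# rings with a period of roughly 4*T: near 10 Hz when T is about 24 ms.

T = 24          # round-trip delay in 1-ms steps (an assumed value)
gain = 0.06     # feedback strength, just below the instability point
steps = 2000

x = np.zeros(steps)
x[:T] = 1.0     # a brief input to kick the loop into ringing
for t in range(T, steps):
    x[t] = x[t - 1] - gain * x[t - T]   # delayed corrective feedback

# Locate the dominant rhythm of the ringing.
spectrum = np.abs(np.fft.rfft(x - x.mean())) ** 2
freqs_hz = np.fft.rfftfreq(steps, d=0.001)       # 1 ms sampling
band = freqs_hz > 2.0                            # ignore the slow transient
peak = freqs_hz[band][spectrum[band].argmax()]
print(f"dominant rhythm: {peak:.1f} Hz")         # lands in the alpha band
```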

In the presence of inputs, the alpha waves travel up from the lower to the higher levels in the hierarchy. In the absence of inputs, the waves travel down. “These are all computational results,” says Alamia. So, to check their model against actual data, the team looked at EEG recordings of electrical brain activity previously collected in volunteers who had been asked to either gauge the intensity of light signals (presence of input) or close their eyes (absence of input). The team found traveling alpha waves that move up in the presence of inputs and down when the eyes were closed — exactly as in their model of predictive coding.

Carhart-Harris found the results exciting. “I wrote the French team, and I said, ‘Look, we’ve got psychedelic data, and I have a very clear hypothesis for you,’” he says. The hypothesis was simple. Start with EEG recordings of people with their eyes closed, without psychedelics. The alpha waves should be traveling down. Then, inject them with a psychedelic. They’ll start having visual hallucinations and the alpha waves should switch direction. That’s because the psychedelic, according to the REBUS model, should cause information to flow from lower to higher levels, even with eyes closed and in the absence of real sensory input. “There’s just a drug on board that makes you see all these crazy visions,” says Carhart-Harris.

As it happens, his team had EEG recordings from an earlier study of volunteers who had been injected with DMT, the main active component of ayahuasca. They sent these recordings to the French team, who analyzed them, and found the data to be remarkably consistent with the hypothesis. “The drug goes in … and the waves just shift,” says Carhart-Harris.

“I think all this stuff is brilliant,” says Seth. But he says that such research, including his, will have to get nuanced to say something more definitive. Psychedelics have a powerful and widespread effect on the brain, he notes: “It’s very hard to find something that doesn’t change under psychedelics,” he says. “One of the concerns I have at the moment is that basically everyone — and this is the pot calling the kettle black, I’m guilty of this too — takes their favorite analysis and throws it at psychedelics and says, ‘Look, it changed.’ OK — but everything changes.”

Seth thinks a way forward is to experiment with microdosing: giving minuscule quantities of psychedelics to subjects so that they remain cognitively capable of doing specifically designed tasks rather than simply hallucinating inside scanners. “They’re not seeing unicorns promenading around the sky, but you’re still activating the same system,” says Seth. People given microdoses of LSD could be tested on how quickly they detect visual stimuli, for example, as researchers induce shifts in attention, and their reaction times could be compared against computational models of predictive processing.

Such ideas, which are only just emerging, might one day allow researchers to determine if predictive processing is indeed the right model for how the brain creates perceptions.

It’s a trippy thought.

Anil Ananthaswamy is a science journalist who enjoys writing about cosmology, consciousness and climate change. He was a 2019-20 MIT Knight Science Journalism fellow. His latest book is Through Two Doors at Once. Find Anil on Twitter @anilananth

A version of this article was posted at Knowable, and has been reposted here with permission. Sign up for their newsletter here. Knowable magazine can be found on Twitter @Knowablemag

Viewpoint: What’s the future of individualized medicine when you factor in ‘racial’ and genetic differences?

COVID-19 has put race-based health disparities on full display, but such inequities extend far deeper than the current pandemic. An enduring challenge for physicians and scientific researchers has been to distinguish health differences that result from genetic predispositions from those that arise due to environmental or social influences.

In a new commentary in the Journal of Clinical Investigation, experts from the University of Pennsylvania and Case Western Reserve University provide a historical context and contemporary examples of what they call the “quagmire” surrounding race and genetic ancestry when it comes to identifying—and preventing—health disparities.

[Editor’s note: See an excerpted version of the commentary here: Study — Quagmire surrounds the use and misuse of race in identifying and preventing health disparities.]

Giorgio Sirugo, a senior research investigator in the Perelman School of Medicine; Sarah Tishkoff, a Penn Integrates Knowledge Professor in the Perelman School of Medicine and the School of Arts & Sciences; and Scott Williams, a professor in the Case Western Reserve University School of Medicine all have a deep background in the study of human genetics. Their research has overturned the notion that race has a biological basis, yet they say race continues to be misused as a proxy for genetic ancestry and ethnicity when it comes to medical diagnosis, treatment, and outcomes—often with harmful consequences.

Giorgio Sirugo, Sarah Tishkoff, and Scott Williams. Credit: University of Pennsylvania

Penn Today spoke with the authors about their article, including their thoughts on the misuse of race in medicine and how new investigations into human diversity could improve and personalize health care.

What compelled you to write this commentary now?

Williams: We’re sitting at a time where there’s an increasing awareness of health disparities, across the U.S., in particular. With COVID-19 differentially affecting populations, I think we’re sensitized to the features of disparities and what’s causing them. This discussion, which has come to a head over the last year, has a lot of political baggage. We think that the science has to come prior to the policy and prior to the implementation of practice.

Tishkoff: We very intentionally published this commentary in the Journal of Clinical Investigation to reach clinicians. It’s important for clinicians to understand whether or not race is important for addressing clinical questions, diagnostics, or predictions of risk. When should it be used, when should it not be used, how is it useful for biomedical research, what can we use to replace race? Our research has shown that the so-called races, as defined in the United States, don’t correlate clearly with patterns of genetic diversity.

You note that race as a classifier for humans is based on historical beliefs that have since been overturned and doesn’t align with what you and others are finding out about actual human diversity. Can you elaborate on that mismatch?

Sirugo: In the commentary we explained very briefly the historical basis of some beliefs about why certain populations considered themselves superior and others inferior, and why these don’t have a basis in science. The adjective Caucasian, for example, was introduced into the scientific literature in the 18th century and has persisted to this day, when it is in fact a cultural fossil still used, without justification, in lieu of “European ancestry.” In medical practice in Italy, we never use the word race, “razza,” because of its connections to the fascist regime that specifically targeted the Jewish population through discriminatory legislation, the “Leggi Razziali,” i.e., “Racial Laws,” and ended with the murder of thousands of Italian Jews, sent to the gas chambers of Auschwitz and other death camps.


Tishkoff: Historically racial classifications have been based on superficial things, like skin color and morphological characteristics, but have also been intertwined with cultural classifications and with a hierarchical type of thinking. In biomedicine, it depends on the particular question that you’re addressing, but it’s typically much better to talk about where people come from, what’s their genetic ancestry.

It’s also clear from the genetic data that there are no clear boundaries in terms of genetic ancestry that correlate with what we call races in the U.S. So-called Hispanic races have differing amounts of African, European, and Native American ancestry. And in people who self-identify as African American, genetic studies have shown that African ancestry is 80%, on average, but can range from less than 10% to 100%. Ultimately, we really need to have more personalized precision medicine that’s looking at individual variation and avoid racial profiling in medicine.

Williams: The definitions of race actually vary in different parts of the world, but in the U.S. we talk about African Americans, European Americans, or Hispanics as if they are homogenous groups, and they are completely non-homogeneous groups. As we’ve studied for years, there is so much diversity in Africa that to give it one nominal description is completely ridiculous in terms of genetics, in terms of ethnicity, and in terms of culture. It does a disservice on so many levels.

What are some ways that race is currently misused in medicine?

Sirugo: In childbirth planning, for more than a decade OB-GYN doctors have used a clinical decision calculator to predict the likelihood of a successful vaginal delivery after a prior cesarean delivery. The original calculator included the age, height, weight, and race of the patient. For a patient of African ancestry, it was more likely to point to a C-section as the appropriate choice, and so many African American women were subjected to a surgery that was unnecessary. Race as a risk factor was eventually removed this year, and a new version of the calculator was released.

Credit: Damon Dahlen/Huffington Post

Williams: In the article we discuss the use of rosuvastatin, a drug to lower ‘bad’ cholesterol. Asians, on average, have higher circulating drug levels, but it turns out that’s due to two genes that differ in frequency between people of European ancestry and Asian ancestry. So, clinicians tend not to give that drug to Asians or Asian Americans, when some could benefit from it if we knew their genotype at those two genes.

Tishkoff: Another example would be cystic fibrosis, which is thought of as a disease that is only found in people of European ancestry. And again you have to be careful to avoid racially profiling someone who has mixed ancestry. The majority of an individual’s genome may be of non-European ancestry, but the genetic region that carries the mutation causing cystic fibrosis could have 100% European ancestry. So that diagnosis could be missed or delayed.

Sirugo: Yes, there’s a kind of misleading effect orienting the clinical reasoning that could bring you to the wrong conclusion.

With medical conditions that are more prevalent in people with particular ethnicities, how much do we know about whether these differences are attributable to genetics or to socioeconomic, environmental, or cultural factors?

Sirugo: One good example is that the Pima of Arizona have the highest prevalence of diabetes in the world. Over 50% of the population has it. As we stated in the paper, Pima from Mexico have a prevalence under 10%. And so you have exactly the same ethnic group with the prevalence of a given disease, which likely has a genetic component, changing dramatically depending on which environment they’re living in.

The Young River People’s Council, composed of 14- to 24-year-old Pima-Maricopa community members taking a proactive role in government and community leadership. Credit: Young River People’s Council

Williams: The traditional way of phrasing it is “nature or nurture.” I think it’s nature and nurture. There are people who have predispositions based on their individual genetic constitution where the risk is variable based on the environmental exposure. Another example we give in the paper relates to preterm birth. It’s about 50% more prevalent in African Americans than in European Americans. It’s less common in Canadians of African descent, by about 30%. But if you take first-generation immigrants from Africa, they don’t have an increased risk of preterm birth, but the second generation does. Clearly this is a response to new environmental exposures that change the risk of disease, as genetics will not change in a single generation. And even if there are genes that affect preterm birth risk, which there likely are, they are not predominant in this scenario.

One thing we don’t address explicitly in the paper is there’s a difference between the risk of acquiring a disease and the outcomes and severity of that disease. There may be biological factors that govern that, but oftentimes it’s access to health care that determines whether or not you get good treatment. Just look at the disparities observed for COVID-19 as an example.

How can health care providers address health disparities in a way that acknowledges differences without making assumptions based on what they perceive as a patient’s race or ethnicity?

Sirugo: There’s a need to bring up a generation of physicians who are a lot more aware of some of the points we have been discussing so far: the effect of genetics and the amount of variation among populations. Even in this we are biased, as a lot of research has been done in European populations, a lot less in other populations. This is part of an educational process that will take some time to correct.

Tishkoff: It seems to me that knowledge of ancestry in a patient is important and can play a role in diagnosis and treatment of disease; however, a physician has to be aware that there are no firm boundaries in terms of the genetic variation observed in groups around the world. Ultimately, we have to treat the individual.

Williams: If you’re trained as a genetics student, you will think about genetics as being important, and if you’re a social epidemiologist you will think about social factors being important. We have to be very careful about walking that line and trying to be inclusive in our intellectual approaches to how we study these things.

What do you see as the frontier in terms of individualized medicine when it comes to considering ancestry? Is it sequencing everyone’s genome?

Tishkoff: Suppose we had the ability to obtain a patient’s genome for $1,000; would that be the solution? I say no. Even if we had the genome of every person on this planet, until we figure out what the functional variation is, it’s not going to do any good. That also ties into the problem that Europeans are overrepresented in human genetics research. As we showed in a prior paper, about 80% of individuals included in genome-wide studies of disease risk are of European ancestry. Only around 2% are of African ancestry and 1% of Hispanic ancestry. That’s really problematic. That’s exactly the reason that we started our Center for Global Genomics and Health Equity, to address these issues.

Williams: We also have to understand there are situations in which genetics has almost no role. Take lung cancer, which, in the early part of the 20th century was an exceedingly rare disease. In the U.S. it’s not anymore and it’s almost all due to smoking.

Sirugo: Or let’s take the example of infectious diseases. We know that there is a genetic susceptibility and an environmental susceptibility. Early exposure to certain infectious agents can also increase the risk of severe outcomes of later exposure to other diseases. It’s a very complicated area, and a lot more research needs to be done.

Williams: Everybody in the U.S. talks about precision medicine, what used to be called personalized medicine. One of the reasons we do what we do is because we want to prevent disease and improve health. We want to get our knowledge of genetics to the point where there’s enough biological understanding that we can mitigate disease risks. But we have a long way to go.

Giorgio Sirugo is a clinician scientist and senior research investigator in the University of Pennsylvania Perelman School of Medicine.

Sarah Tishkoff is the David and Lyn Silfen University Professor in the Department of Genetics in the Perelman School of Medicine and the Department of Biology in the School of Arts & Sciences at the University of Pennsylvania and director of the Penn Center for Global Genomics & Health Equity. Find Sarah on Twitter @SarahTishkoff

Scott Williams is a professor in the Department of Population and Quantitative Health Sciences and the Department of Genetics and Genome Sciences in the Case Western Reserve University School of Medicine. Find Scott on Twitter @scottmwilliams9

A version of this article was originally posted at Penn Today and has been reposted here with permission. Find Penn Today on Twitter @penn_today

GLP Podcast: Cell-based breast milk; Joe Mercola’s awful COVID book; More mRNA drugs coming soon?

Cell-based breast milk might offer mothers and babies a better alternative to expensive formula in the coming years. Alternative health champion Joseph Mercola has turned the COVID-19 pandemic into a lucrative, science-free marketing opportunity. Pharmaceutical companies used groundbreaking mRNA technology to develop coronavirus vaccines. Scientists are using the same platform to develop other important therapies, including cancer and flu vaccines.

Join geneticist Kevin Folta and GLP contributor Cameron English on this episode of Science Facts and Fallacies as they break down these latest news stories:

Recent genetics research has shown that some mothers aren’t able to breastfeed, or at least can’t produce enough milk to sustain their newborns. Using technology similar to that used to make lab-grown meat, a biotech startup is developing cell-based breast milk for individual women. In the coming years, the technology could offer mothers and babies a potentially better alternative to formula.

Dr. Joe Mercola, champion of all things alternative health and vocal vaccine critic, has been very active over the course of the COVID-19 pandemic. Alongside Organic Consumers’ Association founder Ronnie Cummins, the physician-turned-supplement salesman has authored a book alleging that SARS-CoV-2 is an engineered bioweapon the elites have used to expand their global influence. What’s wrong with this story? Almost everything. The bigger problem, however, is that figures like Mercola distract people from real problems that need to be addressed before the next pandemic comes along.


The first COVID vaccines were developed in a matter of weeks, thanks to mRNA-based technology that allowed scientists to target the SARS-CoV-2 spike protein. Using its genetic blueprint, scientists engineered shots that train our immune systems to recognize the spike protein and mount a defense should we ever come in contact with the virus itself. The mRNA platform could yield all sorts of therapies and vaccines for a variety of diseases.

Kevin M. Folta is a professor in the Horticultural Sciences Department at the University of Florida. Follow Professor Folta on Twitter @kevinfolta

Cameron J. English is the director of bio-sciences at the American Council on Science and Health. Follow him on Twitter @camjenglish

When the faster-spreading and more virulent COVID-19 mutant came to my hometown, it shook up everyone. Here’s an explainer of what it foreshadows

When a new variant of the COVID-19 virus appeared in the UK as 2020 drew to a close, I didn’t think it would show up, soon after, a half hour’s drive from my home in a somewhat remote village in upstate New York. The first cases were near Denver and in San Diego, and then one was traced to a jewelry store on Broadway in Saratoga Springs. My husband and I felt rather insulated and isolated here, hours from New York City.

The legacy of Caffe Lena

Earlier this year, I received an email from the executive director of Caffe Lena, the oldest coffeehouse in the US. Don McLean debuted “American Pie” there, Arlo Guthrie first tried out “Alice’s Restaurant,” and Bob Dylan and many others have commanded the iconic tiny stage in the small, homey establishment that opened in 1960.

The café is now in “Safe Mode,” with even the fabulous online events it has held throughout the pandemic deemed too risky to record. The one-month shutdown follows the January 12 death from COVID of Matt McCabe, owner of Saratoga Guitar and a frequent performer at the coffeehouse. The opening image captures his final show, in December.

Matt McCabe plays his last set at Caffe Lena in December. He passed away from COVID on January 12. Credit: Sarah Craig

The last time my husband and I had been to Saratoga was to dine outside Hattie’s Chicken, next door to Caffe Lena. It was that fabulously warm, wonderful November Saturday when the election results were in and we felt the first faint glimmers of hope return. We watched as a few musicians hauled their instruments up the steps of the recently refurbished Caffe. I don’t know whether Matt McCabe had the new variant of SARS-CoV-2. But now that Caffe Lena is stopping online broadcasts, I’ll have more time to write, so I thought I’d explain the confusing distinctions that have made a scary pathogen even scarier: mutants, variants, and strains.

What exactly are the new guises of SARS-CoV-2? And where did they come from?

A quick science lesson

Nucleic acids – DNA and RNA – are long strings of building blocks that impart meaning. Triplets of DNA or RNA bases encode the amino acids that link into proteins, and proteins underlie traits.

The sequences of nucleic acids can change when the molecules copy themselves, like a typo perpetuated through successive versions of a document. Not all changes to RNA or DNA affect the encoded protein, but if they do, they can alter the corresponding trait. For a virus, that might be ease of transmission to a new host, strength of binding to receptors dotting the host’s cells, or hiding from the immune response.
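To make the typo analogy concrete, here is a toy sketch in Python (my illustration, not part of the lesson above) of how a single copying error in one codon can swap the encoded amino acid. The codons are generic examples from the standard genetic code, not actual viral sequence:

```python
# Toy illustration of a copying "typo": one base change in a codon
# can swap the encoded amino acid. These codons are DNA-style triplets
# from the standard genetic code, used as examples only.

CODON_TABLE = {
    "GAT": "D",  # aspartic acid
    "GAC": "D",
    "GGT": "G",  # glycine
    "GGC": "G",
}

def translate(codon):
    """Look up the one-letter amino acid for a codon (toy subset)."""
    return CODON_TABLE.get(codon, "?")

original = "GAT"                             # encodes D (aspartic acid)
mutated = original[:1] + "G" + original[2:]  # typo at the second base: "GGT"

print(translate(original), "->", translate(mutated))  # prints: D -> G
```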

Once a mutation happens, two major factors – a founder effect and natural selection – influence the trajectory of its spread. Both can unfold at once, as is the case for SARS-CoV-2 right now.

Chance and selection fuel change

A founder effect occurs when a mutation or mutations arrive at a new location through chance sampling. That’s how the UK variant ended up in Saratoga Springs.

Several variants of the virus were circulating in the UK during the fall, genomic surveillance just beginning to pick them up, when a man unwittingly got on a plane and headed for Albany. A few days later, he was shopping for a Christmas gift at the jewelry store on Broadway in Saratoga Springs, and became the first person to harbor the new UK variant in New York state. Might he have, perhaps circuitously, infected Matt McCabe?

A founder effect can deliver a novel viral variant on a smaller scale than an airplane, too. A person might harbor several versions of SARS-CoV-2, yet only one jumps to another person in a sneeze or cough.

Soon after founder effects brought new variants here, natural selection unfolded too. It’s survival of the fittest: if a new mutation benefits the virus, it persists.

The UK variant, called B.1.1.7, consists of several mutations that enable the virus to copy itself 71% faster, spread from host to host more readily, and bind more tenaciously to our cells.

Credit: New York Times

Mutations, variants, and strains, oh my!

New versions of SARS-CoV-2 differ genetically, but the degrees of difference are confusing. Here’s clarification:

  • A mutation substitutes one type of RNA base for another at a single place in the 30,000-base viral genome. If the change alters the encoded amino acid in a way that changes the virus’s action, natural selection can favor it. To a geneticist, “gene mutation” and “gene variant” are synonymous.
  • A variant to an epidemiologist is broader, meaning the viral genome has something different, arising from one or more mutations. B.1.1.7 harbors nine mutations in the spike protein gene alone, the part that the immune response “sees.”
  • A strain is even broader, denoting a variant that has a telltale observable or measurable trait or behavior. A new strain may emerge from gene interactions.

It’s hard to keep up with the ever-mutating virus, and I can’t describe every variant in one post. But here’s a description of three new faces of the virus that have dominated newsfeeds: D614G, a mink mutation, and B.1.1.7.

“D” versus “G” virus

The first notable COVID mutant, called D614G, popped up in several parts of Europe by early March, and then hopped planes to the US. Because it spreads more readily than whatever it mutated from, it’s taken over. D614G may have seeded Europe from China in January through a founder effect, but once there, it likely spread, fast, under powerful natural selection.

“The mutation was concerning because it looked like something new was taking over in multiple places. In evolution, that’s a strong clue that it might confer an advantage for the virus, that natural selection is at play,” said Adam Lauring, of the University of Michigan Division of Infectious Diseases in a JAMA webinar January 2. Modeling and cell experiments indicated natural selection rather than a series of founder effects.

Anthony Fauci added perspective on D614G. “RNA viruses mutate, that’s been known forever. The overwhelming majority of mutations are without any functional significance. Every once in a while, we get one that is,” and that’s the case for the single amino acid change at position 614 in the spike, the part of the virus that binds to ACE2 receptors on many human cell types. While the mutated virus spreads more easily and binds tighter to receptors, the antibodies that the vaccines elicit do attack it, he reassured. But spreading more easily means more opportunities to mutate.

What D614G enables the virus to do is a little like strengthening the tailhook of a fighter plane, making it better able to latch onto a cable on the deck of an aircraft carrier. The mutation affects a specific spot in the spike, where its “receptor binding domain” intersects a loop of amino acids that reverberates like a trap door. Once a spike grabs on, a second part of it clamps down onto the cell membrane and pushes the virus through. We’re infected.

Here’s what “D614G” means in biochemical shorthand: a single RNA base change corresponding to the 614th of the 1,273 amino acids that make up the spike. The mutation alters an aspartic acid (“D”) to a glycine (“G”). (Biochemistry convention represents each of the 20 amino acids with a single letter.)
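The shorthand is regular enough that a few lines of code can unpack it. This is a minimal sketch, assuming only the one-letter amino acid codes mentioned in this post; it’s an illustration, not a bioinformatics tool:

```python
import re

# Minimal parser for substitution shorthand such as "D614G":
# one-letter reference amino acid, position, one-letter replacement.
# The name table covers only the amino acids mentioned in this post.

AA_NAMES = {
    "D": "aspartic acid",
    "G": "glycine",
    "N": "asparagine",
    "Y": "tyrosine",
    "F": "phenylalanine",
}

def parse_substitution(shorthand):
    match = re.fullmatch(r"([A-Z])(\d+)([A-Z])", shorthand)
    if match is None:
        raise ValueError(f"not a substitution: {shorthand!r}")
    ref, position, alt = match.groups()
    return ref, int(position), alt

for s in ("D614G", "N501Y", "Y453F"):
    ref, pos, alt = parse_substitution(s)
    print(f"{s}: {AA_NAMES[ref]} -> {AA_NAMES[alt]} at spike position {pos}")
```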

The jargon behind D614G is abbreviated further to the “D” and “G” strains of the virus. D is the slowpoke, while newbie G copies itself more readily, boosting a person’s viral load so that denser clouds of virus are exhaled.

Credit: Bette Korber/Cell

But the D614G mutation may eventually exert greater significance in terms of epidemiology. A viral variant that passes to more people (ups the “R naught” value) elevates the percentage of a population that must become immune (through infection or vaccination) to achieve herd immunity. I’ll save that for another post.
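For readers who want the arithmetic now: the classic textbook relation is that the herd immunity threshold equals 1 − 1/R0, so any variant that raises R0 raises the bar. A minimal sketch, with illustrative R0 values rather than measurements of any real variant:

```python
# Classic textbook relation: once a fraction 1 - 1/R0 of the population
# is immune, each infection causes on average fewer than one new one.
# The R0 values below are illustrative, not measured values.

def herd_immunity_threshold(r0):
    return 1.0 - 1.0 / r0

for r0 in (2.5, 3.5):
    print(f"R0 = {r0}: at least {herd_immunity_threshold(r0):.0%} of the "
          "population must be immune")
# R0 = 2.5 -> 60%; R0 = 3.5 -> ~71%
```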

The mink mutation

Next came minks. Reports from the Netherlands in the spring, and from Denmark in June with a resurgence in the autumn, pointed to mink farms. Human workers may have initially given the virus to the minks, which then returned the favor, in changed form.

The mink mutation also alters the spike protein, and is dubbed Y453F – a change from tyrosine (Y) to phenylalanine (F) at amino acid position 453. Studies on people who’ve recovered from COVID show that sometimes their antibodies aren’t as able to neutralize the new version of the virus, while the spikes bind more strongly to our receptors.

“The change is modest, so I don’t think it will compromise vaccines. But the big concern is a virus established in another host species where it can evolve and spill back into humans,” Lauring said. That’s why many countries are culling minks.

The UK and South African variants

The evolutionary tree diagrams that depict relationships of related species, like hippos, whales, pigs, and peccaries, are also used to track changes in viral genomes. The trees are derived from comparing DNA or RNA sequences, sometimes using mutation rates to estimate times of divergence from shared ancestors.

I think of evolutionary trees when my laptop freezes and I have to rescue the document from every time I’d hit “save.” I compare all the versions that suddenly overlap on my screen to deduce the order, from first draft to most recent.

Researchers similarly upload new SARS-CoV-2 genome sequences to the database GISAID, where bioinformatics tools convert the data into branches, called lineages or phylogenies, of evolutionary trees. (See COVID Genomes Paint Portrait of an Evolving Pathogen, here at DNA Science.)
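The first step behind those branching diagrams is simply counting the differences between aligned genome sequences. Here is a toy sketch of that step; the real pipelines behind GISAID-derived trees use whole-genome alignments and sophisticated evolutionary models, and these 12-base “genomes” are invented:

```python
# Toy first step of tree-building: count pairwise differences between
# aligned sequences (the Hamming distance). Real phylogenetic pipelines
# work from whole-genome alignments; these tiny "genomes" are made up.

def hamming(a, b):
    """Number of positions at which two aligned sequences differ."""
    assert len(a) == len(b), "sequences must be aligned to equal length"
    return sum(x != y for x, y in zip(a, b))

genomes = {
    "ancestor":  "AUGGCUAACGAU",
    "variant_1": "AUGGCUAACGGU",  # one change relative to the ancestor
    "variant_2": "AUGACUAACGGU",  # one further change relative to variant_1
}

names = list(genomes)
for i, first in enumerate(names):
    for second in names[i + 1:]:
        d = hamming(genomes[first], genomes[second])
        print(f"{first} vs {second}: {d} difference(s)")
```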

B.1.1.7 harbors a “signature” of 14 mutations. The most notable is N501Y (asparagine changed to tyrosine at position 501).

Blasting through the UK and South Africa and beyond with astonishing speed, N501Y clearly has an advantage – that’s natural selection at work, not a slower founder effect. The fact that N501Y arose independently in the UK and South Africa also points to natural selection. “This virus grows faster and it started later, and has spread rapidly, doing a lot better than its cousins. We should pay attention,” warned Lauring.


The UK variant has probably been more widespread here than we knew, simply because we weren’t looking. “The UK has a massive operation. They sequence 10% of all SARS-CoV-2 tests; in the US, we sequence probably way less than 0.5%. Other countries that also don’t do a lot of sequencing are now finding it because they’re looking harder for it now,” Lauring said.

Although B.1.1.7 doesn’t make people sicker, its more rapid spread means that more people will become infected – and the sheer numbers will send more patients to already overburdened hospitals. It’s déjà vu all over again from D614G.

Preliminary studies from South Africa, where people who’ve been sick are becoming reinfected with the new variant bearing the N501Y mutation, indicate that the natural immune response isn’t making sufficient neutralizing antibodies. Vaccines are likely to offer better protection because they’re designed to coax the body to make a wider array of antibodies, tackling spikes at several points and from several angles. But if vaccine efficacy is even slightly lower than expected from testing before the new variants arrived, that’ll raise requirements for herd immunity.

Coda

The battle of humanity against the novel coronavirus seems never-ending. Evolutionary trees reveal that SARS-CoV-2 jumped to us from bats only recently – so its genome is still adapting to our bodies. The virus is a moving target.

On January 20, New York’s governor Andrew Cuomo addressed the future of the changeling virus. As he reported two more cases of B.1.1.7 from Saratoga Springs, he said “it is just a matter of time” and “a matter of probability” until we face a more deadly or vaccine-resistant coronavirus. Next week I’m writing about a machine learning algorithm that predicts mutations.

Meanwhile, what must we do, before nature attenuates SARS-CoV-2 into just another cause of the common cold? That could be years from now. Continue with the tried-and-true, if uncomfortable, public health measures – social distancing, hand-washing, and wearing masks.

Ricki Lewis has a PhD in genetics and is a science writer and author of several human genetics books. She is an adjunct professor for the Alden March Bioethics Institute at Albany Medical College. Follow her at her website or Twitter @rickilewis

Robert F. Kennedy, Jr: Environmental lawyer partners with Church of Scientology in anti-vaccine and anti-GMO activism

Robert Francis Kennedy, Jr. (born 1954) is the third child and second son of the late Attorney General, US Senator and Presidential candidate Robert F. Kennedy (RFK). He is an anti-vaccine, anti-GMO and anti-pesticide litigator who espouses health and environmental claims that stand outside mainstream science. He promotes his views through his nonprofit, Children’s Health Defense, formerly the World Mercury Project. It was founded in 2007 and re-branded in 2018. It opposes vaccines, mercury usage in dentistry and chemical/pesticide use. Kennedy was part of the plaintiff’s litigation team in a 2018 lawsuit alleging Bayer’s weedkiller Roundup (glyphosate) causes cancer. In addition to his anti-vaccine, anti-GMO advocacy, he claims 5G wireless causes cancer and other health issues, and has embraced numerous other fringe conspiracy theories.[1]

Since the outbreak of the COVID-19 pandemic, Kennedy has repeatedly argued that all vaccines are “unavoidably unsafe”, expressing his belief that the various vaccines developed to control the pandemic are dangerous. He has consistently attacked Bill Gates, whom he accuses of masterminding a global effort to fund vaccine research in a secret plot to assume “dictatorial control of global health policy.” In August 2020, Kennedy aligned himself with radical U.S. right-wing and European extremist groups spreading conspiracy claims that the coronavirus pandemic is “one big lie” by governments and multinational corporations led by Bill Gates to enslave the public via vaccine dependency and technological tracking of their activities.[2] In February 2021, Instagram banned Kennedy “for repeatedly sharing debunked claims about the coronavirus or vaccines,” a spokesperson for Facebook, which owns Instagram, said in a statement.

In August 2020, Kennedy filed suit in federal court in California, alleging that Facebook’s fact-checking program for scientific or medical misinformation, which led it to limit anti-vaccination posts by Kennedy and other groups, violates his constitutional rights. A recent study had found that Kennedy, through his CHD organization, was responsible for more than half of the anti-vaccine advertisements on Facebook when they were permitted. Facebook removed several anti-vaccine videos and promised to stop recommending anti-vaccine pages, in addition to including a label at the top of the Children’s Health Defense’s (CHD) Facebook page which informs users that “this page posts about vaccines.” Facebook also included a link to the Centers for Disease Control and Prevention’s website at the top of the CHD’s Facebook page and removed the group’s ability to fundraise on the social media platform. The Kennedy-led suit (PDF) claimed that Facebook, CEO Mark Zuckerberg, and the organizations Science Feedback, Poynter and PolitiFact acted “jointly or in concert with federal government agencies” to infringe on CHD’s First and Fifth Amendment rights. The suit also alleged Facebook and the fact-checking organizations colluded to commit wire fraud by “clearing the field” of anti-vaccine ads. The suit is pending, but several American judges have dismissed similar lawsuits in the past, ruling that social media companies are not bound by the First Amendment and have every right to remove users who violate their content policies.

Career

After graduating from law school and passing the New York bar, Kennedy became an assistant district attorney in New York City. From 1984 until 2017, he was a board member and chief prosecuting attorney for Hudson Valley Riverkeeper, which advocated (and litigated) for cleaning up pollution in the Hudson River. He became a staff member of Riverkeeper after serving there as an intern, restitution imposed as part of his 1983 court sentence for heroin possession.[3]

From 1986 until 2017, Kennedy was a senior attorney for the Natural Resources Defense Council (NRDC), an environmental activist group known for its opposition to chemicals and biotechnology, specifically the genetic engineering of crops. Kennedy is the president of the board of Waterkeeper Alliance, a non-profit environmental group that he helped found in 1999. For more than thirty years, Kennedy has been an adjunct professor of environmental law at Pace University School of Law. Until August 2017, he also held the post of supervising attorney and co-director of Pace Law School’s Environmental Litigation Clinic, which he founded in 1987.[4] He is currently professor emeritus at Pace.[5]

Kennedy co-hosts Ring of Fire, a nationally syndicated American radio program for plaintiffs’ bar litigation issues.


Education

  • Attended Georgetown Preparatory School, a boarding school in Bethesda, Maryland
  • Graduated from the Palfrey Street School in Massachusetts
  • Graduated from Harvard College – 1976 BA American History and Literature
  • Graduated University of Virginia School of Law – JD
  • Graduated Pace University School of Law – Master of Laws

Advocacy

Environment and links to the Church of Scientology

Beginning with his internship at Riverkeeper, Kennedy became an attorney-advocate on environmental issues. One of his first cases with the firm was litigation against General Electric for PCB pollution of the Hudson River.[6] Kennedy co-founded EcoWatch, an environmental news site. In 2000, he also co-founded the law firm Kennedy and Madonna, which litigates environmental pollution cases; its most recent case involved a methane gas blowout at a facility in Porter Ranch, California. After that case, Kennedy took a position as co-counsel for the law firm Baum Hedlund Aristei & Goldman, which was founded by and continues to be managed by adherents of the Church of Scientology. He became involved in litigation against the manufacturer of Gardasil, the human papillomavirus (HPV) vaccine designed to prevent cervical cancer.[7] The firm has become instrumental in other environmental litigation cases, including lawsuits against the former Monsanto Corp. involving the herbicide glyphosate. In 2017, Kennedy resigned from Riverkeeper, citing his new residence on the west coast and his work with other advocacy groups.[8]

Anti-vaccination

In 2003, Kennedy began writing articles and making statements in opposition to vaccines. [9] Since the mid-2000s, he has consistently claimed that vaccines are linked to autism in children, an allegation first made by a discredited British physician named Andrew Wakefield in a now-retracted paper published in The Lancet.[10]  Kennedy has focused his advocacy on thimerosal, a compound of mercury that had been added to vaccine formulations to prevent contamination. [11] Thimerosal has never been shown to cause harm, but the US Centers for Disease Control and Prevention and the American Academy of Pediatrics asked vaccine makers to remove thimerosal from vaccine formulations. Even though Kennedy began making claims of a vast, international conspiracy to poison children with thimerosal-laden vaccines in 2005, the compound had been removed from most immunizations beginning in 2001. [12]

Since at least 2015, Kennedy has promoted the debunked link between vaccines and autism, claiming in numerous forums that vaccines were causing a “holocaust” in the United States and other western countries. That echoed claims made by the Church of Scientology and the Nation of Islam. Through the Children’s Health Defense, he has advocated against vaccines, claiming “fraud and corruption within the CDC and the pharmaceutical industry.”[13] Kennedy says he was “fighting multiple lawsuits on behalf of Riverkeeper and Waterkeeper against coal-fired power plants” when he was also speaking about “the dangers of mercury emissions, which, by then, had contaminated virtually every fresh water fish in America.”[14]

Through Children’s Health Defense, Kennedy assumed the self-appointed role of “vaccine safety” advocate. Most of his efforts consist of continued claims that vaccines contain thimerosal, which contains mercury. Mercury is indeed a neurotoxin, and children chronically exposed to mercury from food suffer delays in the development of their nervous systems. But the form of mercury in thimerosal (ethylmercury) is very different from the methylmercury that causes those health problems.[15]


Kennedy’s claim that vaccines are unsafe because of thimerosal minimizes nine studies funded or performed by the CDC since 2003, as well as a 2000 safety review by the Institute of Medicine that found no health risks from the compound. [15] As for autism, only the Wakefield study showed any results connecting vaccines to autism. The study has since been retracted, and Wakefield has lost his medical license for committing fraud.[10]

In 2017, Kennedy announced that he would be the head of a panel on vaccine safety, established by newly inaugurated President Donald J. Trump. The White House transition team never formed the panel.[16]

Kennedy has been the keynote speaker at many high-profile anti-vaccine events, including a joint 2013 conference put on by AutismOne and Generation Rescue, both well-known anti-immunization groups.[17], [18] Generation Rescue is headed by anti-vaccine advocate and actress Jenny McCarthy, and has issued statements against vaccines that misinterpret science. These include McCarthy’s claims that vaccination gave her son autism, that vaccines contain “toxins” including mercury, ether, antifreeze and aborted fetal tissue, and that she cured her son with a gluten- and casein-free diet.[19] AutismOne has issued written support for discredited researcher Andrew Wakefield, even inviting him to speak at its conference.[20]

A study published in the journal Vaccine in 2020 found that two buyers – Kennedy’s World Mercury Project (now Children’s Health Defense) and the Californian group Stop Mandatory Vaccination – purchased 54 percent of the anti-vaccine advertising on Facebook. The study, by researchers at the University of Maryland, George Washington University and Johns Hopkins University, was conducted before Facebook changed its policies on allowing anti-vaccine advertising.[21]

In August 2020, Kennedy spoke at an outdoor rally in Berlin, protesting German Chancellor Angela Merkel’s actions aimed at reducing exposure to SARS-CoV-2, the virus causing COVID-19. The rally, consisting of a hodgepodge of organizations that included right-wing extremists and even neo-Nazis, heard Kennedy warn against the 5G cellular network and Microsoft founder Bill Gates. With reference to his uncle’s visit to Berlin in 1963, he said: “Today Berlin is again the front against totalitarianism.” [2]

Glyphosate

Glyphosate, the active ingredient in the weedkiller Roundup (first manufactured by Monsanto), has long been targeted by anti-GMO activists because several genetically modified crops (corn, soy and cotton) have been developed to withstand applications of the herbicide.

RFK Jr., part of the Baum Hedlund litigation team

Through the Church of Scientology law firm Baum Hedlund, Kennedy has sued Monsanto (now owned by Bayer) based on the claim that glyphosate causes non-Hodgkin’s lymphoma (NHL), a rare form of cancer, in workers who have applied the weedkiller. This claim is based on a 2015 monograph from the International Agency for Research on Cancer (IARC), which declared glyphosate to be a “probable” carcinogen for workers, although it found no demonstrable dangers to the general public from trace exposure in food. The monograph has been heavily criticized for its sloppy methodology; concerns have also been raised that several authors of the report had financial ties to the law firms that sued Monsanto after IARC published its findings. No other major federal hazard or risk agency in the world has concurred with the IARC findings, with many issuing direct rebuttals.[22] 

 

Despite these limitations, Kennedy (and other lawyers) have leveraged the IARC ‘hazard’ finding to trigger the Daubert rule, a legal doctrine that allows personal injury lawsuits to be filed once a threshold of scientific information has been reported on the substance in question. Kennedy has explained his justification for suing over glyphosate to Dr. Mark Hyman, a physician and wellness advocate as well as an opponent of the use of genetically modified crops and food (see Quotes and Claims below).[23]

In 2018, Baum Hedlund scored the first victory against glyphosate, with a $289 million verdict against Monsanto, which was accused of giving groundskeeper DeWayne Johnson non-Hodgkin’s lymphoma. The jury decided that glyphosate more likely than not contributed to his cancer; it did not focus on studies showing that the causes of lymphomas are unknown, nor did it consider the extensive research, noted by 16 other major research groups, concluding that there is no demonstrable link between the weedkiller and non-Hodgkin’s lymphoma. The verdict has since been reduced by other courts.

In 2020, Kennedy claimed in an article for Children’s Health Defense [24] that glyphosate and vaccines are responsible for rising obesity rates, particularly in children. The article quoted “studies by immunologist JB Classen showing vaccine induced immune overload” as a primary cause of childhood obesity. JB (John Bartholomew) Classen is an anti-vaccine activist who claims that “immunization causes a large number of other chronic diseases, including autism, diabetes, metabolic syndrome, autoimmune diseases, allergies, asthma, cancers and Gulf War Syndrome.” Kennedy then connected obesity with glyphosate by quoting MIT computer scientist Stephanie Seneff and co-author Anthony Samsel, both well-known anti-GMO and anti-glyphosate advocates with no experience in toxicology or epidemiology. The pair has alleged that glyphosate is the cause of “obesity as well as numerous other toxic conditions.”[25] While rates of obesity have indeed risen significantly since the 1980s, reputable studies have shown no connection with vaccines or glyphosate [25].


Quotes and Claims

“As an attorney and environmentalist who has spent years working on issues of mercury toxicity, I frequently met mothers of autistic children who were absolutely convinced that their kids had been injured by vaccines.” — “Deadly Immunity,” Rolling Stone/Salon, 2005

“Our public health authorities knowingly allowed the pharmaceutical industry to poison an entire generation of American children, their actions arguably constitute one of the biggest scandals in the annals of American medicine.” — “Deadly Immunity”

“When I started reading about thimerosal, I was dumbstruck by the gulf between the scientific reality and the media consensus. All the network news anchors and television doctors were assuring the public that there was not a single study that suggested thimerosal was unsafe or that it could cause autism. After a short time on PubMed, I’d identified many dozens of studies suggesting that thimerosal causes autism and a rich library of peer-reviewed literature—over 400 published studies—attesting to its deadly toxicity and its causal connection to a long inventory of neurological injuries and organ damage.”  — Interview, Children’s Health Defense

“In fact, Cheerios have more glyphosate per serving than vitamin D and vitamin B12 which are added to enrich the cereal. It’s even been in commercial honey. So, it’s a big problem. It’s linked to cancer, it’s linked to all these health issues.” — Interview with Dr. Mark Hyman, July 2020

Criticisms

  • RFK Jr. Is Our Brother and Uncle. He’s Tragically Wrong About Vaccines, Politico, By Kathleen Kennedy Townsend, Joseph P. Kennedy II and Maeve Kennedy McKean, May 8, 2019
  • Anti-Vaxxer RFK JR. joins neo-Nazis in massive Berlin ‘Anti-Corona’ Protest, Daily Kos, August 2020 – “Tens of thousands of ‘Corona-Truthers’ descended on Berlin today to protest the measures implemented by Angela Merkel and her government to prevent the coronavirus spread… The protest was organized by right-wing extremist organizations – including the AfD party and various anti-Semitic conspiracy groups as well as the neo-Nazi NPD party. Among the speakers was Robert F. Kennedy Jr., who warned against the “totalitarianism” of Angela Merkel… Protesters were seen carrying posters urging “Trump, Please Help” with the QAnon logo.” (Note: RFK Jr. had been tapped by Donald Trump to lead a White House panel inquiry into vaccine safety at the beginning of his presidency. The panel was never convened.)
  • Robert Kennedy Jr, Anticommunist, Neues Deutschland, August 2020 – “Robert F. Kennedy whipped up a mass of opponents of coronavirus measures, Nazis, conspiracy theorists and esoteric hippies in Berlin. The 66-year-old received a lot of applause for his crude theses – for example, that the corona pandemic had been planned for decades and would be used to introduce a digital currency that marked the beginning of slavery… (Kennedy) supported a conspiracy-theoretical pamphlet by several bishops, in which it is claimed that the measures to contain the pandemic are the “prelude to the creation of a world government beyond control…”
  • Robert F. Kennedy Jr: Anti-Vaxxer, June 5, 2013 – “RFK Jr. has a long history of adhering to crackpot ideas about vaccines, mostly in the form of the now thoroughly disproven link to autism. He’s been hammering this issue for a decade now, and his claims appear to be no better and no more accurate now than they were when he first started making them.” Phil Plait, Slate.
  • In 1995, Premier Ralph Klein of Alberta declared Kennedy persona non grata in the province due to Kennedy’s activism against Alberta’s large-scale hog production facilities.
  • In 2002, a federal judge dismissed a class action lawsuit against Smithfield Foods, Inc., filed by a coalition of plaintiffs’ lawyers led by Robert F. Kennedy, Jr., and, in a rare move, ruled that Kennedy and the other lawyers must pay Smithfield’s costs and legal fees. At a series of news conferences in 2001 Kennedy announced his intention to use this and other lawsuits as a means of “shutting down Smithfield’s farm operations.”

Personal

Kennedy has been married three times and has publicly reported sex and drug addiction problems. He was arrested for heroin possession in 1983. He has been alleged to have sought to bribe journalists to cover up reports of his and his relatives’ disreputable behavior.[26]


In the midst of the coronavirus pandemic, Daniel Defoe’s account of London’s 1665 bubonic plague offers a shock of recognition

Pandemics have punctuated recorded history going back to ancient Greece and Egypt. However, the novel coronavirus pandemic is unfolding in a world that is qualitatively different due to densely-populated cities, long-distance air travel, and modern medicine and genomics. So it is understandable that we assume that our experience of a pandemic in the 21st century could have little in common with that of periods predating antibiotics, vaccines, and the germ theory of disease.

But it is hubris to think that material and technological progress makes our era totally discontinuous with the past, and that the experience of epidemics of plague, cholera, and other diseases that were a regular occurrence until recently has nothing in common with what we are experiencing.

We are in the midst of what Ed Yong of The Atlantic termed a “patchwork pandemic” – characterized by different geographic areas, different population groups, and different historical legacies. While the world awaits the development of an effective vaccine as well as treatments that can tamp down the ravages of a capricious virus, public health officials are exhorting us to rely on the most rudimentary, age-old tools for keeping the virus at bay – wearing a mask, hand-washing, and social distancing and lockdowns – in other words, treating people we don’t know as potential threats. Every virus and every bacterium has its distinct personality, and, yet, looking back at the history of pandemics, the ways in which human societies have responded to the upheaval and terror provoked by a poorly-understood microorganism have striking commonalities.  For this reason, chronicles of disease outbreaks from the past can provoke a shock of recognition.

Daniel Defoe, author of the classic Robinson Crusoe, is also renowned for his classic description of an epidemic, A Journal of the Plague Year. Defoe was five years old when the bubonic plague came to London in 1665. He must have heard stories of the plague as a child, and in 1722 he published a gripping account of life during the plague. His Journal is a sleight-of-hand. Though actually a work of fiction written more than fifty years after the events, it presents itself as a contemporaneous, first-hand, neighborhood-by-neighborhood, eye-witness narrative of what life was like during the “visitation” by the plague. For his chronicle Defoe drew on a small library of contemporaneous accounts.

Frontispiece of original edition of A Journal of the Plague Year, 1722.

Owing to the vividness and immediacy of the narrator’s description of the effects of the epidemic on ordinary people in this street or that neighborhood of the city, the Journal has become the most famous account of the Great Plague of London, displacing contemporaneous accounts by actual witnesses.

A failed businessman-turned-journalist, who started writing fiction later in life, Defoe originated a new style of writing that dispensed with aristocratic literary conventions, relying instead on empiricism and realism. His narrator tells us that his journal is based only on what he has observed directly in his walks about the city, what he has heard from credible persons, and the weekly “bills of mortality” published by the city of London. On occasion, he refers to events which he feels obliged to report but which he can’t vouch for.

As in Robinson Crusoe, the narrator of the Journal finds himself in a situation in which he must summon up all his wits and energy to survive an overpowering, incommensurate threat.  While concerned for his own safety and his business, his single-minded focus is on the impact of the plague on the city of London, which is his protagonist.  We are at his side as he describes what he sees as he moves about the city and provides his coordinates, which would have been familiar to any Londoner of the 18th century – “that is to say, in Bearbinder Lane, near Stocks Market.”  The city’s inhabitants are characterized only insofar as they are affected by the “distemper” and make decisions about how to respond to it.

Follow the latest news and policy debates on sustainable agriculture, biomedicine, and other ‘disruptive’ innovations. Subscribe to our newsletter.

At first the plague, which the narrator tells us has come to London from Holland, manifests itself by a few isolated cases in the winter of 1664-65. But in February it flares up in parishes to the west of the city and gradually makes its way eastward, methodically visiting formerly untouched parishes. In the course of a year, it has reached every corner of England. Early in the outbreak the wealthy flee the city with their servants to their country houses, and the narrator, who initially considers fleeing, remarks that there were no horses left in the city.

In spite of his belief in Providence, the narrator emphasizes that transmission of the infection requires close personal contact, often within families, or with contaminated belongings, food, or cargo.  Although the plague may have been sent by a Divine power to punish men for their sins, he makes clear that natural causes are entirely sufficient to account for the spread of the disease and its effects on its victims.

I must be allowed to believe that no one in this whole nation ever received the sickness or infection but who received it in the ordinary way of infection from somebody, or the clothes or touch or stench of somebody who was infected before. [1]

He describes the high transmissibility of the plague (which we now know to be caused by the bacterium Yersinia pestis, spread by fleas that fed on the black rat) and the unbearable sensitivity and pain caused by its pathognomonic feature – buboes – which, he tells us, drove sufferers to throw themselves out of windows or into the Thames. He also notes that asymptomatic cases could spread the infection and that the disease can manifest differently in different people. Houses where people took sick were shut up and padlocked by order of the magistrate, and watchmen were posted outside day and night to ensure – not always successfully – that the imprisoned could not escape. The narrator describes the pitiful cries that were heard from the street as family members discovered that a loved one had succumbed to the plague. Others, he tells us, died in the street. The bodies of the deceased were collected at night and taken to pits dug in churchyards or in open lots and buried en masse.

Defoe’s narrator describes the desperate condition of the poor, who, thrown out of work, could not buy food or other necessities for their families.  In this situation, he tells us, they had no choice but to perform the most dangerous jobs created by the epidemic – tending to the sick and collecting and burying the dead. He notes that “the plague, which raged in a dreadful manner from the middle of August to the middle of October, carried off in that time thirty or forty thousand of these very people.” Once the plague makes itself felt, the common people, who are keenly attuned to astrological signs and portents, are desperate to ward off its spread and chase after an abundant array of fake cures and elixirs:

[The common people] … were now led by their fright to extremes of folly; … they ran to conjurors and witches, and all sorts of deceivers, to know what should become of them (who fed their fears, and kept them always alarmed and awake on purpose to delude them and pick their pockets) so they were as mad upon their running after quacks and mountebanks, and every practicing old woman, for medicines and remedies; storing themselves with such multitudes of pills, potions, and preservatives, as they were called, that they not only spent their money but even poisoned themselves beforehand for fear of the poison of the infection. [2]

Throughout his chronicle, the narrator anxiously scrutinizes the weekly “bills of mortality” published for each parish to gauge the progress of the infection.  He knows from observing what is going on around him that the numbers of deaths attributed to the plague are grievously under-reported due to relatives’ fear of being stigmatized and the authorities’ connivance.  He tries to judge the true magnitude of the deaths from the plague by examining increases in other causes of death and by comparing the overall death rate in a parish before the outbreak to the numbers when the plague was present all around him.
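The narrator’s cross-check is, in modern terms, an excess mortality estimate: compare all-cause deaths during an outbreak with a pre-outbreak baseline, since the gap is hard to hide even when causes of death are misreported. A minimal sketch with invented weekly counts, not figures from the actual bills:

```python
# Excess mortality in miniature: deaths above a pre-outbreak baseline
# approximate an epidemic's true toll even when cause-of-death counts
# are under-reported. All numbers here are invented for illustration,
# not taken from the 1665 bills of mortality.

baseline_weekly_deaths = 300                      # typical pre-plague week
observed_weekly_deaths = [400, 1200, 4500, 8000]  # weeks during the outbreak

excess = sum(week - baseline_weekly_deaths
             for week in observed_weekly_deaths)
print(f"Estimated excess deaths over {len(observed_weekly_deaths)} weeks: {excess}")
```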

Bill of mortality for a week summarizing deaths from all London parishes.

Deaths increased at an extraordinary rate in July and August, reaching a peak in September when, we are told, each week there were 8,000 or 9,000 deaths from the plague, and this he considers an under-estimate. Thereafter, the deaths began to decline precipitously, and the outbreak abated. According to the bills of mortality, 68,590 people died of the plague, while the narrator claims that the total reached 100,000 – roughly twenty percent of the city’s population, commonly estimated at around half a million at the time.

Graph of weekly deaths from the plague, 1665-66, shows the peak occurring in September. From Samuel Pepys’s Diary.

Looking back at the events after a half-century, Defoe knew the outcome of the “visitation,” and his account has a taut unity of time and place. Here in the United States, five months into the SARS-CoV-2 pandemic, many observers are watching in horror and disbelief as the virus spirals out of control in states in the South and West that failed to take it seriously and refused to learn from the experience of other states, and countries, that succeeded in bringing their outbreaks under control. At the same time, some states and countries that successfully broke the chain of transmission are experiencing resurgences.


For all the differences between the London plague and our pandemic, there are striking commonalities. As in Defoe’s London, the poor and the weak are disproportionately exposed to the coronavirus in low-income neighborhoods, close living-quarters, and low-wage jobs. As in Defoe’s London, many people refuse to follow common-sense precautions, instead falling for quackery and scientifically unsupported treatments. As in London, statistics regarding the number of cases and deaths from COVID-19 are manipulated and misinterpreted to suit the narrative of different parties.

On the most basic level, what we share with the London outbreak is the massive, sudden upsurge in morbidity and mortality, and the inescapable sense that a pathogen is beyond our control. In London of 1665 the bills of mortality were widely distributed so that residents could know the situation in their parish, and neighbors shared the latest news of who had fallen ill and died.  In 2020, normal life has been replaced by a profusion of images, charts, statistics, and stories, which have flooded the media since March.  These convey the fever chart of the epidemic in different places together with stories of intensive care units stretched to capacity and the bios of individuals who have been lost.  At the same time, we are subjected to a constant flow of interviews with health care workers, epidemiologists, and public health officials, who interpret the day-to-day trends in an effort to explain where things are headed.

Although we pride ourselves on being modern and are used to thinking that we have control over our lives, at the present moment we have to admit that we have no idea of how this “visitation” will play itself out.

References:
  1. Daniel Defoe, A Journal of the Plague Year, Penguin edition, 1966, p. 206.
  2. A Journal, p. 30.

Geoffrey Kabat is an epidemiologist and the author, most recently, of Getting Risk Right: Understanding the Science of Elusive Health Risks. Geoffrey can be found on Twitter @GeoKabat

Eerily similar? Examining fates of the rich and poor during COVID-19 and 14th century Black Death pandemics


The coronavirus can infect anyone, but recent reporting has shown your socioeconomic status can play a big role, with a combination of job security, access to health care and mobility widening the gap in infection and mortality rates between rich and poor.

The wealthy work remotely and flee to resorts or pastoral second homes, while the urban poor are packed into small apartments and compelled to keep showing up to work.

As a medievalist, I’ve seen a version of this story before.

Following the 1348 Black Death in Italy, the Italian writer Giovanni Boccaccio wrote a collection of 100 novellas titled “The Decameron.” These stories, though fictional, give us a window into medieval life during the Black Death – and how some of the same fissures opened up between the rich and the poor. Cultural historians today see “The Decameron” as an invaluable source of information on everyday life in 14th-century Italy.

Boccaccio was born in 1313 as the illegitimate son of a Florentine banker. A product of the middle class, he wrote, in “The Decameron,” stories about merchants and servants. This was unusual for his time, as medieval literature tended to focus on the lives of the nobility.

“The Decameron” begins with a gripping, graphic description of the Black Death, which was so virulent that a person who contracted it would die within four to seven days. Between 1347 and 1351, it killed between 40% and 50% of Europe’s population. Some of Boccaccio’s own family members died.

In this opening section, Boccaccio describes the rich secluding themselves at home, where they enjoy quality wines and provisions, music and other entertainment. The very wealthiest – whom Boccaccio describes as “ruthless” – deserted their neighborhoods altogether, retreating to comfortable estates in the countryside, “as though the plague was meant to harry only those remaining within their city walls.”

Meanwhile, the middle class or poor, forced to stay at home, “caught the plague by the thousand right there in their own neighborhood, day after day” and swiftly passed away. Servants dutifully attended to the sick in wealthy households, often succumbing to the illness themselves. Many, unable to leave Florence and convinced of their imminent death, decided to simply drink and party away their final days in nihilistic revelries, while in rural areas, laborers died “like brute beasts rather than human beings; night and day, with never a doctor to attend them.”

Josse Lieferinxe’s ‘Saint Sebastian Interceding for the Plague Stricken’ (c. 1498). Credit: Wikimedia Commons

After the bleak description of the plague, Boccaccio shifts to the 100 stories. They’re narrated by 10 nobles who have fled the pallor of death hanging over Florence to luxuriate in amply stocked country mansions. From there, they tell their tales.

One key issue in “The Decameron” is how wealth and advantage can impair people’s abilities to empathize with the hardships of others. Boccaccio begins the foreword with the proverb, “It is inherently human to show pity to those who are afflicted.” Yet in many of the tales he goes on to present characters who are sharply indifferent to the pain of others, blinded by their own drives and ambition.

In one fantasy story, a dead man returns from hell every Friday and ritually slaughters the same woman who had rejected him when he was alive. In another, a widow fends off a leering priest by tricking him into sleeping with her maid. In a third, the narrator praises a character for his undying loyalty to his friend when, in fact, he has profoundly betrayed that friend over many years.

Humans, Boccaccio seems to be saying, can think of themselves as upstanding and moral – but unawares, they may show indifference to others. We see this in the 10 storytellers themselves: They make a pact to live virtuously in their well-appointed retreats. Yet while they pamper themselves, they indulge in some stories that illustrate brutality, betrayal and exploitation.

Boccaccio wanted to challenge his readers, and make them think about their responsibilities to others. “The Decameron” raises the questions: How do the rich relate to the poor during times of widespread suffering? What is the value of a life?

In our own pandemic, with millions unemployed due to a virus that has killed thousands, these issues are strikingly relevant.

Kathryn McKinley is a professor of English at the University of Maryland. Her research and teaching interests include Chaucer; Ovid, Boccaccio, and late medieval vernacularity; medieval visual literacy and material culture; and the history of later medieval European and English food culture, food scarcity, and famine. She has published in such journals as The Chaucer Review, Viator, and English Manuscript Studies 1100-1700. 

A version of this article was originally published at the Conversation and has been republished here with permission. The Conversation can be found on Twitter @ConversationUS
