It’s not just humans that get COVID — other animals are susceptible too

Humans aren't the only mammals susceptible to infection by, or testing positive for, SARS-CoV-2. There have been instances among quite a few others. The first to ring alarms internationally was a small dog in Hong Kong. On Feb. 26, 2020, a 17-year-old male Pomeranian with a heart murmur, pulmonary hypertension, renal disease, and other secondary conditions tested positive for the virus.

ProMED relayed this news to its global subscribers two days later. Many of us read it and thought, hmm, odd. By that time, because the Pomeranian’s owner had been sick for two weeks and tested positive herself, the dog was quarantined in a government-run facility. Throughout his quarantine period, the dog “remained bright and alert with no obvious change in clinical condition,” by one report, but his clinical condition already wasn’t too good. Anyway, he survived to bark again.

 

The second known pet was a young German shepherd, also in Hong Kong, also from a household with a human case.

The accompanying article is excerpted and adapted from “BREATHLESS: The Scientific Race to Defeat a Deadly Virus,” by David Quammen. Copyright ©2022 by David Quammen. Reprinted by permission of Simon & Schuster, Inc.

Next it was cats. A group of scientists in Wuhan began promptly in January 2020, as the outbreak among humans made headlines, testing the blood of domestic felines for signs of the virus. They gathered data through March and posted a preprint on April 3. This team included researchers from a college of veterinary medicine, and maybe they were simply following a hunch. They took blood samples from a total of 102 cats, including abandoned creatures harbored at animal shelters, cats at pet hospitals, and cats from human families in which Covid-19 had struck. (They also looked, for purposes of comparison, at 39 cat samples drawn before the outbreak, all negative.) They found evidence of the virus in 15 cats and, in 11 of those, strong evidence of antibodies capable of neutralizing the virus.

“Our data demonstrated that SARS-CoV-2 has infected cat population in Wuhan during the outbreak,” they wrote in the preprint. By the time their study appeared in a journal, other cats elsewhere had become infected.

A cat in Belgium tested positive. A cat in France tested positive. Another study from China, done by experiment at a veterinary institute in Harbin, in the north, showed that cats inoculated with SARS-CoV-2 became infected and could transmit the virus to other cats through the air. A cat in Hong Kong tested positive. A cat in Minnesota, a cat in Russia, two cats in Texas.


In Italy, a cat named Zika began sneezing, then tested positive, evidently having caught the virus from her human, a young doctor working on Covid. In Germany, a 6-year-old female cat at a retirement home in Bavaria tested positive by throat swab after her owner died of Covid-19. In Orange County, New York, just up the Hudson River from New York City, a 5-year-old indoor cat started sneezing, coughing, draining from her nose and eyes, about eight days after her person developed similar symptoms. She tested positive.

Domestic cats aren’t social creatures in the ecological sense; they don’t aggregate in dense populations (except amid the pungent households of obsessive cat hoarders and overgenerous rescuers), so the opportunities for cat-to-cat transmission tend to be low. But many a house cat, once it gets outside, interacts with mice in the barn, the shed, or the backyard.

Those mice generally belong to two groups, house mice (Mus musculus) and deer mice (several species within the genus Peromyscus). Deer mice are well documented as hosts of hantaviruses and the Lyme disease bacterium, and recent laboratory work shows them susceptible to infection with SARS-CoV-2. A mouse can carry the virus for as long as three weeks and transmit it efficiently to other mice.

Deer mice are the most abundant (nonhuman) mammals in North America. It may be only a matter of time before SARS-CoV-2 gets into a population of deer mice, from a cat, and begins mouse-to-mouse transmission in the wild. More on this theme, below, when we get to the mink and the white-tailed deer.

A cat approaches a line of people as they wait for Covid testing in Zhengzhou, China in 2021. By the time the first study on cats infected by the coronavirus appeared in a journal, cats across the world had become infected. Visual: Ma Jian/VCG via Getty Images

Among felids infected with SARS-CoV-2, it hasn’t been just the domestic kitties: A tiger named Nadia, at the Bronx Zoo in New York, appeared sick and tested positive for the virus, presumably transmitted by one of her zookeepers. It seems she wasn’t alone. According to a statement from the Animal and Plant Health Inspection Service (APHIS, within the U.S. Department of Agriculture), Nadia’s testing came after several lions and other tigers at the zoo showed signs of respiratory distress. Within weeks, four more of the Bronx tigers and three lions tested positive. A puma (an American cougar) at a zoo in South Africa tested positive. A female snow leopard and two males, at the Louisville Zoo, in Kentucky, started coughing and wheezing, then tested positive.

In the Netherlands, during the spring of 2020, SARS-CoV-2 began showing up among farmed mink. Those outbreaks carried large economic consequences as well as public health implications, because the mink were held in crowded conditions, raised in the thousands for their fur, and they proved very capable of transmitting the virus, both from mink to mink and possibly (with human help) from farm to farm.

The first detected cases occurred on two farms in the province of Noord-Brabant, which is in southern Netherlands along the Belgian border. “The minks showed various symptoms including respiratory problems,” according to a statement from the Ministry of Agriculture, Nature, and Food Quality.

Several roads were closed, and a public health agency advised people not to walk or cycle in the vicinity of those farms. But the virus spread quickly, soon affecting 10 farms, then 18 farms, then 25 farms by the middle of July 2020.

The Netherlands contained a lot of mink: roughly 900,000 animals at 130 farms. These were American mink (Neovison vison), like virtually all farmed mink, preferred for the richness of their fur; they belonged to the mustelid family, which includes also the pine marten, the European polecat, and the Eurasian badger.

Dutch exports of mink fur earned about 90 million euros annually in recent years, according to the Dutch Federation of Pelt Farmers. The industry was controversial — fur farming of all sorts is controversial in much of Europe, on grounds of animal welfare — and the Netherlands had already moved toward ending it by 2024. Now that happened more quickly, under government orders to cull all the animals on affected farms, in advance of the usual November doomsday for farmed mink, and not to restock.

By the end of June 2020, almost 600,000 Netherlands mink had been slaughtered. The virus wasn’t innocuous in mink; it caused respiratory symptoms and some mortality, which was what triggered testing and detection of the virus on those first two farms. But it didn’t kill mink as quickly as the culling did.

A team of Dutch scientists investigated the outbreaks, between April and June, and eventually published a paper in Science. The senior author on that study was Marion Koopmans, head of virology at the Erasmus Medical Centre in Rotterdam. “In February, because of the dog infection in Hong Kong,” she told me, “we had a meeting.”

Koopmans, an expert on zoonotic viruses, interacts regularly with the National Public Health Institute, the Veterinary Health Institute, and an independent organization, farmer-supported, called the Animal Health Service. By late spring, everyone was aware that SARS-CoV-2 had appeared not just in one or two Hong Kong dogs but also in domestic cats, tigers, and lions. Among humans, it was raging in Italy, and the Netherlands was suffering its first wave, with almost 40,000 cases by the end of April and a gruesomely high case fatality rate.

“We were ramping up human diagnostics,” Koopmans said — the labs at her center, as well as veterinary labs in the system. Then came a couple of dead mink, submitted for necropsy. “And I said, ‘Hey, well, what the heck. Let’s also test these mink.’” It was done at the same veterinary lab that had jumped in to do human diagnostics. Bingo.

As the mink outbreaks turned up on one farm after another, and the human pandemic intensified, Koopmans and her colleagues found time and resources to study the animal phenomenon, which would have implications for public health as well as for the fur industry.

They sampled both mink and people on 16 farms, finding not just lots of infected mink but also 18 infected people among farm employees and their close contacts. The team sequenced samples and saw that the viral genomes in people generally matched the genomes in that farm’s mink. This and other evidence suggested not just human-to-mink transmission starting each outbreak, and mink-to-mink transmission keeping the outbreaks aflame, but also possibly mink-to-human transmission. That last point was ominous and I’ll return to it.

In mid-June, it was Denmark’s turn. “A herd of mink is being slaughtered at a farm in North Jutland after several of the animals and one employee tested positive for coronavirus,” according to a report in The Local, an English-language online media service. That farm was quarantined, and all 11,000 animals would be killed. The news fell heavily because Denmark, with roughly 14 million mink on more than a thousand farms, produced a large portion of the world’s pelts, and the quality of Danish pelts was considered supreme.

The virus spread quickly that summer. By early October, 41 Danish farms had recorded outbreaks and authorities spoke of culling a million mink. This was optimistic. By mid-October: Sixty-three farms and plans for culling 2.5 million mink. But that too was just a beginning.

Denmark mink farm owners Holger and Ruth Rønnow pose next to shelves of culled minks, none of which tested positive for Covid. However, the country’s government decided to mass cull millions of the animals after a mink-associated variant of the virus spilled back into humans in November 2020. Visual: Ole Jensen/Getty Images

In the meantime, health officials in Spain ordered the culling of 93,000 mink on one farm, after determining that “most of the animals there had been infected with the coronavirus,” according to Reuters. Mink tested positive on a farm in Italy. In Sweden, a veterinary official visited a mink farm on the southern coast, reporting, “We tested a number of animals today and all were positive.”

Mink at two farms in Utah tested positive, and then came some worse news. Veterinary officials from the U.S. Department of Agriculture revealed that a wild, free-ranging mink in Utah had also tested positive. The sequenced virus from that wild mink matched the virus in mink on a farm nearby, so the wild individual had presumably been infected by an escapee — or by schmoozing with captives nose-to-nose through a fence.

This raised a concern well beyond the economics of fur: the prospect of SARS-CoV-2 gone rogue into the American landscape. In the lingo of disease ecologists: a sylvatic cycle.

That term comes from the Latin word sylva, meaning forest. A virus with a sylvatic cycle is two-faced, like a traveling salesman with another wife and more kids in another town. Yellow fever virus, for example: Transmitted by mosquitoes, it infects humans in cities (the urban cycle) when the right mosquitoes are present, but it’s broadly enough adapted to infect monkeys also, and it does that in some tropical forests (the sylvatic cycle), circulating in monkey populations.

Yellow fever can be eliminated in cities by vaccination and mosquito control, but whenever an unvaccinated person goes into a forest where the virus circulates, that person can become infected, return to the city, and trigger another urban cycle, if some mosquitoes are still there to help. Yellow fever virus has never been eradicated, and travelers to many tropical countries are still obliged to be vaccinated, because the sylvatic cycle will persist, and threaten another urban cycle, until you kill every mosquito or vaccinate every monkey.

Deer mice could become a part of the sylvatic cycle for coronavirus — an initial infection from cats could lead to mouse-to-mouse transmission in the wild. Visual: USFWS/Flickr

Now transfer the concept to SARS-CoV-2 and consider: If the world’s forests or other natural ecosystems contain populations of wild animals in which that virus circulates, either because they are the original reservoir hosts (horseshoe bats in southern China?) or because they have become infected by contact with humans (mink in Utah? deer mice in Westchester County?), then there is no end to Covid-19. (There is probably no end to it regardless, but that’s another matter.)

There is no herd immunity where there is a sylvatic cycle. An unvaccinated person has contact with an infected wild animal (a mink, a cougar, a monkey, a deer mouse) during some activity (hunting, cutting timber, picking fruit, sweeping up urine-laced dust in a cabin) and becomes infected with the virus, potentially triggering a new outbreak among people.

You could vaccinate every person on Earth (that’s not gonna happen) and the virus would still be present around us, circulating, replicating, mutating, evolving, generating new variants, ready for its next opportunity.

The chance of a sylvatic cycle in Europe, possibly also derived from mink, is elevated by the fact that many mink escape from farms — a few thousand every year in Denmark alone. Although not native to the European continent, these American mink have established themselves as an invasive population in the wild, their presence reflected in the numbers taken by hunters and trappers.

About 5 percent of the farmed Danish mink that escaped in 2020, by one expert's estimate, were infected with SARS-CoV-2. Mink tend to be solitary in the wild, but obviously they meet to mate, and as both predators and prey within the food chain, they come in contact with other animals.

Atop the list of other creatures that might be susceptible to a mink-borne virus are their wild mustelid relatives, the pine marten, the European polecat, and the Eurasian badger.

On Nov. 5, 2020, another bit of disquieting news came out of Denmark. The government announced severe restrictions on travel and public gatherings for residents of North Jutland — that low and tapering island curled like a claw toward southwestern Sweden — after discovery that a mink-associated variant of the virus, containing multiple mutations of unknown significance, had spilled back into humans. Twelve people had it.

This variant became known as Cluster 5, because it was fifth in a series of mink variants; but it was the first to be detected in humans. It carried four changed amino acids in the spike protein, raising concern that it might evade vaccine protections when vaccines became available. That’s it, we’re done, said the government statement: all remaining mink would be culled. The mink industry in Denmark was over.

But the rigorous shutdown, the tracing of cases, and the other control measures pinched that variant to a dead end. Within two weeks, a Danish research institute announced that the Cluster 5 lineage seemed to be extinct, at least among humans. Whether it survived in the wild, among escaped mink or their native relatives on the Danish landscape — pine marten, European polecat, Eurasian badger — is another question.

Through the last months of 2020 and well into 2021, reports of SARS-CoV-2 in nonhuman animals continued, sporadic but notable. A tiger at a zoo in Knoxville, Tennessee, tested positive. Four lions of the beleaguered Asiatic population, at a zoo in Singapore, started coughing and sneezing after contact with infected zookeepers.

Two gorillas, also coughing, at the San Diego Zoo Safari Park. The two gorillas recovered within weeks, although not before one animal, a 48-year-old silverback with heart disease named Winston, had been treated with monoclonal antibodies. Winston also got cardiac medication and, as a precaution against secondary infection with bacteria, some antibiotics. If he had been a wild gorilla in an African forest, he might well be dead. Then again, if he had been a wild gorilla, free of zookeepers, he probably wouldn’t have caught this virus.

In October 2021, SARS-CoV-2 reached the Lincoln Children’s Zoo, in Lincoln, Nebraska, infecting two Sumatran tigers and three snow leopards. This zoo proclaims a mission to enrich lives, especially children’s lives, through “firsthand interaction” with wild creatures, under controlled and educational circumstances. It’s a meritorious goal, but as we’ve all learned, close encounters in the time of Covid carry risks. These snow leopards were less lucky than the three in Louisville a year earlier. In November, despite treatment with steroids, and antibiotics against secondary infection, all three died.

Two Sumatran tigers tested positive for Covid in August 2021 at the Ragunan Zoo in Jakarta, Indonesia. They are among many zoo animals infected by SARS-CoV-2, some after contact with infected caretakers. Visual: Dasril Roszandi/NurPhoto via Getty Images

Meanwhile, of course, people were dying too. By Oct. 31, 2021 — the second Halloween of the pandemic — the state of Nebraska had recorded 2,975 Covid fatalities. For the United States on that date, the cumulative toll was 773,976 dead. Throughout the world, SARS-CoV-2 had killed more than 5 million humans. In the small nation of Belgium, with a total population less than 12 million, one person in 10 had been infected with the virus, the curve was rising steeply, and 26,119 people had died.

In December, also in Belgium, two hippopotamuses at the Antwerp Zoo tested positive. They were luckier than the Nebraska snow leopards or the 26,119 dead Belgians, showing no symptoms beyond runny noses (more runny than usual for hippos), but were put into quarantine.

Other news in late 2021 brought the prospect of a sylvatic cycle from possibility to reality. Scientists at Penn State University, working with colleagues at the Iowa Wildlife Bureau and elsewhere, reported evidence of widespread SARS-CoV-2 infection among white-tailed deer in Iowa. Experimental studies had already shown that captive fawns, inoculated with the virus, could transmit it to other deer. This new work went much further, revealing that wild deer had become infected, somehow, from humans — and not just a few deer. SARS-CoV-2 was rampant throughout the Iowa deer population.

That trend began slowly, after the beginning of the pandemic, but by the final months of 2020 it was overwhelming. The team’s trained field staff collected lymph nodes from the throats of almost 300 deer, mostly free-living animals on the Iowa landscape, a lesser portion contained within nature preserves or game preserves — none of them artificially infected by experiment.

The sampled deer had been killed by hunters or in road accidents by vehicles. The field staff dissected out the lymph nodes, in connection with an ongoing surveillance program for another communicable illness, chronic wasting disease. The deer sampled early in the study, during spring and summer 2020, were clean of SARS-CoV-2. (Iowa’s initial wave among humans rose in April.) The first positive animal didn’t turn up until September 28, 2020.

After that, it was like popcorn in a hot pan. Over a seven-week period during hunting season, in late 2020 and early January 2021, the team sampled 97 deer, among whom the positivity rate was 82.5 percent. The research continues, with a second phase of sampling, and if that percentage holds anywhere near steady (confidential updates suggest it will), it’s startling evidence of sylvatic SARS-CoV-2 in Iowa.

Iowa is not alone. A different study, done by federal wildlife officials from APHIS, looked for the virus among white-tailed deer in four other states, using blood serum samples rather than lymph nodes. These samples dated from early 2021.

Illinois’s deer were the most Covid-free, with only a 7 percent rate of infection. If you had announced that statistic alone, at the time, it would have seemed shocking. Seven percent of Illinois deer have Covid? But among whitetails sampled in New York, the rate was 31 percent infected; in Pennsylvania it was 44 percent; in Michigan, it was 67 percent.

The United States presently contains an estimated 25 million white-tailed deer, and no one has informed them that SARS-CoV-2 is uniquely, peculiarly well adapted for infecting humans.

David Quammen has written for The New Yorker, Harper’s Magazine, The Atlantic, National Geographic, and Outside, among other magazines, and is a three-time winner of the National Magazine Award. He is a founding member of Undark’s advisory board. Follow David on Twitter @DavidQuammen

A version of this article appeared originally at Undark and is posted here with permission. Check out Undark on Twitter @undarkmag

Cystic fibrosis chronicle: Why has the often-deadly CF gene not passed out of the human genome? And what new treatments are being developed?

Is cystic fibrosis (CF) a death sentence? It can be for many if it is not treated aggressively and early. It's the most common fatal genetic disease in many countries, and the most common genetic disease among whites. The disorder affects the lungs and carries a life expectancy of roughly 46 years.

The battle to contain the disease raises two provocative questions. Why hasn't natural selection removed the deadly mutation that triggers the disease from the human genome? And are there treatments in the wings that might offer hope against this deadliest of diseases?

Cystic fibrosis patient Ayden Cochrane lost his battle with CF in 2020. Credit: Cochrane Family

New treatments

In January, Maryland-based drug development company Advanced Phage Therapeutics (APT) dosed its first patients in an early-stage clinical trial for a new CF treatment. CF is the result of a mutation in one gene, the Cystic Fibrosis Transmembrane Conductance Regulator (CFTR) gene, which instructs lung cells to produce a CFTR protein that, in healthy individuals, aids in the transport of water in and out of the cells of the lungs.

In CF patients, this protein is defective, resulting in the build-up of sticky mucus in the lungs that blocks airways and traps pathogens like bacteria and viruses, leading to recurrent infections and severe lung damage. Additionally, this mutation prevents digestive enzymes from reaching the intestines, which impairs digestion.

This new treatment targets one of the main causes of serious complications in CF: the recurrent bacterial infections of the lungs. Over time, patients experience chronic damage to their lung tissue and, sadly, this often culminates in total respiratory failure and death. APT hopes to address this by turning to a form of treatment called phage therapy. It is based on a 1917 discovery by French Canadian biologist Félix d'Hérelle and uses bacteria-targeting viruses called bacteriophages to destroy harmful bacteria in the body. In this case, the drug targets and destroys bacterial strains that can cause fatal lung damage in CF patients.

Phages might overcome bacterial resistance. Credit: Genetic Engineering and Biotechnology News

It's also a bit of a double win if successful, with phage therapy offering a potential avenue out of our reliance on antibiotics for dealing with bacterial infections. APT has partnered with the Antibacterial Resistance Leadership Group (ARLG) for the study, with the company's CEO stating:

We are proud to be a part of this important trial and look forward to working with the ARLG and the CF community to bring new hope to those affected by this devastating disease and secondary respiratory infection.

Even more promisingly, this development does not stand alone. There has been a flurry of recent developments in the search for new CF treatments with the market for therapies predicted to become a multimillion-dollar industry in the next 5 years.


Gene therapies

The genetic nature of CF has made it a primary target for gene therapy approaches. However, gene therapy is still an infant technology when it comes to the clinical setting. The FDA has approved over 20,000 drugs in its 115-year history and only 5 gene therapies. That's a ratio of over 4,000:1. Dr. Anthony Davies, founder and CEO of a gene therapy development and consultancy firm in California, explained why:

There’s a massive pharma-economic problem once a blockbuster cell or gene therapy gets approved, such as for a solid tumor indication or one of the more common genetic diseases. They’re enormously more complex. They’re more expensive to manufacture and more complex to characterize.

Fortunately, the obstacles aren’t large enough to hinder all promising new developments in CF gene therapy. A recent study from the Yale CF Center reported the successful use of nanoparticles loaded with a gene therapy payload to correct CF-related mutations to the CFTR gene in mice. 

“This is the first study to show that with a single intravenous administration of gene editing reagents multiple organs affected by CF can regain partial function of CFTR,” said Marie Egan, MD, director of the Yale CF Center.

The positive effect faded over time, but a treatment regimen of repeated doses restored the therapeutic effect. The data represent an early stage of testing, but the approach still has the potential to be of huge benefit to CF patients.

Despite advances like this, many still question if gene therapy is the best option; some have turned to other technologies. A partnership between Vertex and Moderna to create an mRNA-based CF drug has already yielded a candidate therapy cleared by the FDA to enter clinical trials, and they aren't finished there, according to Moderna CEO Stéphane Bancel:

Moderna’s development of a proprietary inhalable lipid nanoparticle to deliver a functional cystic fibrosis treatment to the lungs could lead to a transformational medical achievement. We are excited by the progress that has been made with the upcoming advancement of VX-522 to the clinic and look forward to our ongoing collaboration to develop treatments for the underlying cause of cystic fibrosis.

Some children in France with CF recently started receiving an innovative treatment: a more ‘classic drug’ style formulation called Kaftrio, which increases the number of CFTR proteins on the cell surface and improves their activity and function.

So, this scattergun approach of pursuing many types of therapy is bearing fruit, which is welcome news for CF patients across the globe.

The evolutionary reasons for high CF rates

The mutation that causes cystic fibrosis arose in the early Bronze Age and spread across Europe during ancient migrations. Cystic fibrosis is the most common fatal genetic disease in the US, and yet it has genetic characteristics that should hinder its spread or remove the mutation from the gene pool altogether. We would expect natural selection to eliminate alleles with negative effects from a population, and yet many populations include individuals carrying such alleles. So why are these deleterious alleles still around? What might keep natural selection from getting rid of them?

First, until the recent advent of new therapies, the life expectancy of CF patients meant most died before the age of 14; years ago, most cases weren't even diagnosed until patients were of parenthood age. The longer life expectancy now projected means that survivors have more opportunity to pass along the mutation.

Second, CF is a recessive genetic disorder; this means both parents must each carry one copy of the mutated gene for a child to have a chance of being born with the disorder and, even then, the risk is one in four. In most cases, these two characteristics would have limited the spread of the deadly gene.
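To make that one-in-four figure concrete, here is a minimal, purely illustrative sketch in Python (the allele labels "F" and "f" are just shorthand, not standard nomenclature) that enumerates the four equally likely allele combinations a child of two carrier parents can inherit:

```python
from itertools import product

# Each carrier parent has one working allele ("F") and one CF allele ("f").
parent_1 = ["F", "f"]
parent_2 = ["F", "f"]

# A child inherits one allele from each parent; the four pairings are equally likely.
offspring = list(product(parent_1, parent_2))

affected = [pair for pair in offspring if pair == ("f", "f")]                   # two CF alleles: disease
carriers = [pair for pair in offspring if "f" in pair and pair != ("f", "f")]   # one CF allele: silent carrier

print(f"P(affected child) = {len(affected)}/{len(offspring)}")  # 1/4
print(f"P(silent carrier) = {len(carriers)}/{len(offspring)}")  # 2/4
```

Note that two of the four outcomes produce a symptom-free carrier, which is how the mutation keeps circulating unnoticed.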

However, in the case of CF, these characteristics created an evolutionary niche that has allowed the gene to persist and spread in the population to the point where approximately 10 million Americans carry it. This is partly due to the gene's recessive nature: you can carry one copy without exhibiting any symptoms of the disease. Only a genetic screening will alert you to the presence of the CF gene in your DNA. This has allowed the gene to 'silently' propagate through the population.
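A rough population-level calculation shows how a mutation can be this common while affected births stay rare. The back-of-the-envelope sketch below (Python, illustration only) takes the roughly 10 million U.S. carriers cited above and an assumed round figure of 330 million for the U.S. population, then applies standard Hardy-Weinberg reasoning:

```python
# Back-of-the-envelope Hardy-Weinberg estimate; both inputs are illustrative assumptions.
us_population = 330_000_000   # assumed round figure for the U.S. population
carriers = 10_000_000         # carrier count cited in the article

carrier_freq = carriers / us_population      # about 0.03, i.e. roughly 1 person in 33
allele_freq = carrier_freq / 2               # q: approximate frequency of the CF allele
affected_birth_rate = allele_freq ** 2       # q^2: chance a child inherits two CF alleles

print(f"Carrier frequency: about 1 in {round(1 / carrier_freq)}")
print(f"Expected affected births: about 1 in {round(1 / affected_birth_rate)}")
```

The point is only the order of magnitude: with roughly one person in 33 carrying the mutation silently, affected births come out on the order of one in several thousand.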

But still, the number of carriers doesn't make sense. It is still too high. What else might be in play here?

The answer may lie in an unexpected place: the epidemics of infectious disease that plagued generations in centuries gone by. It has been proposed that carrying one copy of the CF gene actually conferred an evolutionary benefit, protecting people from dying of tuberculosis and cholera. In other words, the negative effects of the gene were counterbalanced by its positive evolutionary contributions.

During the cholera epidemics of the 19th century that killed millions, the primary cause of death was dehydration. A study in the early 1990s demonstrated that mice carrying one copy of the CF gene did not experience the same extent of diarrhea and dehydration and generally did not die when infected with cholera, whereas 'normal' mice did. A more recent study suggested that CF patients themselves also had a higher resistance to cholera and, in a strange twist, may even have found some symptomatic relief from CF if infected.

Death on the Trail, from the diary of Virginia Reed, a member of the Donner Party. Credit: National Park Service

It all comes down to the way the CFTR protein moves water in and out of cells. If you carried one copy of the CF gene, your cells held onto water, meaning you were less likely to succumb to dehydration; if you had CF, the thick mucus would hinder bacterial invasion, producing much milder symptoms. In short, CF carriers and patients were more likely to survive: they had an evolutionary advantage.

The same also appears to be true for tuberculosis, which spreads by bacteria invading the lungs and creating an infection. And how do those bacteria get in? Through the very same CFTR channel that is dysfunctional in CF patients and partially dysfunctional in carriers. As a result, CF patients and carriers were once again protected against what was then a lethal disease that plagued communities across the globe. 

It’s a fascinating theory, the tale of a mutation giving carriers a selective advantage against diseases that caused devastation to global populations. Carriers were more likely to survive and thus possessed an increased chance of passing the mutation down to the next generation. This hypothesis is now regarded by many as the best explanation for why such a lethal genetic disease became so common.

Sam Moxon has a PhD in tissue engineering and is currently a research fellow in the field of regenerative medicine. He is a freelance writer with an interest in the development of new technologies to enhance medical therapies. Follow him on Twitter @DrSamMoxon

The $100 genome: What breaking this accessibility barrier means for the future of genetic testing

In May 2022, Californian biotech Ultima Genomics announced that its UG 100 platform was capable of sequencing an entire human genome for just $100, a landmark moment in the history of the field. The announcement was particularly remarkable because few had previously heard of the company, a relative unknown in an industry long dominated by global giant Illumina, which controls about 80 percent of the world's sequencing market.

Ultima's secret was to completely revamp many technical aspects of the way Illumina has traditionally deciphered DNA. The process usually involves first splitting the double helix DNA structure into single strands, then breaking these strands into short fragments which are laid out on a glass surface called a flow cell. When this flow cell is loaded into the sequencing machine, color-coded tags are attached to each individual base letter. A laser scans the bases individually while a camera simultaneously records the color associated with them, a process which is repeated until every single fragment has been sequenced.

Instead, Ultima has found a series of shortcuts to slash the cost and boost efficiency. “Ultima Genomics has developed a fundamentally new sequencing architecture designed to scale beyond conventional approaches,” says Josh Lauer, Ultima’s chief commercial officer.

Gilad Almogy, CEO, Ultima Genomics

This ‘new architecture’ is a series of subtle but highly impactful tweaks to the sequencing process ranging from replacing the costly flow cell with a silicon wafer which is both cheaper and allows more DNA to be read at once, to utilizing machine learning to convert optical data into usable information.

To put the $100 genome in perspective, back in 2012 the cost of sequencing a single genome was around $10,000, a price tag which dropped to $1,000 a few years later. Before Ultima's announcement, the cost of sequencing an individual genome was around $600.


While Ultima's new machine is not widely available yet, Illumina's response has been rapid. Last month the company unveiled the NovaSeq X series, which it describes as its fastest, most cost-efficient sequencing platform yet, capable of sequencing genomes at $200, with further price cuts likely to follow.

But what will the rapidly tumbling cost of sequencing actually mean for medicine? “Well to start with, obviously it’s going to mean more people getting their genome sequenced,” says Michael Snyder, professor of genetics at Stanford University. “It’ll be a lot more accessible to people.”

At the moment sequencing is mainly limited to certain cancer patients, where it is used to inform treatment options, and individuals with undiagnosed illnesses. In the past, initiatives such as SeqFirst have attempted to further widen access to genome sequencing, based on growing amounts of research illustrating the potential benefits of the technology in healthcare. Several studies have found that nearly 12 percent of healthy people who have their genome sequenced discover they have a variant pointing to a heightened risk of developing a disease that can be monitored, treated or prevented.

“While whole genome sequencing is not yet widely used in the U.S., it has started to come into pediatric critical care settings such as newborn intensive care units,” says Professor Michael Bamshad, who heads the genetic medicine division in the University of Washington’s pediatrics department. “It is also being used more often in outpatient clinical genetics services, particularly when conventional testing fails to identify explanatory variants.”

But the cost of sequencing itself is only one part of the price tag. The subsequent clinical interpretation and genetic counselling services often come to several thousand dollars, a cost which insurers are not always willing to pay.

As a result, while Bamshad and others hope that the arrival of the $100 genome will create new opportunities to use genetic testing in innovative ways, the most immediate benefits are likely to come in the realm of research.


Bigger data

There are numerous ways in which cheaper sequencing is likely to advance scientific research, for example the ability to collect data on much larger patient groups. This will be a major boon to scientists working on complex heterogeneous diseases such as schizophrenia or depression where there are many genes involved which all exert subtle effects, as well as substantial variance across the patient population. Bigger studies could help scientists identify subgroups of patients where the disease appears to be driven by similar gene variants, who can then be more precisely targeted with specific drugs.


David Curtis, a genetics professor at University College London, says that scientists studying these illnesses have previously been forced to rely on genome-wide association studies which are limited because they only identify common gene variants. “We might see a significant increase in the number of large association studies using sequence data,” he says. “It would be far preferable to use this because it provides information about rare, potentially functional variants.”

Credit: WHI

Cheaper sequencing will also aid researchers working on diseases which have traditionally been underfunded. Bamshad cites cystic fibrosis, a condition which affects around 40,000 children and adults in the U.S., as one particularly pertinent example.

“Funds for gene discovery for rare diseases are very limited,” he says. “We’re one of three sites that did whole genome sequencing on 5,500 people with cystic fibrosis, but our statistical power is limited. A $100 genome would make it much more feasible to sequence everyone in the U.S. with cystic fibrosis and make it more likely that we discover novel risk factors and pathways influencing clinical outcomes.”

For progressive diseases that are more common, like cancer and type 2 diabetes, as well as neurodegenerative conditions like multiple sclerosis and ALS, geneticists will be able to go even further and afford to sequence individual tumor cells or neurons at different time points. This will enable them to analyze how individual DNA modifications, like methylation, change as the disease develops.

In the case of cancer, this could help scientists understand how tumors evolve to evade treatments. Within a clinical setting, the ability to sequence not just one, but many different cells across a patient's tumor could point to the combination of treatments which offer the best chance of eradicating the entire cancer.

“What happens at the moment with a solid tumor is you treat with one drug, and maybe 80 percent of that tumor is susceptible to that drug,” says Neil Ward, vice president and general manager in the EMEA region for genomics company PacBio. “But the other 20 percent of the tumor has already got mutations that make it resistant, which is probably why a lot of modern therapies extend life for sadly only a matter of months rather than curing, because they treat a big percentage of the tumor, but not the whole thing. So going forwards, I think that we will see genomics play a huge role in cancer treatments, through using multiple modalities to treat someone’s cancer.”

If insurers can figure out the economics, Snyder even foresees a future where at a certain age, all of us can qualify for annual sequencing of our blood cells to search for early signs of cancer or the potential onset of other diseases like type 2 diabetes.

“There are companies already working on looking for cancer signatures in methylated DNA,” he says. “If it was determined that you had early stage cancer, pre-symptomatically, that could then be validated with targeted MRI, followed by surgery or chemotherapy. It makes a big difference catching cancer early. If there were signs of type 2 diabetes, you could start taking steps to mitigate your glucose rise, and possibly prevent it or at least delay the onset.”

This would already revolutionize the way we seek to prevent a whole range of illnesses, but others feel that the $100 genome could also usher in even more powerful and controversial preventative medicine schemes.

Newborn screening

In the eyes of Kári Stefánsson, the Icelandic neurologist who has been a visionary behind so many advances in the field of human genetics over the last 25 years, the falling cost of sequencing means it will be feasible to sequence the genome of every baby born.

“We have recently done an analysis of genomes in Iceland and the UK Biobank, and in 4 percent of people you find mutations that lead to serious disease, that can be prevented or dealt with,” says Stefansson, CEO of deCODE genetics, a subsidiary of the pharmaceutical company Amgen. “This could transform our healthcare systems.”

As well as identifying newborns with rare diseases, this kind of genomic information could be used to compute a person’s risk score for developing chronic illnesses later in life. If for example, they have a higher than average risk of colon or breast cancer, they could be pre-emptively scheduled for annual colonoscopies or mammograms as soon as they hit adulthood.

Credit: Medpage Today

To a limited extent, this is already happening. In the UK, Genomics England has launched the Newborn Genomes Programme, which plans to undertake whole-genome sequencing of up to 200,000 newborn babies, with the aim of enabling the early identification of rare genetic diseases.


However, some scientists feel that it is tricky to justify sequencing the genomes of apparently healthy babies, given the data privacy issues involved. They point out that we still know too little about the links which can be drawn between genetic information at birth, and risk of chronic illness later in life.

“I think there are very difficult ethical issues involved in sequencing children if there are no clear and immediate clinical benefits,” says Curtis. “They cannot consent to this process. I have not had my own genome sequenced and I would not have wanted my parents to have agreed to this. I don’t see that sequencing children for the sake of some vague, ill-defined benefits could ever be justifiable.”

Curtis points out that there are many inherent risks to this data being available. It may fall into the hands of insurance companies, and it could even be used by governments for surveillance purposes.

“Genetic sequence data is very useful indeed for forensic purposes. Its full potential has yet to be realized but identifying rare variants could provide a quick and easy way to find relatives of a perpetrator,” he says. “If large numbers of people had been sequenced in a healthcare system then it could be difficult for a future government to resist the temptation to use this as a resource to investigate serious crimes.”

While sequencing becoming more widely available will present difficult ethical and moral challenges, it will offer many benefits for society as a whole. Cheaper sequencing will help boost the diversity of genomic datasets, which have traditionally been skewed towards individuals of white, European descent, meaning that much of the actionable medical information which has come out of these studies is not relevant to people of other ethnicities.

Ward predicts that in the coming years, the growing amount of genetic information will ultimately change the outcomes for many with rare, previously incurable illnesses.

“If you’re the parent of a child that has a susceptible or a suspected rare genetic disease, their genome will get sequenced, and while sadly that doesn’t always lead to treatments, it’s building up a knowledge base so companies can spring up and target that niche of a disease,” he says. “As a result there’s a whole tidal wave of new therapies that are going to come to market over the next five years, as the genetic tools we have, mature and evolve.”

David Cox is a science and health writer based in the UK. He has a PhD in neuroscience from the University of Cambridge and has written for newspapers and broadcasters worldwide including BBC News, New York Times, and The Guardian. You can follow him on Twitter @DrDavidACox

A version of this article appeared originally at Leaps and is posted here with permission. Follow Leaps at Twitter @leaps_org

Podcast: ‘Disinformation feedback loop’ — GLP’s Jon Entine and geneticist Kevin Folta expose web of anti-biotech groups — and their anti-vaxx, cult-promoting funding sources

The Genetic Literacy Project is a popular and respected science organization that promotes innovation and research using the cutting-edge tools of biotechnology in sustainable agriculture and biomedicine, including vaccine development. Its flagship website is geneticliteracyproject.org. The GLP presents diverse viewpoints with original articles and news aggregated from the internet.

Over the last decade, a small but fierce group of biotechnology rejectionists have accused the GLP of being a "front" for the biotechnology industry, even though the evidence rebutting that claim is abundant; in fact, the nonprofit has an exemplary record of transparency and disclosure. Such accusations are levied by websites that reject biotechnology. Among their targets: the GLP, Jon Entine and University of Florida plant scientist Kevin Folta.

A recent exposé in the Genetic Literacy Project analyzed the organizations and their funding, and dissected the accusations against the GLP (which echo ones made previously against Dr. Folta). It turns out that the fiercest critics of the GLP have direct and intricate links and connections to less-than-credible extremist factions in the pro-organic farming movement.

As Jon and Kevin discuss, we now have the bizarre situation where the leading opponents of biotechnology are an amalgam of science-denying crackpots (Organic Consumers Association, Joe Mercola, Robert F. Kennedy, Jr.), ambulance-chasing cultists (the Baum Hedlund law firm, founded by members of the Church of Scientology) and conspiracy-embracing fringe activists and ideologues (SourceWatch, USRTK, Carey Gillam, Paul Thacker). Yet, bizarrely, some news organizations, 'progressive groups' and universities not only treat claims by these clown car ideologues as credible, they uncritically disseminate their views and often promote them.

It's predictable, if disheartening, that the far left and far right are now in sync on some science issues. United by their righteous zealotry and suspicion of biotech-based medicine and agriculture, anti-GMO leftists are ideological bedfellows with Trumpists.

The take-home message is that the disinformation these groups present is echoed by other, related groups in a "disinformation feedback loop." Multiple presentations in the media that appear to be independent are actually part of a connected and intricate scheme to tarnish actual scientific information, the scientists who produce it, and the outlets that present it.


 

Jon Entine is the founder and executive director of the Genetic Literacy Project, author of 7 books and winner of 19 major journalism awards, including two Emmys. Twitter: @JonEntine

Kevin M. Folta is a professor in the Horticultural Sciences Department at the University of Florida. Twitter: @kevinfolta

A version of this article was originally posted at Talking Biotech and is reposted here with permission. Find Talking Biotech on Twitter @talkingbiotech

The Talking Biotech podcast, produced by Kevin Folta, is available for listening or subscription:

Apple Podcasts | Android | Email | Google Podcasts | Stitcher | RSS | Player FM | Pod Directory | TuneIn

This article previously appeared on the GLP Jun 29, 2021.

GLP Facts and Fallacies Podcast and Video: ‘Industrial’ farming unsustainable? Junk science and academic freedom; Oxalate, the new dietary bogeyman

Are our current farming practices unsustainable? If so, how do we make them sustainable? Academic freedom enables researchers to pursue their work unencumbered by outside influence, but it's also abused by activist academics who promote unscientific ideas. How do we protect well-meaning scientists without allowing fringe voices to promote nonsense? Alternative health proponents have a new villain in their sights: a compound found naturally in plants called oxalate. Is it really as bad as they claim? Nope.



Join geneticist Kevin Folta and GLP contributor Cameron English on episode 200 of Science Facts and Fallacies as they break down these latest news stories:

In order to feed a growing global population, we need to increase crop yields while reducing inputs and the environmental footprint of agriculture, says risk assessment expert Dr. David Zaruk. The Brussels-based researcher has advanced a 10-point plan that he says will help policymakers avoid “an increasing number of famines, food insecurity, migration and social strife” that could result if leaders in the European Union and elsewhere continue to promote large-scale organic farming, which is embraced for the sake of virtue signaling, not science. One important question remains: will the plan actually work?

Here’s the dilemma: Academic freedom ensures that scientists can pursue their research wherever the data leads. Unfortunately, this concept has been abused by activists with university appointments, who use their institution’s credibility to promote outright harmful ideas. Paradoxically, some schools have refused to defend researchers who do good work while allowing fringe voices in the academy to vocally deny the benefits of vaccination and proclaim that innocuous pesticides cause autism, among other scientifically dubious assertions. Is there a way to preserve academic freedom and prevent activist academics from spreading nonsense?


Gluten, carbs, sugar, sucralose, fat, salt, dairy and so many more. The list of foods and nutrients in them that supposedly do us harm seems endless. Now alternative health gurus such as Joe Mercola want to add another dietary bogeyman into the mix: oxalate. A naturally occurring compound in plants, oxalate plays an important role in the formation of kidney stones. Patients prone to kidney stones are sometimes encouraged to avoid oxalate-rich foods as a result; however, there is very little evidence linking high-oxalate foods to the litany of health conditions it’s now blamed for causing.

Listen to the podcast here: https://tinyurl.com/33kj5wvh

Kevin M. Folta is a professor, keynote speaker and podcast host. Follow Professor Folta on Twitter @kevinfolta

Cameron J. English is the director of bio-sciences at the American Council on Science and Health. Visit his website and follow ACSH on Twitter @ACSHorg

Part II: Nature is complex — Rewilding offers promising ecological benefits, but it is not the panacea its proponents contend — and can cause harm


The rewilding movement, despite its proponents' optimistic hopes, poses complications. A 2018 article in the UK's The Guardian reported:

A scheme to rewild marshland east of Amsterdam has been savaged by an official report and sparked public protest after deer, horses and cattle died over the winter.

In a blow to the rewilding vision of renowned ecologists, a special committee has criticised the authorities for allowing populations of large herbivores to rise unchecked at Oostvaardersplassen, causing trees to die and wild bird populations to decline.

Nature can be unpredictable, often foiling the best of intentions. And rewilding experiments gone awry are only a fraction of the controversial issues raised by this movement. According to skeptics, it is chipping away at rural living and at the food production in rural areas that many countries rely upon. As rural agricultural areas increasingly succumb to suburbia, the push to restore natural habitats often conflicts with the preservation of farming and ranching lands, and of the activities upon them, which is equally if not more important.

This is the second part of a two-part series. Read part one here.

There is an essential fuzziness to the very notion of rewilding. Are the bears, wolves and wild boar lauded by Italy’s present-day conservationists more or less natural than the horses, giant deer, elephants and rhinoceroses (not to mention Neanderthals) of San Felice Circeo c. 50,000 years ago?  What species represent a truly natural, rewilded Italy? 

If ecological arguments can be presented for deer and boar, say, couldn’t similar justifications be used for the reintroduction of elephants and rhinoceroses? And if wolves and bears are accepted as part of Italy’s natural environment, why not other original large carnivores like hyenas?  (Proponents of deextinction, meanwhile, even advocate using genetic technology to resurrect extinct species such as aurochs and mammoths as a means to recreate lost ecosystems).

This then raises yet another issue, of the interaction between human beings and potentially dangerous wild animals. As the “Wildlife Comeback in Europe” report acknowledges, “Living alongside these species can bring tensions and conflict, particularly among those who perceive or experience an elevated risk to their personal safety”. 


This is not just about (often overblown) fears of "lupi di notte" (wolves in the night) or wolf attacks on humans. Italian farmers already complain about the threat of swelling deer populations to their livelihoods, both through eating crops and by attracting wolves that, they claim, would soon turn to easier domesticated prey. Certainly, reported attacks on livestock have increased in line with wolf numbers (a continent-wide problem that recently touched even the EU president, whose prized pony was mauled by a wild wolf in January).

Italy’s exploding wild boar population, meanwhile, has resulted in nearly 2.5 million of these animals now roaming the countryside — and, increasingly, the towns. Over 20,000 wild boar are thought to reside in Rome alone, with “multiple cases of porcine aggression toward people” recorded. This being the case, the standard “nature knows best” defence of rewilding looks increasingly simplistic.

Credit: Wanted in Rome

The cultural and economic costs of rewilding

Let’s return to San Felice Circeo. Snobbery notwithstanding, the modern town’s luxury yachts, villas and mass market tourism exemplify the crass consumerism often seen as the root of the modern world’s most pressing problems, from widening inequality to resource depletion to the climate crisis. Indeed, Italy itself is often seen as synonymous with other symbols of materialism: food, fashion and fast cars. And, of course, the vacuousness of consumer capitalism is also a key contrast drawn in many conservationists’ rose-tinted portrayal of nature.

Just inland from San Felice Circeo, however, another facet of modern Italian life can also be discerned. From the medieval walls of hilltop villages such as Maenza or Roccasecca dei Volsci, the island-like outline of Monte Circeo dominates the western horizon (“dei Volsci” recalls the Volsci people who inhabited the region in pre-Roman times.) Yet these ancient villages’ maze-like lanes and alleys are not dominated by the same gaudy consumerist status symbols — SUVs and the like — as San Felice Circeo. Instead, these old towns, like so many throughout the country, are slowly dying, with their centuries-old olive groves largely abandoned and “In vendita” (“For sale”) signs plastering the decaying houses. 

Credit: Author. Roccasecca dei Volsci with Monte Circeo in the distance

Thus, just as the wolf is the poster child of resurgent nature, San Felice Circeo’s hinterland is the sad face of rewilding’s reluctant twin — rural depopulation. A way of life stretching back beyond Roman times is quietly fading. And little wonder. These quaint old villages are not equipped for modern living; there are no jobs for the young and the steep narrow streets are no good for the old. 

Of course, it’s as easy to sentimentalize rustic village life — something that bargain home-hunters, drawn by Italian village house prices as low as €1, may find to their personal if not financial cost — as it is to over-romanticize nature. With rewilding, the emotional appeal is particularly strong: as one typical account puts it,

For many conservationists … rewilding is as much an activity of the heart as of the land.

Yet there are economic consequences to the ‘return of nature’ movement. Opposition to rewilding is strongest in rural agricultural communities. Although many farmers support rewilding in limited terms, it can pose real dangers — not only to their livelihoods but to global food security. Many believe that we cannot afford to sacrifice food production for what they see as a largely romantic movement, especially when the United Nations anticipates that global food demand will increase by 100% by 2050. In North America, for example, rewilding efforts have led to a surge in apex predator populations, with increasing attacks on livestock and humans as well as devastation of wild herd animals like elk and deer.

Credit: Bobby-Jo Photography via University of New South Wales

Another complication is the mixed blessing of increased ecotourism. Newly emerging wilderness areas create a lucrative market for eco-tourism. In Italy, for instance, rewilding advocates claim that such nature tourism also helps “people, previously struggling to be able to remain in their villages …, [to find] new, additional or alternative sources of income from wildlife, wild values and wild nature”. Everyone, according to this claim, is a winner.

If only it were that simple. While ecotourism does have growing potential, it is wishful thinking to believe it could replace labor-intensive traditional rural economies or slow or reverse plummeting population trends. Nature tourism will remain a niche market, and will mostly benefit those wealthy enough to afford it.

Tellingly, similar optimism about a tourist boom surfaced in San Felice Circeo after the discovery of the latest haul of Neanderthal remains. The anticipated influx of scientifically-minded visitors, however, has thus far failed to materialize, with Guattari Grotto fenced off and Hotel Neanderthal now closed and up for sale. The town’s attraction, it seems, is still for those seeking sun, sea and sand, and not science.

Credit: Author. Hotel Neanderthal is up for sale

In addition, while nature tourism sells itself as an eco-friendly alternative to the shallow materialism of mass tourism, it’s firmly part of the same ecologically questionable holiday industry; eco-tourists still travel on the same airplanes and highways as their plebeian counterparts. There’s also a whiff of virtue signaling and social elitism, with affluent travellers serviced by a suitably rustic peasantry (the former, not the latter, having the freedom to lead more varied lives elsewhere).

The (human) value of nature 

While the reality may puncture the romantic bubble of rewilding, it says little about whether the rewilding of Europe and the accompanying decline of rural communities are more positive than negative. Here’s where Italy’s long history and prehistory provide a useful perspective. Was the demise of San Felice Circeo’s Neanderthals a good thing or a bad thing? What about the rise and fall of Rome?

All we can really say is, they happened. From this perspective, the only constant is change: societies change, ecosystems change, life changes. While traditionalists and romantic rewilders might believe otherwise, there is no real “natural” order. As the Darwinian explanation of life makes clear, nature — of which we humans are an integral part — is in constant adaptive flux.

Unlike San Felice Circeo’s Neanderthals, however, or its Roman and medieval citizens, modern societies are no longer simply the hapless victims of circumstance. We are, if as yet imperfectly, more and more able to predict and control the consequences of our actions and to weather unexpected and hitherto devastating events, such as disease pandemics. True, rural depopulation and Europe’s consequent rewilding perhaps cannot be stopped — at least not without costly and likely unsustainable intervention; but this does not mean that we cannot manage the process in ways that bring the most benefit to the most people. Here eco-tourism could play a useful part, inasmuch as it genuinely provides worthwhile and fulfilling employment.

While the full political, social and economic ramifications of rural depopulation/rewilding are beyond the scope of this essay, being pragmatic and realistic is a good place to start. Rewilding may indeed be good for nature (however that is defined) and is likely good for humans, not least through the mental health benefits of access to natural environments and its role in mitigating the long-term negative consequences of climate change. But it also has downsides, especially in its connection to the demise of traditional rural communities and the disruption of family and social lives.

Nor does nature know best. If humans and wildlife are to co-exist without conflict, nature needs to be managed. Nature also doesn’t care whether wolves and boar or hyenas and Neanderthals roam the Italian peninsula; species extinction is as natural as species expansion. The only species with the sensitivity and intelligence to care is our own, Homo sapiens.

Being pragmatic and realistic would emphasize the functional or practical value of rewilding (how it benefits us humans) rather than nature’s supposed intrinsic worth. This runs counter to broad environmentalist thinking; as some rewilders argue, “rewilding … has grown in reaction against predominant ego, or anthropocentric human values, which place the self or humans above all others. Instead, rewilding promotes ecocentrism … [and] acknowledges, like biocentrism, other species’ intrinsic value”. Yet this is an emotional human claim; to underscore the point made above, amoral nature simply does not care.

And it is a claim that is difficult to defend in conservationists’ own terms; rewilding makes much of the return of native or pre-existing species, but on what grounds (other than instrumental ones) are these intrinsically more valuable than introduced or invasive species?

Or consider the early humans of San Felice Circeo from nature’s amoral perspective. Neanderthal victims of hyena predation were no more or less valuable than any of the other species butchered and consumed in Grotto Guattari. But this puts the lives and deaths of intelligent, conscious human beings on a par with those of horses or deer — an inhumane and misanthropic vision that appears worse than the anthropocentric “humans above all others” alternative. True, this too is an emotional point, but it is based on the fact that humans, through the natural process of evolution, have developed imaginations and desires and fears, and hence a greater capacity for both enjoyment and suffering than any other species.

Pragmatic natural coexistence

Even if we place human needs above those of other species — which would include parasites, disease organisms and “all things sick and cancerous” — an anthropocentric perspective doesn’t mean that nature need ‘suffer’ (not that ‘nature’ can actually suffer; ironically, that’s an anthropocentric idea). Italian eco-tourism and nature tourism in general, despite limitations, are examples of the potential for mutually beneficial human/nature coexistence. 

Eco-modernists take this further, arguing that “both human prosperity and an ecologically vibrant planet are not only possible, but also inseparable” (although such “an optimistic view toward human capacities and the future” attracts strident criticism from established conservationists).

A more realistic perspective would indeed ‘re-center’ human beings — the very definition of anthropocentrism. If we cut through rewilding’s romanticism and wishful thinking, the crucial factor is not the intrinsic benefit of wilderness but that flourishing nature can help bring about and enhance flourishing human lives. 

The Neanderthals of Monte Circeo lived (and died) in a state of nature. Imagine if these prehistoric humans were transported through time and space to modern San Felice Circeo. What would they think of modern human existence? Would they swap their lives for ours? Would you swap your life for theirs?

Patrick Whittle has a PhD in philosophy and is a freelance writer with a particular interest in the social and political implications of modern biological science. Follow him on his website patrickmichaelwhittle.com or on Twitter @WhittlePM

Carbon tax on farming to reduce carbon emissions? New Zealand is pioneering this new policy. Here’s why its touted benefits may not be a sure thing

Credit: Geograph.org.uk

Agriculture contributes an estimated one-quarter of global greenhouse gas emissions. The economy of New Zealand, a tiny country at the bottom of the world, relies on its vast natural resources such as fishing, forestry and mining, and most critically agriculture; more than 80% of the country’s export earnings come from these primary sectors. So why is it initiating the world’s first agricultural carbon tax, which some say will damage its critical farming sector?

New Zealand has announced it will become the first country globally to put a price on agricultural greenhouse gas emissions. Although the actual price has yet to be determined, effective January 1, 2025 it will begin levying food producers (farmers, who see the charge as a tax).

Is it good policy? Will it set a standard for the rest of the world to emulate? Are farmers being unfairly targeted? What are the potential benefits?

Human activities are responsible for almost all of the increase in greenhouse gases in the atmosphere over the last 150 years. Agriculture is estimated to be among the biggest contributors, at 24%, just behind energy used for electricity and heat. 


New Zealand’s GHG profile is unusual in that half of its emissions come from agriculture. Over 80% of energy generation is already from renewable sources, and the country is heavily reliant on road transport (and hence fossil fuels) for freight. A decrease in agricultural emissions is considered vital for meeting international agreements. Tax proponents maintain that GHG emission reductions spurred by the tax could help make the country carbon neutral by 2050. 

But the tax will have considerable implications for farmers and potentially for the public. Farming is the country’s biggest industry, generating 12% of the gross domestic product. Many economists believe that taxing food emissions will reduce farm income, and impose restrictions around innovation and choice. Taxation might also reduce food production, with implications for the economy and possibly for food security.

Challenging that belief, the government maintains the tax will help the farm economy. Prime Minister Jacinda Ardern said the farm levy would be invested back into the industry to fund new technology, research and incentive payments for farmers, giving them a “competitive advantage … in a world increasingly discerning about the provenance of their food.”


Farmer frustration rife

Farmers here are not happy about the tax. The increased regulations are viewed as nonsensical by many farmers, particularly in the animal sector. They are being asked to reduce emissions, they note, when research has shown that New Zealand farmers lead the world as efficient producers of animal protein from meat and dairy products on a per kg of product basis. A 2021 study found that New Zealand dairy has the lowest carbon footprint of 18 countries examined, including the US, Denmark and the Netherlands.


A life cycle assessment study of meat released in 2022 found that the Kiwi on-farm footprint was about half the average of the other countries studied. Other countries are catching up, and nobody is resting on their laurels in the effort to reduce environmental impact, but it is difficult to do much better when you are already at the top, farmers say.

Farm-level greenhouse gas reduction system: Here’s how it will work

Prime Minister Ardern claims that farmers support this tax. When the program was formalized in December, she said: “After listening to farmers and growers through our recent consultation, and engaging over recent months with industry leaders, today we have taken the next steps in establishing a proposed farm-level emissions reduction system as an alternative to the Emissions Trading Scheme.” (The ETS is New Zealand’s carbon trading system, designed to encourage businesses to clean up their act; participating businesses can buy and sell units from each other, and must surrender one NZ Unit for each tonne of carbon dioxide equivalent (CO2-e) they emit.)

The plan calls for the introduction of an emissions levy in 2025 at the ‘lowest price possible’ to achieve a 10% reduction in methane emissions by 2030, compared with 2017. Once the price is decided, a 5-year price pathway will be established, providing certainty to 2030. Money raised will be recycled (minus administration costs) into incentivising good practice, and the agriculture sector will help to oversee the allocation of funds. In addition, the government appears to have agreed to take social, cultural and economic impacts into account when setting the prices.
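To make the mechanics concrete, here is a minimal sketch of the levy arithmetic. Everything in it is a placeholder: the actual prices have not been set, the emission figures are invented, and the split between methane and long-lived gases simply reflects the proposal’s stated intention to price the two separately.

```python
# Purely illustrative: the actual levy prices have not been set, and the
# emission figures below are invented placeholders, not government numbers.

def annual_levy(methane_t_co2e: float, long_lived_t_co2e: float,
                methane_price: float = 0.15, long_lived_price: float = 4.25) -> float:
    """Hypothetical farm-level levy in NZ$.

    Assumes split-gas pricing (methane priced separately from long-lived
    gases such as nitrous oxide), with made-up prices per tonne of CO2-e.
    """
    return methane_t_co2e * methane_price + long_lived_t_co2e * long_lived_price

# A hypothetical dairy farm: 3,000 t CO2-e of methane, 400 t CO2-e of nitrous oxide.
print(f"Annual levy: NZ${annual_levy(3000, 400):,.2f}")
```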

How fair and efficient is the carbon tax?

Prices will be set by the Minister of Agriculture, Damien O’Connor, and the Minister for Climate Change, James Shaw, after advice from the Climate Change Commission. Although the government has indicated that an Oversight Board, which would be skills-based and include Māori membership, will have a role in providing advice on the price, it is not clear how appointments to the board will be made or how much weight will be given to its advice. Some farming leaders are skeptical about what might happen this year.

The current proposal is less punitive than one suggested in October, but farmers remain dubious and are already calculating the cost and considering their options. They note that money taken in a GHG levy (aka tax) won’t be available for developing their businesses, paying down debt, or investing in environmental technologies and plantings.

Another key issue: Will farmers be better off if they sell out to foresters? Forestry for carbon credits has pushed land prices to roughly three times what the same land fetches for meat production. The prospect of the GHG levy may push already stressed farmers further to the financial edge. For some, it will make sense to sell their farms.

Although the agricultural emissions plan is being blamed as the last straw, in reality it is the settings on the Emissions Trading Scheme that are the problem – no other country is so generous with offsetting. Every tonne of carbon dioxide equivalent emitted by a fuel-intensive company can be ‘offset’ by buying trees – hence claims of carbon neutrality.

The ramifications for rural communities across New Zealand are considerable. The generous, still-in-place ETS has resulted in a rapid increase in land (mostly beef and sheep farms) purchased by forestry interests — from 7,000 hectares in 2017 to 52,000 in 2021. These land sales could result in a decrease of one million stock units and the loss of A$245 million annually in export receipts. Rural depopulation, with loss of schools, medical practitioners, shops and support services, is feared.


Activist reaction

Concerns from farmers about survivability are matched by concerns from environmentalists — but they contend that the measures don’t go far enough, especially when it comes to cutting dairy emissions. Greenpeace’s lead agricultural campaigner Christine Rose has stated:

Action to reduce agricultural emissions means tackling the dairy industry – New Zealand’s worst climate polluter – and that means far fewer cows, it means cutting synthetic nitrogen fertiliser, and it means backing a shift to more plant-based regenerative organic farming.

In a radio interview, Rose rejected arguments about food security, claiming regenerative, organic and plant-based farming were a more efficient economic use of land for food production.

But that makes no sense. If it were, farmers would have made the change themselves. But it isn’t, so they haven’t. The farmers’ contentions are supported by decades of research, which has made them low-impact producers. Farmers are left trying to fact-check campaign claims from the likes of Greenpeace. They believe they are not the problem but part of the solution, which includes continuing to produce food efficiently and supporting New Zealand’s economic viability.

“We’re not trying to play a get-out-of-jail-free card,” said Andrew Morrison, chair of Beef+Lamb NZ, one of the 13 partner groups involved in He Waka Eke Noa (HWEN), a partnership of farmers, industry groups, and Māori. “[W]e’re not trying to not take responsibility for our emissions, by the same token we want to faithfully be recognised for the behaviours on farm — like mitigations, inhibitors used, sequestrations etc.”

What does current research show?

Research programs already show that attempts to reduce emissions by reducing animal numbers and cutting synthetic nitrogen have unintended consequences.

On the Canterbury Plains, on the east coast of the South Island, the results of a comparison between conventional farming with 3.7 cows per hectare and regenerative farming with 3.2 cows per hectare are publicly available. Fertilizer inputs were reduced on the regenerative farm, which contributed to a 22% reduction in milk production and a 24% drop in earnings before interest and tax. No differences were reported in emissions per kg of milk solids (each litre of milk is approximately 13% solids), although total emissions were reduced on the regeneratively-managed farm.

Documenting similar findings in the North Island, north of Auckland, the Northland Dairy Development Trust released results of its trials investigating low-emissions farming. Changing from a ‘current’ farm (3.0 cows per hectare) to a low-emissions farm (2.1 cows per hectare and no nitrogen or imported feed) decreased milk production by 38% in the first year. The decrease was associated with lower greenhouse gas emissions per hectare (a 33% reduction in methane and a 47% reduction in nitrous oxide), but per kilogram of milk solids the difference was only 11%.

Further, the low-emissions farm’s operating profit dropped 40%. Modeling indicated that only at NZ$5.00/kg MS, down from the current average milk price of NZ$9.30/kg MS, would the low-emissions and current farming operations be equal – and both would be losing money.
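The Northland figures illustrate a general arithmetic point: when production falls alongside total emissions, the emissions intensity per kilogram of product improves far less than the per-hectare numbers suggest. A minimal sketch of that calculation, using round illustrative figures rather than the trust’s actual data:

```python
# Illustrative numbers only (not the trial data): why a large per-hectare
# emissions cut can shrink to a small cut per kilogram of product.

baseline      = {"milk_solids_kg_per_ha": 1300, "ghg_kg_co2e_per_ha": 10_000}
low_emissions = {"milk_solids_kg_per_ha": 1300 * 0.62,    # production down 38%
                 "ghg_kg_co2e_per_ha":    10_000 * 0.55}  # total emissions down 45%

def intensity(farm: dict) -> float:
    """kg CO2-e emitted per kg of milk solids produced."""
    return farm["ghg_kg_co2e_per_ha"] / farm["milk_solids_kg_per_ha"]

per_ha_cut = 1 - low_emissions["ghg_kg_co2e_per_ha"] / baseline["ghg_kg_co2e_per_ha"]
per_kg_cut = 1 - intensity(low_emissions) / intensity(baseline)

print(f"Emissions per hectare:        down {per_ha_cut:.0%}")   # 45%
print(f"Emissions per kg milk solids: down {per_kg_cut:.0%}")   # about 11%
```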

The picture is no better for meat production. Results from a recent study showed regenerative farms produced 38% less meat and wool than conventional farms, and earnings decreased 55%. Further, each kg of product was associated with 24% more GHG from the regenerative farms than from the conventional farms.

These results explain why farmers are feeling perplexed – they already run efficient systems suited to their soils and climate; tinkering with the system as Greenpeace desires does not deliver the claimed benefits. And if New Zealand reduces food production and other countries with higher-emissions production pick up the slack (emissions leakage), the world will not be better off.

New Zealand as a case study?

The government’s agreement to take more than GHG emissions into account by including social, cultural and economic impacts could be a model for other countries. However, factors such as the small population (5 million people in a country the size of the UK, which has 69 million), dependence on the primary sector for export revenue (over 80% when fisheries and forestry are included with horticulture and agriculture) and lack of agricultural subsidies (which has resulted in an innovative and productive primary sector that follows market signals) mean that applicability to other countries is limited.

The reverse is also true. Regenerative, organic and plant-based farming might be a more productive and efficient use of land for food production in some areas of some countries, but not in New Zealand, where animals graze pasture on land unsuitable for crops, in systems made efficient by decades of research. Reducing the number of animals results in less efficient production, which reduces income as well as food, along with the emissions associated with that food. This is a lose:lose:lose outcome.

New Zealand North Island beef and sheep grazing… Credit: Ravensdown

What’s next?

Advances in research focused on New Zealand pasture-based systems are already being rolled out. Low-methane sheep, for instance, will be introduced into flocks before 2030, and low-methane bulls able to pass the trait to their progeny have been identified in research. Methane-reducing boluses (slow-release capsules that a cow swallows, already used successfully to deliver micro-nutrients where deficiencies require it) are being developed, and research on vaccines continues.

Meanwhile, farmers are being encouraged to keep making their production ever more efficient, as they are fully aware that their customers, such as Nestlé, Danone, McDonald’s and Unilever, expect New Zealand producers to assist them in meeting their emission reduction goals.

The penultimate word from the Prime Minister: 

Our shared goal is supporting farmers to grow their exports, reduce emissions, and maintain our agricultural sector’s international competitive edge into the future. By continuing to work through our different positions together, we move closer to achieving long term consensus on a plan that works.

But the question remains – will the shared goal enable good policy?

Dr Jacqueline Rowarth, Adjunct Professor Lincoln University, is a farmer-elected director of DairyNZ and Ravensdown, and a producer-appointed director of Deer Industry NZ. She thanks colleagues, including the editor, for critiques; the resulting analysis and conclusions are her own. [email protected]

Jon Entine is the founding executive director of the Genetic Literacy Project, and winner of 19 major journalism awards. He has written extensively in the popular and academic press on media ethics, corporate social responsibility, sustainability, and agricultural and population genetics. You can follow him on Twitter @JonEntine

‘Like turning a golf ball into string’: Making meat substitutes is not easy

If you’re an environmentally aware meat-eater, you probably carry at least a little guilt to the dinner table. The meat on our plates comes at a significant environmental cost through deforestation, greenhouse gas emissions, and air and water pollution — an uncomfortable reality, given the world’s urgent need to deal with climate change.

That’s a big reason there’s such a buzz today around a newcomer to supermarket shelves and burger-joint menus: products that look like real meat but are made entirely without animal ingredients. Unlike the bean- or grain-based veggie burgers of past decades, these “plant-based meats,” the best known of which are Impossible Burger and Beyond Meat, are marketed heavily toward traditional meat-eaters. They claim to replicate the taste and texture of real ground meat at a fraction of the environmental cost.

If these newfangled meat alternatives can fill a large part of our demand for meat — and if they’re as green as they claim, which is not easy to verify independently — they might offer carnivores a way to reduce the environmental impact of their dining choices without giving up their favorite recipes.

That could be a game-changer, some think. “People have been educated a long time on the harms of animal agriculture, yet the percentage of vegans and vegetarians generally remains low,” says Elliot Swartz, a scientist with the Good Food Institute, an international nonprofit organization that supports the development of alternatives to meat. “Rather than forcing people to make behavior changes, we think it will be more effective to substitute products into their diets where they don’t have to make a behavior switch.”

There’s no question that today’s meat industry is bad for the planet. Livestock account for about 15 percent of global greenhouse gas emissions both directly (from methane burped out by cattle and other grazing animals and released by manure from feedlots and pig and chicken barns) and indirectly (largely from fossil fuels used to grow feed crops). Indeed, if the globe’s cattle were a country, their greenhouse gas emissions alone would rank second in the world, trailing only China.

Worse yet, the United Nations projects that global demand for meat will swell by 15 percent by 2031 as the world’s increasing — and increasingly affluent — population seeks more meat on their plates. That means more methane emissions and expansion of pastureland and cropland into formerly forested areas such as the Amazon — deforestation that threatens biodiversity and contributes further to emissions.

Not all kinds of meat animals contribute equally to the problem, however. Grazing animals such as cattle, sheep and goats have a far larger greenhouse gas footprint than non-grazers such as pigs and chickens. In large part that’s because only the former burp methane, which happens as gut microbes digest the cellulose in grasses and other forage.

Pigs and chickens are also much more efficient at converting feed into edible flesh: Chickens need less than two pounds of feed, and pigs need roughly three to five pounds, to put on a pound of body weight. (The rest goes to the energy costs of daily life: circulating blood, moving around, keeping warm, fighting germs and the like.) Compare that to the six to 10 pounds of feed per pound of cow.

As a result, the greenhouse gas emissions of beef cattle per pound of meat are more than six times those of pigs and nearly nine times those of chicken. (Paradoxically, grass-fed cattle — often thought of as a greener alternative to feedlot beef — are actually bigger climate sinners, because grass-fed animals mature more slowly and thus spend more months burping methane.)


Building fake meat

Plant-based meats aim to improve on that dismal environmental performance. Stanford University biochemist Pat Brown, for example, founded Impossible Foods after asking himself what single step he could take to make the biggest difference environmentally. His answer: Replace meat.

Researchers trawled through the scientific literature to find every available study measuring the greenhouse-gas footprint of meats and meat alternatives. Beef is by far the most emissions-heavy option, while plant-based meats and plant foods generally are linked to much lower levels of greenhouse gas emissions for production of a given quantity of protein. In the chart, (n) refers to the number of studies for each category of protein.

To do that, Impossible and its competitors basically deconstruct meat into its component parts, then build an equivalent product from plant-based ingredients. The manufacturers start with plant protein — mostly soy for Impossible, pea for Beyond, and potato, oat or equivalent proteins for others — and add carefully selected ingredients to simulate meat-like qualities. Most include coconut oil for its resemblance to the mouthfeel of animal fats, and yeast extract or other flavorings to add meaty flavors. Impossible even adds a plant-derived version of heme, a protein found in animal blood, to yield an even more meat-like appearance and flavor.

All this requires significant processing, notes William Aimutis, a food protein chemist at North Carolina State University, who wrote about plant-based proteins in the 2022 Annual Review of Food Science and Technology. Soybeans, for example, are typically first milled into flour, and then the oils are removed. The proteins are isolated and concentrated, then pasteurized and spray-dried to yield the relatively pure protein for the final formulation. Every step consumes energy, which raises the question: With all this processing, are these meat alternatives really greener than what they seek to replace?

Credit: Natalie Rubio et al.

To answer that question, environmental scientists conduct what’s known as a life cycle analysis. This involves taking each ingredient in the final product — soy protein, coconut oil, heme and so forth — and tracing it back to its origin, logging all the environmental costs involved. In the case of soy protein, for example, the life cycle analysis would include the fossil fuels, water and land needed to grow the soybeans, including fossil fuel emissions from the fertilizer, pesticides and transportation to the processing plant. Then it would add the energy and water consumed in milling, defatting, protein extraction and drying.

Similar calculations would apply to all the other ingredients, and to the final process of assembly and packaging. Put it all together, and you end up with an estimate of the total environmental footprint of the product.
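As a rough illustration of how such an analysis aggregates, here is a toy tally in which the recipe, the per-kilogram footprints and the processing overhead are all invented placeholders, not figures from any actual product study:

```python
# A toy life cycle tally. The recipe and the per-kg footprints are invented
# for illustration; they are not figures from any actual product study.

recipe_kg_per_kg_product = {
    "soy protein concentrate": 0.25,
    "coconut oil": 0.08,
    "potato starch": 0.05,
    "flavorings and binders": 0.04,
    "water": 0.58,
}

# Hypothetical cradle-to-factory-gate footprints, kg CO2-e per kg of ingredient.
ingredient_footprint = {
    "soy protein concentrate": 2.4,
    "coconut oil": 3.1,
    "potato starch": 0.6,
    "flavorings and binders": 1.5,
    "water": 0.0,
}

assembly_and_packaging = 0.9  # kg CO2-e per kg of finished product (assumed)

total = assembly_and_packaging + sum(
    qty * ingredient_footprint[name] for name, qty in recipe_kg_per_kg_product.items()
)
print(f"Estimated footprint: {total:.2f} kg CO2-e per kg of product")
```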

Plant-based meats are highly processed products in which proteins, fats, starches, thickeners, flavoring agents and other ingredients are mixed and formed into foods that resemble traditional meat products such as burgers, hot dogs and chicken nuggets.

Unfortunately, not all those numbers are readily available. For many products, especially unique ones like the new generation of plant-based meats, product details are secrets closely held by the companies involved. “They will know how much energy they use and where they get their fat and protein from, but they will not disclose that to the general public,” says Ricardo San Martin, a chemical engineer who codirects the Alternative Meats Lab at the University of California, Berkeley. As a result, most life cycle analyses of plant-based meat products have been commissioned by the companies themselves, including both Beyond and Impossible. Outsiders have little way of independently verifying them.

Even so, those analyses suggest that plant-based meats offer clear environmental advantages over their animal-based equivalents. Impossible’s burger, for example, causes just 11 percent of the greenhouse gas emissions that would come from an equivalent amount of beef burger, according to a study the company commissioned from the sustainability consulting firm Quantis. Beyond’s life cycle analysis, conducted by researchers at the University of Michigan, found their burger’s greenhouse gas emissions were 10 percent of those of real beef.

Indeed, when independent researchers at Johns Hopkins University decided to get the best estimates they could by combing through the published literature, they found that in the 11 life cycle analyses they turned up, the average greenhouse gas footprint from plant-based meats was just 7 percent of beef for an equivalent amount of protein. The plant-based products were also more climate-friendly than pork or chicken — although less strikingly so, with greenhouse gas emissions just 37 percent and 57 percent, respectively, of those for the actual meats.

Similarly, the Hopkins team found that producing plant-based meats used less water: 23 percent that of beef, 11 percent that of pork and 24 percent that of chicken for the same amount of protein. There were big savings, too, for land, with the plant-based products using 2 percent that of beef, 18 percent that of pork and 23 percent that of chicken for a given amount of protein. The saving of land is important because, if plant-based meats end up claiming a significant market share, the surplus land could be allowed to revert to forest or other natural vegetation; these store carbon dioxide from the atmosphere and contribute to biodiversity conservation. Other studies show that plant-based milks offer similar environmental benefits over cow’s milk (see Box).

Researchers compared the amount of land needed to produce a given amount of protein for meat, plant-based meat and plant foods. Once again, beef towers above the rest, largely because grazing animals need a lot of land to forage. Plant foods are shown to require more land than plant-based meats, but this difference is not meaningful because the estimates for plant foods include crops grown in low-yielding countries, while plant-based meats rely on ingredients grown under high-yield conditions.

A caution on cultivation methods

Of course, how green plant-based meats actually are depends on the farming practices that underlie them. (The same is true for meat itself — the greenhouse gas emissions generated by a pound of beef can vary more than tenfold from the most efficient producers to the least.) Plant-based ingredients such as palm oil grown in plantations that used to be rainforest, or heavily irrigated crops grown in arid regions, cause much more damage than more sustainably raised crops. And cultivation of soybeans, an important ingredient for some plant-based meats, is a major contributor to Amazon deforestation.

However, for most ingredients it seems likely that even poorly produced plant-based meats are better, environmentally, than meat from well-raised livestock. Plant-based meats need much less soy than would be fed to actual livestock, notes Matin Qaim, an agricultural economist at the University of Bonn, Germany, who wrote about meat and sustainability in the 2022 Annual Review of Resource Economics. “The reason we’re seeing deforestation in the Amazon,” he explains, “is because the demand for food and feed is growing. When we move away from meat and more toward plant-based diets, we need less area in total, and the soybeans don’t necessarily have to grow in the Amazon.”

But green as they are, plant-based meats have a few hurdles to clear before they can hope to replace meat. For one thing, plant-based meats currently cost an average of 43 percent more than the products they hope to replace, according to the Good Food Institute. That helps to explain why plant-based meats account for less than 1 percent of meat sales in the US. Advocates are optimistic that the price will come down as the market develops, but it hasn’t happened yet. And achieving those economies of scale will take a lot of work: Even growing to a mere 6 percent of the market will require a $27 billion investment in new facilities, says Swartz.

Steak hasn’t yet been well done

In addition, all of today’s plant-based meats seek to replace ground-meat products like burgers and chicken nuggets. Whole-muscle meats like steak or chicken breast have a more complex, fibrous structure that the alt-meat companies have not yet managed to mimic outside the lab.

Part of the problem is that most plant proteins are globular in shape, while real muscle proteins tend to form long fibers. To form a textured meat-like product, scientists essentially have to turn golf balls into string, says David Julian McClements, a food scientist at the University of Massachusetts, Amherst, and an editor of the Annual Review of Food Science and Technology. There are ways to do that, often involving high-pressure extrusion or other complex technology, but so far no one has a whole-muscle product ready for market. (A fungal product, sold for decades in some countries as Quorn, is naturally fibrous, but its sales have never taken off in the US. Other companies are also working on meat substitutes based on fungal proteins.)

The environmental impact of the two leading plant-based burgers, from Impossible Foods and Beyond Meat, is much less than a comparable beef burger, according to detailed studies commissioned by the two companies. Other experts note that these studies are difficult to verify independently because they rely on proprietary information from the companies.

McClements is experimenting with another approach to make plant-based bacon: creating separate plant-based analogs of muscle and fat, then 3D-printing the distinctive marbling of the bacon. “I think we’ve got all the elements to put it together,” he says.

Some critics also note that a shift toward plant-based meat may reinforce the industrialization of global food systems in an undesirable way. Most alternative meat products are formulated in factories, and their demand for plant proteins and other ingredients favors Big Agriculture, with its well-documented problems of monoculture, pesticide use, soil erosion and water pollution from fertilizer runoff. Plant-based meats will reduce the impact of these unsustainable farming practices, but they won’t eliminate them unless current farming practices change substantially.

Of course, all the to-do about alternative meats overlooks another dietary option, one with the lowest environmental footprint of all: Simply eat less meat and more beans, grains and vegetables. The additional processing involved in plant-based meats means that they generate 4.6 times more greenhouse gas than beans, and seven times more than peas, per unit of protein, according to the Hopkins researchers. Even traditional, minimally processed plant protein such as tofu beats plant-based meats when it comes to greenhouse gas. Moreover, most people in wealthy countries eat far more protein than they need, so they can simply cut back on their protein consumption without seeking out a replacement.

Relative footprint of different foodstuffs. Credit: Economist

But that option may not appeal to the meat-eating majority today, which makes alternative meats a useful stopgap. “Would I prefer that people were eating beans and grains and tofu, and lots of fruits and vegetables? Yes,” says Bonnie Liebman, director of nutrition at the Center for Science in the Public Interest, an advocacy organization supporting healthy eating.

“But there are a lot of people who enjoy the taste of meat and are probably not going to be won over by tofu. If you can win them over with Beyond Meat, and that helps reduce climate change, I’m all for it.”

Plant-based milks

Meat isn’t the only source of animal protein with a high environmental cost. Dairy, too, causes large emissions of greenhouse gas from cud-chewing cows and sheep, and from growing feed. Here, too, plant-based alternatives, many of which are already mainstream options in the grocery store, may be an environmentally friendlier alternative — in some ways, at least.

Just how much friendlier they are, though, depends on how you measure their footprint. One option is to express environmental costs per quart of milk. By that measure, all plant-based milks shine. Soy milk, for example, requires just 7 percent as much land and 4 percent as much water as real milk, while emitting only 31 percent as much greenhouse gas. Oat milk needs 8 percent of the land and 8 percent of the water, while releasing just 29 percent as much greenhouse gas. Even almond milk — often regarded as a poor choice because almond orchards guzzle so much fresh water — uses just 59 percent as much water as real milk.

But not all plant-based milks deliver the same nutrient punch. While soy milk provides almost the same amount of protein as cow’s milk, almond milk provides only about 20 percent as much — an important consideration for some. On a per-unit-protein basis, therefore, almond milk actually generates more greenhouse gas and uses more water than cow’s milk.
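The choice of denominator drives that reversal. A minimal sketch of the arithmetic, using rough placeholder values rather than measured data for any product:

```python
# Rough placeholder values, not measured data for any product: the point is
# only how the choice of denominator changes the comparison.

milks = {
    #          (kg CO2-e per litre, g protein per litre)
    "cow":     (3.2, 34),
    "soy":     (1.0, 33),
    "almond":  (0.7, 7),
}

for name, (ghg_per_litre, protein_g_per_litre) in milks.items():
    ghg_per_100g_protein = ghg_per_litre / protein_g_per_litre * 100
    print(f"{name:6s}  {ghg_per_litre:.1f} kg CO2-e/litre   "
          f"{ghg_per_100g_protein:.1f} kg CO2-e per 100 g protein")
```

With these placeholder numbers, almond milk looks best per litre but worst per unit of protein, which is the pattern the paragraph above describes.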

Bob Holmes is a science writer in Edmonton, Canada. Find him at his website.

A version of this article was originally published at Knowable Magazine and has been republished here with permission. Sign up for their newsletter here. Knowable can be found on Twitter @KnowableMag

It affects 50 million Americans, and for now it’s incurable. Here’s what we know about the ear ringing disorder tinnitus — and its possible links to COVID-19 and vaccines

This past March, the CEO and founder of the Texas Roadhouse steakhouse chain, Kent Taylor, committed suicide. According to family and friends, he had no history of depression. But there was an exacerbating factor — after recovering from a bout with COVID-19, Taylor suffered from severe tinnitus, an internal ringing sound in the ears. It’s a mysterious disorder that plagues tens of millions of Americans and many more worldwide.

His doctors do not know for sure, but the timing points to a possible link with COVID-19. After recovering from a mild case of the coronavirus last November, Taylor developed the tell-tale persistent ringing in his ears. It became so distracting that he had trouble reading or concentrating, and gradually grew worse. He was unable to sleep more than two hours a night.

Kent Taylor. Credit: William Deshazer/Business First

In early March, Kent met friends at his home in Naples, Florida, and led them on a yacht cruise in the Bahamas. They hoped he was finally getting better. Then his tinnitus “came screaming back in his head,” said Steve Ortiz, a longtime friend and former colleague. Soon after, he committed suicide.

“Stress can dramatically increase Tinnitus distress, and the pressure Mr. Taylor must have felt due to the COVID pandemic must have been immense,” said John Hoglund, a hearing instrument specialist and founder of Hoglund Family Hearing and Audiology Center, in an email. Nearly 1 in 5 people who got COVID-19 was diagnosed within three months with a psychiatric disorder such as anxiety, depression or insomnia, and many of those cases were linked to tinnitus.

Taylor’s case is not an anomaly. Thousands of cases of tinnitus have been linked to either COVID or the vaccines, although there is no hard proof, at this juncture, of anything more than association. Within a few hours of receiving his second COVID-19 mRNA vaccine dose in February, Gregory A. Poland, MD, a vaccinologist and director of the vaccine research group at the Mayo Clinic, knew something was not right. His ears started ringing relentlessly. He’s now one of thousands of people who have developed the condition following COVID-19 vaccination. According to discussions in a Facebook support group, many developed it within a day of receiving the vaccine.

Late in October 2020, Paula Wheeler, who lives in central Kentucky, came down with a severe case of COVID-19 — high fever, pneumonia, and time in the hospital, and week after week of rolling symptoms. Within a few months, she developed tinnitus.


The ringing in her ears is a bit like the buzz of an old-time TV, Wheeler says. It’s unrelenting. “I’ve got that in my head in both ears all the time.”


Taylor and Wheeler aren’t alone in experiencing tinnitus after recovering from COVID-19 or, in some cases, after getting a shot. The sheer volume of newly reported cases is concerning. In a systematic review of hearing-related symptoms post-coronavirus, nearly 15 percent of patients reported tinnitus, although the study authors note this figure may be an overestimate. As of May, the U.S. Vaccine Adverse Event Reporting System (VAERS) database had documented 1,486 cases of tinnitus following vaccination with any brand, equivalent to roughly 1% of the adverse-event reports submitted to the system, not of vaccine recipients overall.

Widespread effects

Although tinnitus affects as many as 50 million Americans, according to the American Tinnitus Association, it is a chronic condition that we rarely hear about in the mainstream press. The list of sufferers is far from fully documented, as many people don’t openly discuss their condition.

Many notable individuals besides Taylor have experienced tinnitus, including well known stars such as Whoopi Goldberg, William Shatner, and David Letterman.

Shatner’s tinnitus began while he was filming the 1967 Star Trek episode “Arena.” “I was standing too close to a special effects explosion and it resulted in tinnitus. There were days when I didn’t know how I would survive the agony. I was so tormented by the screeching in my head I really thought I would not be able to go on!” he recalled.

William Shatner as Captain Kirk in the episode “Arena.” Credit: CBS Photo Archive

Exposure to high volumes of sound often causes both hearing loss and tinnitus. Regular exposure to noise of any sort can contribute, especially for those who work around machinery running for hours each day.

The rock band The Who, which has a Guinness World Record for the loudest concert ever recorded, had to stop touring because lead guitarist Pete Townshend developed severe tinnitus. Ozzy Osbourne, Neil Young, and Eric Clapton have also spoken out about how music has damaged their hearing and left them with challenging tinnitus problems.

On May 31, 1976, The Who played the loudest concert ever recorded. Credit: Ethan Russell

Military personnel are especially vulnerable. Tinnitus is the number one disability affecting veterans, with almost 2 million service members receiving disability compensation for it. A large part of the National Center for Rehabilitative Auditory Research is dedicated to researching treatments and providing tinnitus education for veterans. “When you’re around aircraft generating 110-150 decibels, it’s a lot of background noise when working 15 hour days on deployment,” says Jean-Claude Wicks, a former U.S. Air Force Staff Sergeant, in a video interview.

But as Kent Taylor’s case illustrates, an increasing number of cases are not linked to loud noises, but to other, often unexplained, causes. It’s debilitating for many. Some 45% of tinnitus patients experience anxiety and 33% have major depression. In a large recent Swedish study, some 5-9% of patients with severe tinnitus had attempted suicide. A global pandemic only compounds the suffering for many victims: having symptoms of COVID-19 exacerbated existing tinnitus in 40% of respondents in another recent study.

What causes tinnitus?

Humans have a long history of suffering from tinnitus. The condition has been documented as early as 1600 BCE, in the Egyptian medical manuscript Papyrus Ebers, where it was called “bewitched ear.” It’s likely that the condition preceded this early record. 

Papyrus Ebers. Credit: New York Academy of Medicine

Over the years, proposed treatments from a wide spectrum of research have not shown much efficacy, likely because of the diverse etiology and heterogeneity of the condition. The most common perception is a persistent high-pitched ringing sound in one or both ears that others cannot hear, although tinnitus can manifest differently for different people. 

Common variations include buzzing, frying, hissing, ringing, or even whistling sounds. These phantom noises may be intermittent or continuous. Interestingly, the lexicon used to describe the sounds of tinnitus may vary with the geography and demographic background of the patient, according to tinnitus practitioners from around the country.

Tinnitus is not classified as a disease, but rather as a symptom of one or more underlying conditions. The condition often originates from a damaged inner ear, but the actual sound perception is generated within the brain itself. Tinnitus is therefore very much a neurological disorder, making it complicated to treat.

Tinnitus can also be caused by age-related hearing loss, vascular and circulation issues, cervical-spinal issues, certain audiovestibular conditions such as Meniere’s disease, or, most often, exposure to loud noises. East Tennessee State University audiologist Marc Fagelson explores the condition in a TED Talk.

Comprehensive research on tinnitus began in the middle of the last century, but little was known about the condition, often prompting radical attempted cures. One clinician, Dr. Doug Lewis, recounts that in the early 1980s, a patient with unbearable tinnitus in one ear asked that his auditory pathway be severed for that ear, or he would commit suicide. This involved a complete resection of the acoustic or 8th cranial nerve. The operation was a success (the tinnitus disappeared), but this is very much an outlier case, and it resulted in the complete and permanent loss of hearing in that ear.

Treatment and diagnosis are often a roundabout affair. When a patient first notices the telltale signs of tinnitus, they typically see a local ENT, who looks for potential physical issues inside the ears. The next step is to see an audiologist, who tests the patient for hearing loss and the presence of tinnitus (they often occur together). These specialists can recommend various retraining therapies, cognitive behavioral therapies to reduce the emotional impact of tinnitus, or specialized hearing aids (with built-in tinnitus masking). But at this time, there is no scientifically proven cure for the underlying causes of tinnitus.

Where can people go for help?

Even without a reliable cure, resources from regional centers of excellence can help provide relief to suffering patients. For example, the Cleveland Clinic’s Tinnitus Management Clinic approaches tinnitus with a multidisciplinary team of audiologists, neurologists, dentists, and psychologists. This is an excellent approach, as the source of tinnitus can originate in different areas of the body connected to the auditory pathway. And treating the source of the issue may often provide relief from tinnitus symptoms. 


“There are several counseling and sounds therapy approaches which can be helpful with the reactions to the tinnitus…because the origins of tinnitus are so multivariate, study designs must identify the proper subgroups for analysis,” said Dr. Richard Tyler, a world-leading tinnitus researcher at the University of Iowa. 

One method of research is to focus on treatments for specific causes of tinnitus, such as noise-induced hearing loss. Another approach to finding relevant subgroups is a statistical technique referred to as cluster analysis, which identifies homogeneous subgroups from a common set of variables without making presumptions about which variables are the most important. The cluster analysis methodology helps to facilitate properly designed clinical trials with well-defined criteria and meaningful primary and secondary endpoints, used to indicate trial success or failure.
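As a purely illustrative sketch of what such an analysis looks like, the snippet below clusters made-up patient features (hearing loss, loudness rating, distress score); real studies would use validated questionnaires, audiometric measurements and many more variables.

```python
# Illustrative only: made-up patient features, not real clinical data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Columns: hearing loss (dB), loudness rating (0-10), distress score (0-100).
patients = np.vstack([
    rng.normal([45, 7, 70], [8, 1, 10], size=(50, 3)),  # e.g. noise-exposed, high distress
    rng.normal([15, 4, 30], [5, 1, 10], size=(50, 3)),  # e.g. mild loss, low distress
])

scaled = StandardScaler().fit_transform(patients)  # put features on a common scale
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)

for cluster in range(2):
    means = patients[labels == cluster].mean(axis=0).round(1)
    print(f"Cluster {cluster}: mean hearing loss / loudness / distress = {means}")
```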

Promising research into tinnitus may also come from “Learning Health Networks” (LHNs), a relatively new framework that brings patients, clinicians and researchers together to collaborate more effectively. The framework was originally developed for children with chronic conditions at the James M. Anderson Center for Health Systems Excellence at Cincinnati Children’s Hospital. Following its early growth and success, the LHN framework is now being scaled to work with adults with chronic conditions, including tinnitus.

False cures and false hopes

A plethora of snake oil salesmen have taken advantage of the lack of a cure for tinnitus, promoting unproven therapies and dubious cures. Such “experts” tend to show up on the Internet, in places such as Facebook tinnitus support groups and the Reddit tinnitus community, where people with newly diagnosed tinnitus often begin their search for treatments. Raising false hopes doesn’t pass muster on ethical terms, and unproven treatments can potentially hurt patients even further.

Some unproven therapies include antibiotics, antidepressants, over-the-counter pain medications like Advil and aspirin, cancer drugs, and even diuretics; at higher doses, these medications carry a greater risk of medical problems. Additionally, both high sodium intake and caffeine can make tinnitus symptoms worse.

Two products frequently included in Google search results (no doubt due to effective SEO marketing techniques) are “Tinnitus 911” and “SILENCIL”. Both companies claim that tinnitus is a result of “inflammation in the brain,” and that their cocktail of supplements calms inflammation and therefore reduces or eliminates tinnitus. None of this has ever been proven in clinical studies for Tinnitus 911, according to the British Tinnitus Association. Many user reviews for these products can be linked back to the companies. 

SILENCIL goes a step further by paying for advertisements that appear like legitimate science-based review articles, engaging in unethical marketing techniques. Research shows how creative these scammers are at stealing time and money from desperate tinnitus sufferers. An overwhelming number of buyers at Amazon report negative or no results. One reviewer reports: “I have taken this for the 30-day period and have not seen any decrease in my tinnitus. In fact, it seems worse!”

Most physicians steer away from untested therapies. In a recent survey in the American Journal of Audiology, 71% of participants found the supplements ineffective. Improvement in tinnitus was reported by 19% of patients, while 10% responded that a supplement actually had negative effects. The survey results confirm dietary supplements’ lack of efficacy for tinnitus and are consistent with the findings of previous studies, according to long-time tinnitus researcher Dr. Claudia Barros Coelho.

Unfortunately, the market is flooded with unproven (and expensive) “cures.” Astrophysicist Carl Sagan once said something both simple and profound: “Extraordinary claims require extraordinary proof.” Sharp critical thinking techniques are your strongest defense against misinformation concerning tinnitus and other hearing issues — at least until social media platforms more rigorously separate information from misinformation. In all cases, you should consult a qualified tinnitus practitioner before buying any of these expensive dubious supplements and products for tinnitus.

The COVID factor

For many people with tinnitus, anxiety about catching COVID exacerbates existing symptoms. Rebecca Edgar, 29, has difficulty hearing her toddler when he talks to her from the backseat of her car. Most nights, she struggles to fall asleep, caught in a cycle of worry that the ringing in her ear is getting louder and recognizing that this stress is worsening her tinnitus symptoms. 

“I’ve had a constant high-pitched buzzing in my ear for the past 20 years, but there is no doubt that this is the worst my tinnitus has ever been,” said Edgar, of Essex County in southeast England. “I’m deaf in one ear, and I’m so scared that catching COVID-19 could destroy what’s left of my hearing.” Her fears are felt by many.

Not only can pandemic-related stress make symptoms worse, infections themselves, such as the flu and COVID-19, can cause ear inflammation. In one case study, a 45-year-old British man developed tinnitus and hearing loss in one ear after contracting a severe case of COVID-19. With the help of steroids, his condition did improve slightly. 

Although the link is not clearly defined, growing anecdotal evidence suggests that COVID vaccines can spark tinnitus cases. “People who have a pre-existing condition of hearing loss are more likely affected by COVID or vaccines,” theorizes Dr. Shaowen Bao, an associate professor of neuroscience and physiology at the University of Arizona. According to Bao, peripheral cytokines — protein messengers that immune cells use to communicate — from COVID-19 vaccines could cause or exacerbate tinnitus. 

Other possible explanations include temporary neuroinflammation and stress and anxiety around taking the shot. The general consensus is that the great majority of people won’t develop tinnitus from vaccines. However, it is important to keep a lookout for any emerging trends, as there is no long-term data available at this time.

Mayo Clinic vaccinologist Gregory Poland, himself a tinnitus sufferer after getting COVID vaccines, has speculated on what the vaccines have in common that could touch off or exacerbate the ear ringing.

Gregory Poland. Credit: Illinois Wesleyan University

“The most generous explanation is that these are highly immunogenic vaccines, and they have, compared with other vaccines that we give, high reactogenicity rates,” he said. “So, my hunch is that this is an off-target inflammatory response. … My second hunch is that over time, these off-target inflammatory responses will either disappear completely or considerably diminish over time, which raises a really important question: Do I get a third COVID-19 vaccine dose if those become recommended?”

Finding a cure

Tinnitus research still has a long way to go, and scientists will need to pursue many different paths toward potential cures. In the meantime, tinnitus sufferers should seek professional help only from reputable sources (ENTs, doctors, audiologists, therapists) and view all online “miracle cures” with extreme suspicion. While some therapies can help relieve the symptoms of tinnitus, there is currently no silver-bullet cure for the underlying pathophysiology; for now, we can only manage the symptoms.

Kent Taylor, a retired US Army sergeant, understood how essential evidence-based scientific research is. Before his death, he funded a clinical study to help members of the military who suffer from the condition.

“In true Kent fashion, he always found a silver lining to help others,” his family said in a posting. 

Taylor was a team player in all aspects of life, and it will take a team-oriented approach to ultimately tackle tinnitus and bring relief to the many millions who desperately need it. Tinnitus is a worldwide chronic condition of pandemic proportions.

Jeffery Reagan is a tinnitus research advocate and freelance writer. A tinnitus sufferer himself, he believes that finding better therapies and ultimately discovering a cure for tinnitus rests with evidence-based scientific research and well-designed clinical trials — with an emphasis on patient participation in a learning health network model. Mr. Reagan holds a degree in information technology from the University of Cincinnati, and his IT career has spanned five decades, with a focus on data modeling and analytics. He has set up a support group and information resource, Stop the Ring, and can be reached at [email protected].

This article previously appeared on the GLP on July 20, 2021.

GLP Facts & Fallacies Podcast and Video: Curing ‘incurable’ leukemia? Cowardly corporations; Glyphosate hasn’t tainted school lunches

A new gene-editing technique known as base editing may have helped doctors cure a young girl’s “incurable” cancer. Why are so many companies afraid to stand up to environmental groups that attack their products with scientific misinformation? Is your child’s school lunch tainted with harmful pesticides? There isn’t a shred of evidence behind that allegation.

Join geneticist Kevin Folta and GLP contributor Cameron English on episode 200 of Science Facts and Fallacies as they break down these latest news stories:

T-cell acute lymphoblastic leukemia (T-ALL) is an especially aggressive form of cancer. It progresses quickly, making the disease difficult to treat with standard therapies such as stem-cell transplants, radiation and chemotherapy. Fortunately, a novel gene therapy may give some patients and their doctors the upper hand they need to beat T-ALL. Using a relatively new gene-editing technique called base editing, researchers engineered a young patient’s T-cells to target and attack the cancerous cells. The approach prevents the patient’s immune system from destroying the modified T-cells; it also keeps the cells from attacking each other. So far, the patient, named Alyssa, has responded well: her cancer has not returned six months after she received this CAR-T therapy. That’s encouraging, because experts say that when the disease does come back, it usually does so shortly after a seemingly successful treatment. The fact that it hasn’t may indicate that Alyssa’s cancer has been cured. Of course, only time will tell whether her “incurable” disease is truly gone.

Environmental activist groups have successfully shut many companies out of public discussions about the safety of their products. Using a carefully orchestrated approach employed successfully against tobacco companies in years past, so-called “green groups” have attacked the makers of pesticides, cell phones, GM crops, baby powder and many other useful consumer and industrial products. Oddly, many of the firms targeted in this way have refused to defend themselves in the public square, preferring to settle lawsuits out of court and keep their heads down, hoping the controversy will go away on its own. Does this passive strategy work, or does it actually encourage activists to behave more aggressively?


Infamous anti-GMO activist Zen Honeycutt, founder of Moms Across America, appeared on Food Chain Radio recently to promote a host of falsehoods about the herbicide glyphosate. After linking the weedkiller to cancer, autism and other serious conditions it almost certainly doesn’t cause, Honeycutt alleged that lunches served in America’s public schools are tainted with the “toxic” pesticide. It’s this sort of activism that unnecessarily alarms the public about safe and necessary crop-protection tools. Worst of all, it jeopardizes the well-being of public school students, some of whom depend on school meals to get the nutrition they need.

Kevin M. Folta is a professor, keynote speaker and podcast host. Follow Professor Folta on Twitter @kevinfolta

Cameron J. English is the director of bio-sciences at the American Council on Science and Health. Visit his website and follow ACSH on Twitter @ACSHorg

‘The harder you push, the better you’ll perform’: Here’s how physical activity boosts your brain’s processing power

You have heard it before, but the evidence is now even clearer: physical activity leads to improved performance at school, at least on the national tests in which adolescents are assessed. A new study of more than 2,000 randomly selected Norwegian adolescents confirms this. It also shows that results were even better for pupils whose physical fitness improved, and that taking some time out of lessons to get active had no negative impact.

Both lead to improvement

In the study, 30 different schools added two additional hours of physical activity for adolescents each week. The pupils at ten of the schools participated in teacher-led activities in which a teacher decided what the adolescents would do, while pupils at ten other schools led the activities themselves. The pupils at the final ten schools followed their ordinary schedule and were included for the purpose of comparison.

“The surprising thing for us was that both of the test groups showed improved performance in national tests during the project period,” says Runar Barstad Solberg. He carried out the study in connection with his doctoral degree, together with a large project group.

The study forms part of the ScIM ‘Schools in Motion’ study (link in Norwegian).

Higher intensity, better results

“The findings indicate that adolescents find it easier to learn theory if they are more physically active,” he says.

But, more unexpectedly, the project also shows that, up to a certain point, improved learning is not affected by how intense the activity is. It’s slightly complicated.

“All additional physical activity has an impact, but you will experience even better results if you also improve your physical fitness,” Solberg says.

Adolescents clearly performed better in the tests even when they led the activities themselves. From the ScIM study at Bakkaløkka School on Nesodden. Credit: the Norwegian School of Sport Sciences

Better grades

The adolescents at the 30 schools were tested in Norwegian and maths, and there is no doubt that they got better during the period of increased activity. They were assessed using Norway’s standardised national tests (link in Norwegian).

“The average score on these tests is 50 points. On average, the adolescents in the study improved their results by more than two points, regardless of test group. This change is both clear and positive,” Solberg states.

Two points might not seem like a lot, and it is hard to pinpoint exactly what the change means in practice, but the results are very positive. “This is the same as saying that, on average, everyone improved their grades.”


The cause can be found in the brain

However, the most interesting aspect is that performance increases by nearly as much in both of the test groups, i.e. at the schools where adolescents led their activities themselves and at the schools where activities were teacher-led and conducted at a specific intensity.

It even happened if the self-led group took things more easily.

“This means that it is the physical activity itself that is crucial,” Solberg says.

He and the other researchers behind the ScIM study (see factbox) believe that the causes can be found in the brain, as well as in physical and social aspects. Increased activity triggers mechanisms in the brain that can improve concentration and learning.

Physical breaks, during which pupils can do things they enjoy and work on subjects in different ways, also likely make learning easier in sedentary subjects.

The study shows that in order to achieve the best learning potential, activities need to be intense enough to also improve physical fitness. Credit: Shutterstock/NTB

“It works”

Runar Barstad Solberg and the rest of the project group asked the teachers to report on how well they managed to complete the project each week. He was overwhelmed by the response.

“It has been incredible. The weekly reports from the schools show that both teachers and pupils were able to meet 80 per cent of the maximum target. That is excellent,” he says and explains that many of the teachers who were initially sceptical of the project ended up being convinced.

“This means that the project can be implemented in the real world and that a programme to increase physical activity during the school day can be introduced at other schools and, above all: It shows that physical activity influences results.”

Not without cost

“I guess it’s just a matter of implementing this everywhere? The school day simply becoming more ‘physical’?”

“Well… I’m not sure. The Ministry of Education and Research and the Ministry of Health and Social Care have been keen to support the project from the beginning. But it does not come without cost,” Solberg says.

The 30 schools that participated received NOK 900 per pupil in order to implement the project. In order to scale the project to a national level, the total cost would increase significantly. In addition to extended school hours, school transport would also have to be changed in some areas.

“Of course, one option would be not to extend school hours but to take the time from theoretical lessons instead. But even if theoretical lessons were cut by just ten per cent, we are a long way off making such a change,” he says.

Even so, both local authorities and individual schools frequently get in touch. “It’s fun. We have demonstrated that physical activity is relatively simple to implement and also a positive, and we have figures that show the outcomes. There is no doubt that active pupils learn more effectively, even if time spent on theory is sacrificed.”

Kjetil Grude Flekkøy is a communications advisor.

A version of this article appeared originally at Science Norway and is posted here with permission. Check out Science Norway on Twitter @Sciencenorwayno

‘The Hemsworth Alzheimer’s disease gene’: Revisiting the nature-nurture debate

It’s frightening when your future health seems indelibly determined, and the prospects are not good. That’s what Chris Hemsworth, the star of Thor and Extraction, has been struggling with — what he calls his “biggest fear” — not being able to recognize or remember his loved ones as he ages.  

While working last fall on Limitless, his National Geographic docuseries about prolonging life and combating aging, the 39-year-old actor underwent a genetic test and discovered that he has an elevated risk of developing Alzheimer’s, the most common cause of dementia.

Credit: Journal of Neurology, Neurosurgery & Psychiatry

Hemsworth was found to carry two copies of a gene called APOE4, one each from his father and mother, a major genetic risk factor. Only 2 to 3% of the population have two copies of APOE4, according to a 2021 study by the National Institutes of Health. It’s also associated with early onset, which can happen anytime between someone’s 30s and mid-60s. 

Hemsworth learned of the result during the filming of episode five of his docuseries, when the show’s longevity doctor first told him about the finding off-camera. That “was pretty shocking,” he told Vanity Fair in an interview.

Credit: Disney

Alzheimer’s disease is the single most common cause of dementia, with one in nine people over 65 living with AD globally. It is, however, not all bad news for Hemsworth and others like him: last year was a landmark year for Alzheimer’s research, with two treatments gaining FDA approval.

Furthermore, a recent trial using gene therapy to mitigate the potentially damaging effects of APOE4 gene expression has returned promising early data. The study, labelled “a very provocative, very intriguing approach” by the director of the neuroscience division at the National Institute on Aging, is a collaboration between the Alzheimer’s Drug Discovery Foundation and Weill Medical College of Cornell University. It involved injecting a low dose of gene therapy to promote the production of a protective protein in cerebrospinal fluid.

Administration of the treatment to a small cohort of patients resulted in a marked drop in the main toxic drivers of AD: amyloid and tau. The results were so promising that the trial has now advanced to the next stage.

However, there is another critical factor to consider when we discuss the links between genetics and AD. Alzheimer’s is not a genetic disorder in the same way, say, cystic fibrosis or Huntington’s disease is, where carrying the disease-associated genes always results in manifestation of the illness. Those are called Mendelian or single-gene diseases. 

Alzheimer’s is more complex genetically. Genes play a role, but not just a single gene, and genetics is only part of a larger equation. This is biologically fortunate, because it is estimated that at least 25% of the global population carry at least one copy of the APOE4 gene. If AD worked like Huntington’s, we would have a serious epidemic on the horizon.
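As a rough, illustrative consistency check (my own back-of-the-envelope calculation, not something from the article or the NIH study): if about 25% of people carry at least one APOE4 copy, a simple Hardy-Weinberg calculation (which assumes random mating and is only an approximation for real populations) implies a two-copy frequency close to the 2 to 3% figure cited above.

import math

# Illustrative Hardy-Weinberg sketch (an assumption, not the article's method).
# Let q be the APOE4 allele frequency and p = 1 - q. Carriers of at least one
# copy make up 1 - p^2 of the population; set that equal to the ~25% figure.
carrier_fraction = 0.25                # "at least 25% carry at least one copy"
p = math.sqrt(1 - carrier_fraction)    # non-carrier allele frequency, ~0.866
q = 1 - p                              # implied APOE4 allele frequency, ~0.134
two_copies = q ** 2                    # expected frequency of two copies, ~0.018
print(f"allele frequency ~{q:.3f}, two copies ~{two_copies:.1%}")
# Prints roughly 1.8%, in the same ballpark as the 2 to 3% quoted for two copies.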

So, what else drives the development of AD? And what role do genetic variations in Alzheimer’s patients play in its development? Let’s take a dive into the mechanics of AD’s most common genetic risk factor.


How APOE4 increases AD risk

APOE stands for apolipoprotein E. The protein can be found in various forms in the human body, depending on an individual’s genome, and plays a key role in cholesterol metabolism; specifically, the ‘packaging’ and clearance of cholesterol to maintain normal levels within the body. The E4 variant of the gene gives rise to a form of the protein with impaired function in cholesterol metabolism. This is a critical change because cholesterol metabolism in the brain has to be tightly regulated, as Dr. Allison B. Reiss, an associate professor of medicine at NYU Long Island School of Medicine, explains:

The brain is full of cholesterol and needs cholesterol to develop and produce nerve cells. The balance and transport of cholesterol within the brain are carefully controlled, and lipids are very important in brain function. Most prominent of the lipid-related proteins in the brain is ApoE, a protein that transports lipids in the brain and elsewhere.

If an individual carries two copies of the APOE4 gene, the control of cholesterol levels in the brain can become impaired. The exact mechanism behind how this can lead to the development of Alzheimer’s disease is yet to be fully understood but studies suggest cholesterol metabolism has an impact on one of the two key pathogenic hallmarks of AD: Amyloid beta plaque formation. 

Further light was shed on this mechanism by a University of Virginia School of Medicine study that tied it to a specific cell type in the brain. Astrocytes, the most abundant cell type in the brain, play an important role in maintaining the brain’s biochemical balance. The researchers analyzed the effect of APOE4 mutations on astrocyte behavior, and what they found was compelling.

Astrocytes were found to drive amyloid beta plaque formation by manufacturing excess cholesterol and distributing it to the surrounding brain cells. The increase in surrounding cholesterol triggers an upturn in amyloid beta production, thus driving an acceleration in the formation of amyloid beta plaques. 

“This study helps us to understand why genes linked to cholesterol are so important to the development of Alzheimer’s disease,” said Heather A. Ferris, MD, PhD, of UVA’s Division of Endocrinology and Metabolism:

Our data point to the importance of focusing on the production of cholesterol in astrocytes and the transport to neurons as a way to reduce amyloid beta and prevent plaques from ever being formed.

It all sounds like an ironclad mechanism on paper. APOE4 expression results in defective protein production. This causes dysfunctional cholesterol manufacture which, in turn, drives amyloid aggregation leading to AD. So why doesn’t carrying the APOE4 gene result in a 100% risk of developing AD? It’s not even close to that figure. 

What additional risk factors are in play here?

The development of AD is greatly influenced by external factors. The most obvious is age; Alzheimer’s is mostly a disease of the elderly. It takes decades to manifest, and the older you are, the greater the risk. Age is, however, not the only ‘non-genetic’ contributor to the development of AD and the key to the other factors in play lies partly in the major role APOE plays in the human body.

The role of genetics can be summarized by a phrase that has become almost dogmatic among many medical practitioners: “Your genes load the gun, but lifestyle often pulls the trigger”. The CDC highlights eight key contributing factors to AD risk that are not genetic: lack of exercise, smoking, excessive alcohol consumption, obesity, diabetes, hypertension, depression, and hearing loss. 

Credit: CDC

The CDC went on to conduct a study of adults over 45 to investigate how much these factors contribute to the development of AD. There was one key take-home message: adults with four or more of the aforementioned risk factors were significantly more likely to experience cognitive decline (1 in 4 versus 1 in 25).
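To put those two rates side by side, here is a quick illustrative calculation (mine, not the CDC’s); the exact definition of the comparison group is not spelled out above, so the comment labels are assumptions.

# Illustrative comparison of the two rates quoted above (not CDC code).
higher_risk = 1 / 4     # cognitive decline among adults with 4+ risk factors
comparison = 1 / 25     # the comparison rate quoted in the article
relative_risk = higher_risk / comparison
print(f"relative risk ~{relative_risk:.2f}x")   # roughly 6x higher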

The CDC and many other medical authorities have since created webpages promoting lifestyle choices to reduce the risk of developing AD, and they all echo the same theme: eat healthily, exercise, avoid alcohol in excess and avoid smoking. Also avoid saturated fat, the kind that sends your cholesterol through the roof; this is particularly important if you carry the APOE4 variant. The jury is still out on how much consuming saturated fat and cholesterol increases your risk. In APOE4 carriers, however, limiting saturated fats and cholesterol can help to moderate the amount of cholesterol in the brain, potentially reducing the risk of kick-starting the pathology.

There is still no universally agreed understanding of the relative roles of diet and lifestyle in developing AD. If you look online, you will find a wealth of claims running from one extreme to the other, and the contributing factors and their relative weight remain murky. So, what can each of us do proactively?

Dr. Darren Gitelman, senior medical director of the Advocate Memory Center at Advocate Lutheran General Hospital in Park Ridge, Illinois, explains:

There’s no sure-fire way to prevent Alzheimer’s disease, so knowing whether you have ApoE4 won’t lead to better treatment at this time. We encourage everyone, regardless of their E4 status, to lead a healthy lifestyle to reduce their risk of dementia.

Rare case of genetic AD

While 99% of Alzheimer’s cases appear to be caused by a combination of genes and other external factors, there is a rare hereditary form of AD, called familial Alzheimer’s, that breaks all the genetic rules and gives patients no room for prevention. It is mediated by mutations in one of three genes: presenilin 1 (PSEN1), the most common; presenilin 2 (PSEN2); or amyloid precursor protein (APP). With a mutation in one of these genes, sadly, your fate is locked in, much as it is for carriers of the Huntington’s gene: AD is not only guaranteed, it is also more likely to hit earlier in life.

Scientists will continue to write the genetic story of Alzheimer’s disease as we search for a cure. Genes may not be the only contributing factor to its development, but they are the driving factors and will be the focus of research for years to come. 

Sam Moxon has a PhD in tissue engineering and is currently a research fellow in the field of regenerative medicine. He is a freelance writer with an interest in the development of new technologies to enhance medical therapies. Follow him on Twitter @DrSamMoxon

Viewpoint: Precautionary near zero-risk standard is an impossible policy stifling European innovation and productivity. Here’s a safe alternative


Those suffering in the dark and in the cold are not gluing themselves to artworks.

Hungry people don’t care if some guru sanctified the seeds before planting.

Families unable to pay their bills would welcome a new factory in their town.

Politicians who preach their virtue vision to an activist minority won’t get much support.

After 30 years of squandering the peace dividend, deindustrialising economies and ignoring facts and evidence in their ideology-driven policies, Western leaders are starting to realise the need for hard decisions, compromising perfect world dreams of tomorrow for a better reality today. After two years of global pandemic, an energy crisis in Europe, global food insecurity and inflation, we can no longer continue to promise a docile public the myth of zero risk, free money and a world of rainbows and butterflies. Leaders have to return to risk management where not everyone gets what they want but they could get what they need.

President Nixon and Henry Kissinger, two practitioners of ‘realpolitik’.

This is the third part of a series that looks at the Industry Complex. Environmental activism has become an inflexible ideology that has integrated a strict anti-industry, anti-capitalism, degrowth philosophy into its dogma to the point where industry has been delegitimised (tobacconised) from policy dialogue by intolerant rhetoricians. Cleverly imposed on policymakers as virtue politics, environmental-health activists have relentlessly pushed consumers and economies (particularly in Europe) to the edge of a cliff. Giving extremists everything they want is not good long-term strategy in a liberal democracy but policymakers seem handcuffed now to a series of tools and concepts that make resistance quite challenging. Such fundamentalist dogma could only be disarmed by the return to a pragmatic realism in Western politics.

Idealist virtue politics has to be seen for what it is: the wrong policy approach at the wrong time.

It is time for a return to Realpolitik

It is time for regulators to grow a pair and stand up to these outspoken voices of moralising revulsion and intolerance from activist quarters. It is time for them to start doing their job: making the hard decisions and managing risks rather than promising a world of zero risk to a docile public that has come to expect simple solutions to complex problems. It is time for a return to Realpolitik – of making the best choices from a finite list of options and circumstances rather than continuing with false promises that someone else will have to pay for.

Realpolitik has rarely been used in policy discussions since the end of the Cold War. Indeed the West has enjoyed a peace dividend since the fall of the Berlin Wall which has allowed Western leaders to uncompromisingly pursue their ideals, pay for any consequences of bad decisions and pretend the milk and honey would flow indefinitely. This Western hegemony had few far-away threats and the electorates came to expect to be given everything they wanted. And we could afford it (… until the pandemic).

It is not a new concept. The term “Realpolitik” was in use several decades before Bismarck (commonly referred to as the father of Realpolitik). It was developed by Ludwig von Rochau who tried to introduce Enlightened, liberal ideas, post 1848, into a political world that was embedded in less rational cultural, nationalistic and religious power dynamics (much like the green dogma pushing many Western political spheres today). Realpolitik is often best understood by what it is not: it refers to decisions not made solely on issues of ideology and morality. In other words, Realpolitik refers to pragmatic decisions based on best possible outcomes and compromises (something done when leaders have to face unpleasant realities). Ideologues can easily ignore scientific facts when imposing their power but Realpolitikers will follow the best available science while appealing to reason.

  • In agricultural policy, the EU Farm2Fork strategy is based on ideology and morality (that organic is morally better and not industrial-based). But given the recent food inflation and the threats to global food security, a more practical, rational strategy that would focus on sustainable intensification might be a better choice.
  • In energy debates, European leaders stupidly gave their loud, activist minority what they wanted (closing nuclear reactors and abandoning fossil fuels) with no pragmatic alternatives or rational transition plan. A Realpolitiker would not have shut down the nuclear power stations until the energy transition was safely achieved.

There are moves to undo some of the more stupid purely ideological strategies in the Green Deal, but Europe might have to wait for blind, uncompromising ideologues like Frans Timmermans to quietly leave the EU stage. For European consumers, this can’t come soon enough.


Precaution versus Realpolitik

The precautionary principle appeals to an ideologue’s view of the world, unwilling to settle for anything less than perfect. Under the European Environment Agency’s version of precaution (the reversal of the burden of proof), if a technology, substance or product cannot be proven to be safe, precaution must be taken. Safe and certain are absolutes, and advocates have been quick to call for precaution over minuscule exposures to suspected hazards (see the whole endocrine disruption campaign). Actual evidence or reason is not necessary to justify moral-based, ideological decisions.

But should we let perfection be the enemy of the good? Precaution has led to the removal of many highly efficient technologies and substances because regulators were demanding 100% safe. In cases like banning certain neonicotinoid insecticide seed treatments to prevent some concocted bee apocalypse, the alternatives have proven to be worse than the banned substance and many farmers simply abandoned vulnerable crops like oilseed rape (making the situation worse for the bees).

Credit: LeadershipQuote.org

I have argued that we should aim for safer rather than safe. Safer is something risk managers in industry measure and continually strive for while safe is an emotional ideal that cannot be measured or, for that matter, reached. We will never have safe, but we can always strive for safer. This is where a more pragmatic, Realpolitik approach would be more successful than any arbitrary risk aversion. We saw the collapse of the zero-risk precautionary approach after two years of COVID-19 lockdowns that destroyed communities and economies while increasing mental health and domestic abuse issues. Today we have accepted that we have to live with a certain number of coronavirus infections and decisions have become more pragmatic, risk-oriented and rational. We won’t be 100% safe, but we can continually strive to be safer.


Realpolitik accepts that a perfect world is a pipe dream. Freed from the shackles of seeking the totally safe, its practitioners get to work on risk management, reducing exposures to as low as reasonably possible (achievable) and making the world (products, substances, systems…) better – safer. They seek a world with lower risks for more people, not zero risk for all people. We need to turn away from the fundamentalist activist mindset and adopt a more industrial, scientific approach (as seen in product stewardship): one of continuous improvement, constant iteration and technological refinement.

We cannot afford to continue with this luxury ideology of totally safe codified in the hazard-based, precautionary approach to policy. We need chemicals that can disinfect, pesticides that can protect plants, plastics that can prevent foodborne risks and energy sources that can keep the lights on. Blanket precautionary bans based on narrow ideologies (like the irrational demand for natural substances only) and arbitrary restrictions (eg, that exclude engaging with corporations) create needless shortcomings and hardships we should not be imposing on the most vulnerable.


Engage with stakeholders who can make a difference

Realpolitikers would not be dissuaded from engaging with industry actors (especially as they have access to key technological solutions). They would not tolerate the righteous tobacconisation approach of the ideologues – of excluding the stakeholders with the greatest options and capacities. Just imagine if, when the first COVID vaccines were shown to be relatively effective, governments then declared that corporations must not be involved in the development and implementation of the vaccines. That would be pure stupidity. But this is exactly the type of nonsense we hear from agroecologists who demand that industrial-based research technologies be excluded from farming practices in developing countries. I suppose we need more famines before Realpolitik returns to agricultural discourse.

We are in a world in crisis, craving innovative solutions to serious problems. Innovation entails risk-taking, iteration and continuous improvements. The industrial, capitalist approach rewards innovative thinking (while the precautionary mindset abhors any uncertainties that such innovations could lead to). The present leadership in Brussels, fixated on their Green Deal “save the world” myopia, are the wrong people to lead us out of this crisis (of their own making).

Last year we saw what I can only hope was the last dying gasp of the precautionary mindset dominating our regulatory thinking. In the early days of the COVID-19 vaccine rollout, there were several cases of blood-clotting that might have been associated with the vaccines. Several officials, including European Commissioner for the Economy Paolo Gentiloni, called for a precautionary halt, particularly of the AstraZeneca jab, until citizens could be certain of the vaccine’s safety. Given the benefits of the vaccine and the lockdown fatigue, a more pragmatic policy response was adopted (drowning out the precautionista protests, though a bit late for AstraZeneca). Could we have enjoyed the benefits of mRNA vaccine technologies if two years of pandemic horror had not woken us from our precautionary slumber? The same technology is now being explored for cancer treatments – should that be banned too?

Credit: Soham Sen via ThePrint

A return to Realpolitik is a return to risk management, turning away from several decades of zero-risk precautionary uncertainty management. Such an approach prevails when the benefits are in such high demand that the need for innovative solutions suffocates any irrational fear of uncertainties. An unfortunate few may suffer consequences from a vaccine, but a practical risk-management approach would put those risks into a rational perspective (and continually seek to lower them).

We were never actually able to afford the perfect-world demands of these precautionary ideologues – affluent, righteous zealots who assumed everyone deserved the same benefits they enjoyed. But now the bills are piling up and demand hard-headed, practical solutions. It is, indeed, time to move away from the era of virtue-driven politics and return to Realpolitik.

How to rectify this mess

As readers of this site know, I draw a clear connection between the rise of the use of the precautionary principle with an activist hegemony and the shift away from innovative-friendly policies. This goes hand-in-hand with the anti-industry narrative that is pushing Western societies to a post-capitalist, degrowth regulatory environment. But the present energy, food and public health crises have pierced a gaping hole in the activist campaign balloon, deflating their idealism. So how do we now put Realpolitik back to the centre of policy thinking?

I have argued that industry should get up and walk out of the EU policy process unless the European Commission introduces some fair procedures applied equally to all stakeholder groups. I have also called for a White Paper that can articulate a clear guidance for the risk management process (at the moment it seems like the precautionary principle is seen, erroneously, as the only risk management tool). We need a post-COVID analysis of the failures of precaution and the role of scientific advice. There also should be some guidelines and weighting on the level of influence some NGOs should be allowed to have on the policy process. Too often today groups that represent less than 10% of the European electorate are allowed to drive most European policy … with no regard for the interests of industry, consumers and economic reality.

Industry also has to stand up for itself. The hate campaigns against companies, against capitalism and against innovative solutions need to be countered. For too long they have stayed silent while ethically-challenged zealots spread their lies and agenda against them. What industry considered as diplomatic engagement has now become timid defeatism. Their science needs to be considered on the basis of the facts and data they present, not on the source of funding. When attacking industry becomes an easy excuse for policymakers, then these showmen need to be called out for that; they need to know there are consequences for applying processes and decisions that go against the interests of consumers and the economy, that go against reason and only affirm their hateful, destructive ideologies. Industry actors have to be less polite and diplomatic. Being respected doesn’t win points in the policy arena.

Europe is becoming less competitive, less conducive to research, less productive and less successful on many industrial fronts. Companies are now facing unnecessary costs and restrictions, many are leaving and producing or researching in other parts of the world, consumers are paying more and getting less. Farmers are working harder for lower yields (and increased food imports). Europe is losing out because of bad policies that have been rooted in an anti-industry, anti-technology, anti-growth ideology.

With the present economic, social and environmental threats, this is not the time to tolerate hateful, closed-minded environmental dogmatists. We need our leaders to return to a Realpolitik – not seen since the 1980s – of pragmatic solutions, creative thinking and open-minded compromises.

David Zaruk has been an EU risk and science communications specialist since 2000, active in EU policy events from REACH and SCALE to the Pesticides Directive, from Science in Society questions to the use of the Precautionary Principle. Follow him on Twitter @zaruk

A version of this article was originally posted at Risk Monger’s website and has been reposted here with permission.

‘Time for a reality check’: How close is artificial intelligence (AI) to thinking like humans?

Last month, Deepmind, a subsidiary of technology giant Alphabet, set Silicon Valley abuzz when it announced Gato, perhaps the most versatile artificial intelligence model in existence. Billed as a “generalist agent,” Gato can perform over 600 different tasks. It can drive a robot, caption images, identify objects in pictures, and more. It is probably the most advanced AI system on the planet that isn’t dedicated to a singular function. And, to some computing experts, it is evidence that the industry is on the verge of reaching a long-awaited, much-hyped milestone: Artificial General Intelligence.

Unlike ordinary AI, Artificial General Intelligence wouldn’t require giant troves of data to learn a task. Whereas ordinary artificial intelligence has to be pre-trained or programmed to solve a specific set of problems, a general intelligence can learn through intuition and experience.

An AGI would in theory be capable of learning anything that a human can, if given the same access to information. Basically, if you put an AGI on a chip and then put that chip into a robot, the robot could learn to play tennis the same way you or I do: by swinging a racket around and getting a feel for the game. That doesn’t necessarily mean the robot would be sentient or capable of cognition. It wouldn’t have thoughts or emotions, it’d just be really good at learning to do new tasks without human aid.

This would be huge for humanity. Think about everything you could accomplish if you had a machine with the intellectual capacity of a human and the loyalty of a trusted canine companion — a machine that could be physically adapted to suit any purpose. That’s the promise of AGI. It’s C-3PO without the emotions, Lt. Commander Data without the curiosity, and Rosey the Robot without the personality. In the hands of the right developers, it could epitomize the idea of human-centered AI.

But how close, really, is the dream of AGI? And does Gato actually move us closer to it?

For a certain group of scientists and developers (I’ll call this group the “Scaling-Uber-Alles” crowd, adopting a term coined by world-renowned AI expert Gary Marcus) Gato and similar systems based on transformer models of deep learning have already given us the blueprint for building AGI. Essentially, these transformers use humongous databases and billions or trillions of adjustable parameters to predict what will happen next in a sequence.
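For readers unfamiliar with what “predict what will happen next in a sequence” means in practice, here is a deliberately tiny sketch in Python. It is my own toy illustration, not how Gato or any real transformer is built: real models learn billions of parameters and attend over long contexts, whereas this version merely counts which word most often follows another. The training objective, predicting the next item, is the common thread.

from collections import Counter, defaultdict

# Toy next-item predictor: count which word most often follows each word.
# A drastic simplification of transformer-style sequence modeling, used here
# only to illustrate the objective of predicting what comes next.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # prints 'cat', the most frequent continuation here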


The Scaling-Uber-Alles crowd, which includes notable names such as OpenAI’s Ilya Sutskever and the University of Texas at Austin’s Alex Dimakis, believes that transformers will inevitably lead to AGI; all that remains is to make them bigger and faster. As Nando de Freitas, a member of the team that created Gato, recently tweeted: “It’s all about scale now! The Game is Over! It’s about making these models bigger, safer, compute efficient, faster at sampling, smarter memory…” De Freitas and company understand that they’ll have to create new algorithms and architectures to support this growth, but they also seem to believe that an AGI will emerge on its own if we keep making models like Gato bigger.

Call me old-fashioned, but when a developer tells me their plan is to wait for an AGI to magically emerge from the miasma of big data like a mudfish from primordial soup, I tend to think they’re skipping a few steps. Apparently, I’m not alone. A host of pundits and scientists, including Marcus, have argued that something fundamental is missing from the grandiose plans to build Gato-like AI into full-fledged generally intelligent machines.

I recently explained my thinking in a trilogy of essays for The Next Web’s Neural vertical, where I’m an editor. In short, a key premise of AGI is that it should be able to obtain its own data. But deep learning models, such as transformer AIs, are little more than machines designed to make inferences relative to the databases that have already been supplied to them. They’re librarians and, as such, they are only as good as their training libraries.

A general intelligence could theoretically figure things out even if it had a tiny database. It would intuit the methodology to accomplish its task based on nothing more than its ability to choose which external data was and wasn’t important, like a human deciding where to place their attention.


Gato is cool and there’s nothing quite like it. But, essentially, it is a clever package that arguably presents the illusion of a general AI through the expert use of big data. Its giant database, for example, probably contains datasets built on the entire contents of websites such as Reddit and Wikipedia. It’s amazing that humans have managed to do so much with simple algorithms just by forcing them to parse more data.

In fact, Gato is such an impressive way to fake general intelligence, it makes me wonder if we might be barking up the wrong tree. Many of the tasks Gato is capable of today were once believed to be something only an AGI could do. It feels like the more we accomplish with regular AI, the harder the challenge of building a general agent appears to be.

For those reasons, I’m skeptical that deep learning alone is the path to AGI. I believe we’ll need more than bigger databases and additional parameters to tweak. We’ll need an entirely new conceptual approach to machine learning.

I do think that humanity will eventually succeed in the quest to build AGI. My best guess is that we will knock on AGI’s door sometime around the early-to-mid 2100s, and that, when we do, we’ll find that it looks quite different from what the scientists at DeepMind are envisioning.

But the beautiful thing about science is that you have to show your work, and, right now, DeepMind is doing just that. It’s got every opportunity to prove me and the other naysayers wrong.

I truly, deeply hope it succeeds.

Tristan Greene is a futurist who believes in the power of human-centered technology. He’s currently the editor of The Next Web’s futurism vertical, Neural. Follow Tristan on Twitter @mrgreene1977

A version of this article appeared originally at Undark and is posted here with permission. Check out Undark on Twitter @undarkmag

Podcast and video: GE chestnut tree coming soon? Tylenol doesn’t cause autism; Damar Hamlin-COVID vaccine controversy

After years of delay, a genetically engineered chestnut tree may finally receive USDA approval. Is it headed for a new home in America’s Appalachian forests? There’s no evidence that Tylenol causes autism, but that hasn’t stopped lawyers from filing lawsuits claiming it does. Did an mRNA COVID shot cause Damar Hamlin’s cardiac arrest? As of now, there is no evidence to support that speculation.

Join geneticist Kevin Folta and GLP contributor Cameron English on episode 200 of Science Facts and Fallacies as they break down these latest news stories:

After many years of research and regulatory review, a blight-resistant, genetically engineered chestnut tree known as Darling 58 is poised to receive final approval from the USDA. The project was designed to reestablish the American chestnut (Castanea dentata), which dominated Appalachian forests for many years until a deadly fungal infection all but wiped out the tree in the last century. It was a key species that supported significant biological diversity; reviving the chestnut could yield the same environmental benefits today. What hurdles remain in the way of success, and when will scientists actually begin planting these biotech trees?

We’ve all taken Tylenol for minor aches and pains without a second thought. It’s one of the only pain relievers women can safely take during pregnancy, and that, sadly, has made acetaminophen, its active ingredient, a target for opportunistic trial lawyers. All it took was a single study suggesting that children exposed to Tylenol during pregnancy were 19% more likely to have autism spectrum disorders than non-exposed children, and the litigants were off to the races. The story underscores how routinely our legal system is abused, forced to serve as a venue in which bad science gets an unjustified hearing.

During a January 2 contest against the Cincinnati Bengals, Buffalo Bills safety Damar Hamlin collapsed after tackling a wide receiver. Experts suspect Hamlin went into cardiac arrest as a result of commotio cordis, a rare but potentially fatal condition caused by a blunt impact to the chest. First responders probably saved Hamlin’s life by administering CPR and automated external defibrillation (AED). He was discharged on January 11, following a nine-day hospital stay. Some political commentators capitalized on the situation to attack the mRNA COVID-19 vaccines, though Hamlin’s vaccination status is unknown and his medical history has not been released to the public.

Kevin M. Folta is a professor, keynote speaker and podcast host. Follow Professor Folta on Twitter @kevinfolta

Cameron J. English is the director of bio-sciences at the American Council on Science and Health. Visit his website and follow ACSH on Twitter @ACSHorg

Evolution of humor: How laughter may have helped early humans survive and thrive

Over the years, several theories have sought to explain what makes something funny enough to make us laugh. These include transgression (something forbidden), puncturing a sense of arrogance or superiority (mockery), and incongruity – the presence of two incompatible meanings in the same situation.

I decided to review all the available literature on laughter and humour published in English over the last ten years to find out if any other conclusions could be drawn. After looking through more than one hundred papers, my study produced one new possible explanation: laughter is a tool nature may have provided us with to help us survive.

I looked at research papers on theories of humour that provided significant information on three areas: the physical features of laughter, the brain centres related to producing laughter, and the health benefits of laughter. This amounted to more than 150 papers that provided evidence for important features of the conditions that make humans laugh.

By organising all the theories into specific areas, I was able to condense the process of laughter into three main steps: bewilderment, resolution and a potential all-clear signal, as I will explain.

This raises the possibility that laughter may have been preserved by natural selection throughout the past millennia to help humans survive. It could also explain why we are drawn to people who make us laugh.

The evolution of laughter

The incongruity theory is good at explaining humour-driven laughter, but it is not enough. In this case, laughing is not about an all-pervasive sense of things being out of step or incompatible. It’s about finding ourselves in a specific situation that subverts our expectations of normality.

For example, if we see a tiger strolling along a city street, it may appear incongruous, but it is not comic – on the contrary, it would be terrifying. But if the tiger rolls itself along like a ball then it becomes comical.

Animated anti-hero Homer Simpson makes us laugh when he falls from the roof of his house and bounces like a ball, or when he attempts to “strangle” his son Bart, eyes boggling and tongue flapping as if he were made of rubber. These are examples of the human experience shifting into an exaggerated, cartoon version of the world where anything – especially the ridiculous – can happen.

But to be funny, the event must also be perceived as harmless. We laugh because we acknowledge that the tiger and Homer neither truly hurt others nor get hurt themselves, because essentially their worlds are not real.


So we can strip back laughter to a three-step process. First, it needs a situation that seems odd and induces a sense of incongruity (bewilderment or panic). Second, the worry or stress the incongruous situation has provoked must be worked out and overcome (resolution). Third, the actual release of laughter acts as an all-clear siren to alert bystanders (relief) that they are safe.

Laughter could well be a signal people have used for millennia to show others that a fight or flight response is not required and that the perceived threat has passed. That’s why laughing is often contagious: it unites us, makes us more sociable, signals the end of fear or worry. Laughter is life affirming.

We can translate this directly to the 1936 film Modern Times, where Charlie Chaplin’s comic tramp character obsessively fixes bolts in a factory like a robot instead of a man. It makes us laugh because we unconsciously want to show others that the disturbing spectacle of a man reduced to a robot is a fiction. He is a human being, not a machine. There is no cause for alarm.

How humour can be effective

Similarly, the joke at the beginning of this article starts with a scene from normal life, then turns into something a little strange and baffling (the woman behaving incongruously), but which we ultimately realise is not serious and actually very comical (the double meaning of the doctor’s response induces relief), triggering laughter.

As I showed in a previous study about the human behaviour of weeping, laughter has a strong importance for the physiology of our body. Like weeping – and chewing, breathing or walking – laughter is a rhythmic behaviour which is a releasing mechanism for the body.

The brain centres that regulate laughter are those which control emotions, fears and anxiety. The release of laughter breaks the stress or tension of a situation and floods the body with relief.

Humour is often used in a hospital setting to help patients in their healing, as clown therapy studies have shown. Humour can also improve blood pressure and immune defences, and help overcome anxiety and depression.

Research examined in my review has also shown that humour is important in teaching, and is used to emphasise concepts and thoughts. Humour relating to course material sustains attention and produces a more relaxed and productive learning environment. In a teaching setting, humour also reduces anxiety, enhances participation and increases motivation.

Love and laughter

Reviewing this data on laughter also permits a hypothesis about why people fall in love with someone because “they make me laugh”. It is not just a matter of being funny. It could be something more complex. If someone else’s laughter provokes ours, then that person is signalling that we can relax, we are safe – and this creates trust.

Laughter creates bonds and intimacy. Credit: Nektarstock/Alamy

If our laughter is triggered by their jokes, it has the effect of making us overcome fears caused by a strange or unfamiliar situation. And if someone’s ability to be funny inspires us to override our fears, we are more drawn to them. That could explain why we adore those who make us laugh.

In contemporary times, of course, we don’t think twice about laughing. We just enjoy it as an uplifting experience and for the sense of well-being it brings. From an evolutionary point of view, this very human behaviour has perhaps fulfilled an important function in terms of danger awareness and self-preservation. Even now, if we have a brush with danger, afterwards we often react with laughter due to a feeling of sheer relief.

Carlo Valerio Bellieni is a Professor of Pediatrics at The Università di Siena. 

A version of this article appeared originally at The Conversation and is posted here with permission. Check out The Conversation on Twitter @ConversationUK 

‘Mania of zero risk’: How environmentalists inflame concerns about farm chemicals, increasing anti-GM food rejectionism and the degradation of waterways

Food Watch warns, wrongly, that trace amounts of mineral oil can get into our food and seriously endanger consumers, calling for “zero tolerance.”

Mars’ Skittles are under legal assault based on claims, rejected by government reviews in Britain and Canada, that the candy poses “significant health risk to unsuspecting consumers.”

The Environmental Working Group falsely claims that eating Honey Nut Cheerios could kill you because of the presence of trace chemicals in the parts per trillion — claims mimicked by dozens of anti-chemical NGOs.

Credit: Mamavation

Chemophobic mania, which inflames consumers worldwide, has become endemic. Whatever the well-meaning goals of the environmental groups that spread exaggerated and out-of-context data, the result is more harm than good.

In recent decades, many industrial societies have become increasingly obsessed with what is called “uncertain risk” — the notion that zero risk is possible and should be the goal of regulators and policy makers. 


Is that a reasonable expectation grounded in science and risk analysis? 

The key is how each of us calibrates the amount of risk we are willing to accept. Many people take the view that if others want to incur risk in their daily routine, that is their choice, but that they themselves would prefer to avoid risk.

As explained by The Decision, researchers Kip Viscusi, Wesley Magat, and Joel Hubert found that “people were willing to pay up to three times as much to reduce the risk of side effects from 5/15,000 cases to 0/15,000, as they were for a risk reduction from 15/15,000 to 10/15,000, despite the reductions in risk essentially being statistically negligible.” 

What changes between these two options is that the individual perceives zero risk (0/15,000) as far superior to 10/15,000. While both options deliver the same reduction of 5/15,000, people will sacrifice significant financial resources for the option that brings their risk to zero.
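A quick illustrative calculation (mine, not the researchers’) makes the point concrete: the two options in the study remove exactly the same absolute amount of risk.

# Illustrative arithmetic for the example quoted above (not from the study itself).
option_a = 5 / 15_000 - 0 / 15_000      # reducing risk from 5/15,000 down to zero
option_b = 15 / 15_000 - 10 / 15_000    # reducing risk from 15/15,000 to 10/15,000
print(option_a == option_b)             # True: both remove 5 cases per 15,000
print(f"absolute risk reduction ~{option_a:.4%} in each case")
# Yet people reportedly paid up to three times more for option A, because it ends at zero.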

This ‘zero-risk mentality’ has similarities with the NIMBY concept, in which people support innovation and growth as long as it is ‘not in my backyard’. It applies to a wide range of infrastructure, from recycling facilities to new social housing to wind farms, which many communities, particularly in California, have rejected even while overwhelmingly supporting ‘green energy’.

Credit: Renewable Energy Magazine

While industry and science have been very successful at reducing the rates and incidence of risk, risks are never completely removed from our daily lives. Routine activities, such as driving, carry higher risk probabilities: drive every day and the probability of being in a traffic accident rises to a medium-to-high risk. By comparison, the probability of being struck by a meteor is statistically close to zero, but it can never be zero, as there is always a chance it could occur. The risk of being hit by a meteor is about 1 in 840,000,000; with the global population just past 8 billion, that works out to roughly 9.5 people at risk of being struck.
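The 9.5 figure works out as a simple expected-value calculation using the two numbers quoted above (an illustration only; the underlying odds are the article’s, not mine).

# Illustrative expected-value arithmetic for the figures quoted above.
odds_of_strike = 1 / 840_000_000    # quoted risk of being hit by a meteor
population = 8_000_000_000          # world population, just past 8 billion
expected_people = population * odds_of_strike
print(round(expected_people, 1))    # ~9.5 people, as stated above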

Success in reducing risks helped increase life expectancy by more than 30 years between 1900 and 2013. Part of that gain is certainly due to innovations in medicine and health care, but improvements in food and water safety also made strong contributions.

Conservationists demand zero risk while promoting policies that increase it

Water quality and purity are important and concerning topics for everyone. No one wants to learn about potential contaminants in water samples of their local community water sources. However, this happens from time to time. To ensure that chemicals don’t end up in watersheds, strict regulations have been enacted. In the USA, the first water quality regulations to address water contamination came into effect in 1948 in the Water Pollution Control Act. Significant amendments were made in 1972 following the creation of the Environmental Protection Agency in 1970, resulting in the implementation of the Clean Water Act.  

In most instances, contaminants are detected at levels well below those that cause harm to humans. Occasionally, the levels are high enough that the water supply is shut off until the problem is resolved. Public concern focuses on the presence of a risk rather than its magnitude. People expect there to be zero contaminants in their water supply, and learning that a contaminant is present at a level of a few parts per billion, far below unsafe levels, provides little to no emotional comfort.

Evaluating risks from pesticides in farming

Water quality testing of some watersheds has confirmed the presence of agricultural chemicals. Chemicals can run off a field when heavy rain falls shortly after application, washing off plants and into the soil. In addition, chemical residues remain in the soil after each application, as a portion of what is applied enters the soil directly. Heavy rains also cause soil erosion, carrying soil and any chemical residues into a watershed. This is unfortunate, and the agriculture industry has been working on solutions that reduce the run-off of farm chemicals.

[Image: Scientists with the U.S. Geological Survey sample water in Goodwater Creek, Mo., for pesticides and other chemicals that may have run off from the surrounding land. Credit: Abbie Fentress Swanson/Harvest Public Media]

One crop that previously had problems with chemical run-off is potatoes. Potato production involves frequent insecticide applications, and heavy rains following those applications led to reports of ‘fish kills’ in nearby waterways. Awareness and innovation have since improved the situation, and fewer instances of dead fish are now being reported.

Recent research from Wisconsin reports the promising finding of reduced chemical detection in watersheds. By surveying farmers, researchers discovered that regulations restricting the use of one chemical led farmers to adopt genetically modified (GM), herbicide-tolerant (HT) corn. Ordinarily, restricting one chemical reduces weed control options and can increase herbicide-resistant weeds, because farmers end up using the same remaining chemical year after year. If weed control becomes ineffective, the result is a return to tillage, and tilling a field raises rates of soil erosion and the potential for chemical run-off into watersheds.

The chemical atrazine was approved in 1958 and has been the main herbicide used in the production of non-herbicide-tolerant corn in the USA. It is a target of some environmental groups, which claim it causes cancer. An assessment of atrazine use for corn production in Wisconsin examined what impact atrazine use restrictions had on the range of weed management practices.

[Image: Atrazine levels in soils. Credit: University of Florida]

A survey of farmers in areas where atrazine restrictions had been implemented and areas with no restrictions found that restricting the use of atrazine increased the adoption of HT corn varieties tolerant to glyphosate. This, in turn, contributed to an increase in conservation tillage practices. The combination of atrazine restrictions and increased HT corn production contributed to a reduction in the variety of herbicides available to farmers for weed control. The researchers concluded that the reduced diversity of weed control options (banning atrazine, for example) leads to an increase in herbicide resistance among weeds, as farmers switch from relying on atrazine to relying on glyphosate.

There are other likely effects from restrictions or bans. The authors caution that regulatory efforts to keep atrazine out of groundwater could have the knock-on effect of producing more herbicide-resistant weeds. With fewer chemical options available to control those weeds, farmers could turn to tillage, and more tillage increases the potential for soil erosion (the route by which chemicals move from fields to watersheds), degrading water quality. In practice, however, the study found that atrazine restrictions led to greater adoption of genetically modified, herbicide-tolerant corn, which reduced tillage, limited soil erosion, and resulted in lower levels of chemical detection in local watersheds.

What can we learn from this study?

The study highlights the tradeoffs between food production and environmental impacts. Many consumers and environmental organizations embrace a contradiction: they oppose GM crops, yet they also favor reducing agricultural chemicals and their presence in watersheds (ironically rejecting the best way to achieve that, GM crops). The research quantifies the connection between the two positions, and its conclusion challenges the common wisdom: the adoption of GM corn reduced soil erosion and chemical residues in watersheds because fields needed less tilling.

The dilemma arises from the competing desires for zero risk. Consumers and environmental organizations may believe there is a lot of risk from GM crops, even though numerous studies indicate they pose no unique health or safety threat. They claim to demand ‘zero risk’, which of course is impossible whether organic or conventional chemicals are used, and so they support banning GM crop production. From a sustainability perspective, that would be perilous: it would actually increase the use of tillage, leading to greater soil erosion and higher levels of chemicals in watersheds.

GM opponents face a dilemma: you cannot expect to dramatically limit the presence of chemicals in watersheds if you ban the only scientifically accepted way to achieve that. GM crops and chemicals in watersheds cannot both be zero at the same time. If we prohibit GM crops, chemical levels in watersheds will be higher; if we allow GM crops, chemical detection in watersheds should diminish.

Zero-risk GM opponents have backed themselves into a corner. They continue to insist on no GM crops and almost no chemical presence in watersheds, which is impossible. The ‘zero risk’ concept has been a central tenet of more extreme environmental groups, such as the Pesticide Action Network, the Environmental Working Group and the Center for Food Safety. Although these groups are demanding scientifically impossible standards, their views have become mainstream: much of society believes that both objectives can be accomplished simultaneously.

Risks need to be assessed, and choices made, on the basis of careful cost-benefit analysis. That would allow for more informed decisions and improved risk tradeoffs. There is good news about the total use of pesticides. As research in the journal Nature Communications has documented, pesticide use weighted by toxicity has been going down on most crops for decades, even as environmental groups distort the issue by emphasizing that use by raw volume, a far less meaningful measure, is increasing. That framing is deceptive.

[Image source: Nature Communications]

Organizations and governments have an obligation to correct the misinformation that zero risk is achievable. If societies continue to believe that zero risk is feasible, the result will be the loss of safe, beneficial technologies and higher economic and environmental costs. The pursuit of zero risk leads to worse outcomes than accepting minimal, safe levels of risk.

Stuart Smyth is an associate professor at the University of Saskatchewan in the College of Agriculture and Bioresources. He also holds the Agri-Food Innovation & Sustainability Enhancement Chair at the university and writes about regulations, gene modification and supply chains. You can follow Stuart on Twitter @stuartsmyth66

Viewpoint: Is the FDA following ‘sound science’ in green lighting new Alzheimer’s drug Aduhelm?

The prominent economist Milton Friedman said that in order to understand the motivation of a person or organization, you must look for the self-interest. So where do regulators’ interests lie? Not always in the public interest, alas, but in increased responsibilities, budgets, bureaucratic empires, and avoiding bad decisions.

But what constitutes a bad decision? As a recent congressional investigation reveals, that depends.


House Dems fault FDA’s ‘atypical’ review process for Biogen’s Alzheimer’s drug

Regulators can err by permitting something bad to happen (approving a harmful product, a Type I error) or by preventing something good from becoming available (delaying or failing to approve a beneficial product, a Type II error). The two types of errors are opposite sides of the same coin: too-assiduous reduction of the incidence of Type I errors typically results in an increase in the incidence of Type II errors.

Both outcomes are bad for the public, but the consequences for the regulator are very different. Type I errors are highly visible and have immediate consequences: The developers of the product and the regulators who allowed it to be marketed are excoriated and punished in such modern-day pillories as congressional hearings, news shows, and newspaper editorials.

A Type II error, by contrast, is almost always a non-event. Not surprisingly, then, regulators often make decisions defensively — in other words, to avoid Type I errors at any cost.
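
A minimal simulation sketch can illustrate the tradeoff. The effect sizes, noise level, and evidence thresholds below are invented for illustration; this is not a model of any actual regulatory review.

```python
# Illustrative only: stricter approval thresholds trade Type I errors for Type II errors.
import random

random.seed(0)

def review(threshold, n=10_000, noise=1.0):
    """Approve a candidate when its noisy evidence score exceeds the threshold.

    Returns Type I and Type II error counts as fractions of all candidates reviewed."""
    type1 = type2 = 0
    for _ in range(n):
        effective = random.random() < 0.5           # assume half the candidates truly work
        true_effect = 1.0 if effective else 0.0
        evidence = true_effect + random.gauss(0, noise)
        approved = evidence > threshold
        if approved and not effective:
            type1 += 1                               # approved a product that doesn't work
        if not approved and effective:
            type2 += 1                               # blocked a product that does work
    return type1 / n, type2 / n

for threshold in (0.5, 1.0, 1.5):
    t1, t2 = review(threshold)
    print(f"threshold={threshold}: Type I ~{t1:.3f}, Type II ~{t2:.3f}")
# Raising the threshold pushes Type I errors down while Type II errors climb.
```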

[Image credit: Statistics Solutions]

That is why the recently released results of an investigation by two House of Representatives committees into the Food and Drug Administration’s approval of Aduhelm, a drug to treat Alzheimer’s disease, are perplexing. The investigation found that the FDA approved the expensive drug despite significant uncertainty about whether it worked to slow or reverse patients’ symptoms, and that the approval process was “rife with irregularities.” It concluded that the agency’s actions “raise serious concerns about FDA’s lapses in protocol.”

The report illustrates that the FDA is having problems threading the needle between Type I and Type II errors. As a longtime veteran of the agency and the author of a favorably received book about it, I can bring some insight into the issue. Here’s the bottom line: The FDA is not following the science.

I spent 15 years as the FDA’s “biotechnology czar” at a time when many of the biopharmaceutical companies were small startups needing guidance as they negotiated the regulatory maze. However, the agency’s recent involvement with biotechnology company Biogen, the manufacturer of Aduhelm, went far beyond “guidance” and was highly unusual in many ways.

At the FDA’s urging, the drug was resurrected three months after the company had canceled clinical trials because the drug appeared not to work. Subsequent developments over the next year were marked by at least 115 meetings, calls, and email exchanges between the company and the FDA, according to the report from the Oversight and Reform and Energy and Commerce committees.

Regulators assumed a major role in rebooting the company’s narrative and the effort to get the drug approved, even though there were substantial reservations about the effectiveness of the drug among several factions within the FDA, especially its statisticians, and even within Biogen.

Significantly, there was also strong opposition to the approval from the FDA’s external advisory committee. One critical concern was whether the endpoint of the clinical trials was meaningful. The justification for accelerated approval was that in Alzheimer’s patients, Aduhelm targets and reduces the levels of a protein, amyloid, that forms plaques — abnormal clumps of protein that collect within the brain. But there were no conclusive data showing that a reduction of amyloid slowed cognitive decline, the ultimate goal of any treatment.

[Image credit: CNN]

If the drug doesn’t actually improve the dreaded symptoms of the disease, what good is it?

Another anomaly was that when the FDA granted “accelerated approval,” which requires a post-approval confirmatory trial, regulators gave Biogen an unprecedented eight years to complete it. During that time, the drug could be prescribed to the nation’s 6.5 million Alzheimer’s patients.

Finally, the approval granted was for all Alzheimer’s patients, although Aduhelm had only been tested in patients with mild-to-moderate disease. That raised the possibility that the drug would be administered to many people in whom it would not work, an important consideration given that Biogen intended to charge $56,000 per year per patient. That is several times more than other Alzheimer’s drugs, and it would be a huge hit to Medicare’s piggy bank — more than $300 billion a year for a possibly worthless drug!
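
A rough calculation shows the scale of that figure, assuming, purely for illustration, that all 6.5 million patients were treated at the announced list price:

```python
# Back-of-the-envelope cost if every U.S. Alzheimer's patient received Aduhelm at list price.
patients = 6_500_000          # Alzheimer's patients cited above
annual_price = 56_000         # announced price per patient per year, in dollars

total = patients * annual_price
print(f"${total / 1e9:.0f} billion per year")   # ~$364 billion, i.e. "more than $300 billion"
```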

What’s behind this dysfunction? It could be that a few high-ranking FDA officials have a personal obsession with treating Alzheimer’s disease that caused them to ignore the evidence and push the approval. Whatever the reason, I suspect we haven’t heard the last of the Aduhelm saga.

Henry I. Miller, a physician and molecular biologist, is the Glenn Swogger distinguished fellow at the American Council on Science and Health. A 15-year veteran of the FDA, he was the founding director of its Office of Biotechnology. Find Henry on Twitter @henryimiller

A version of this article was originally posted at the Washington Examiner and has been reposted here with permission. The Washington Examiner can be found on Twitter @dcexaminer

‘Free to fabricate’ or ‘barred from teaching’? Discord over COVID underscores threats to academic freedom — and the public

Two scientists. Two prominent institutions. 

One is a tenured professor running a microbial research laboratory where she investigates mechanisms of antibiotic resistance. During the COVID pandemic she lent her expertise to inform the public about the virus and mitigation efforts. She used comics, media interviews, and humor. She spoke with a jovial kindness that connected, and she created needed change in a crisis. Her science communication efforts earned her the distinction of being named New Zealander of the Year in 2020.

The other is a staff scientist in a computational artificial intelligence lab at the Massachusetts Institute of Technology. Over the past decade she has emerged as a notorious ideologue, leveling alarmist claims about genetically engineered crops, promoting a supposedly indisputable link between glyphosate and autism, and crowing about the dangers of vaccination.

The first scientist represents a clear consensus, a body of evidence that grows daily. 

The second one presents her hunches, rangy hypotheses born of yarn-and-stickpins-on-a-corkboard cherry-picking ventures that congeal as controversial opinion articles in low-impact journals, YouTube videos, and painfully cranky books.

One has been banned from public discourse. The other finds a larger audience. Can you guess which is which? 


Sharing academic freedom?

Both scientists share an umbrella, the broad promises of Academic Freedom, a covenant that allows and encourages scholars to share their expertise, to “accept a role as critic and conscience of society”, without reprisal. But the way they have been treated by their respective universities is telling, illustrating the threats faced by science communicators who take their public responsibilities seriously. Dr. Siouxsie Wiles has been a valuable voice in connecting science to the public during the pandemic. Yet she has faced tremendous harassment for her efforts and was silenced by her institution.

In contrast, Dr. Stephanie Seneff openly speculates on the “dangers” of the COVID-19 vaccines and constructs tenuous links to disease, the kind of demonstrably false claims that are regularly removed from Twitter and Facebook when made by others. [Read GLP Profile of Stephanie Seneff] Yet her campaign to discredit science continues, magnified by prime-time media outlets that give her a critic-free platform.

Last Sunday on Fox News she stated that through her “research” (she does not perform clinical or medical research) she has discovered a definitive connection between the COVID-19 vaccine and neurological disease, particularly Parkinson’s disease. She speaks in a credible-sounding word salad of technical terms that clearly impressed the host while misinforming a substantial viewership, on a network that has promoted vaccine skepticism.

Seneff falsely links COVID vaccinations to Parkinson’s Disease on The Ingraham Angle, claiming that “repeated boosters will be devastating in the long term” and parents “should do everything they can to avoid (the vaccine)”

Wiles is a microbiologist, an Associate Professor of Medical Sciences at the University of Auckland. She wears a lab coat, but also a pair of Doc Marten combat boots and a wild mane of hot-pink hair. She emerged as the trusted, expert, go-to source for media and public audiences concerning COVID, vaccinations and quarantines. Wiles has consistently represented the best science as the pandemic evolved. She connects through effective online animations, through articles in popular press, through interviews and stories. 

But as Wiles elevated the conversation, her efforts were not appreciated by all. Angry anti-vaccination groups, COVID-19 denialists, and others opposed to the science or policy took to social media, hammering Wiles and her institution with two solid years of harassment. Threatening emails, doxing, and even physical confrontation were the price of teaching science, and the abuse was daily and intense.

Wiles did the right thing. She stepped up when most academics remained quiet. She changed minds. She used creative media and her trusted platform as a public scientist to present the latest truths that empirical research gave us. 

In response, the University of Auckland did nothing to insulate Wiles from harassment, except to tell her to remove herself from the conversation. She was instructed to step out of social media, to take paid leave, and follow the guidance of advisors to “not require” public commentary. 

Science silenced

While the highly credible Wiles is being shut down, Seneff is peddling dangerous nonsense of all kinds. She entertains audiences at questionable conferences, writes books about the deadly dangers of low-toxicity agricultural chemistry, and is featured prominently in overnight conspiratorial media (e.g. Infowars, Coast-to-Coast AM).

She is particularly dangerous because she wields the credibility badge of an “MIT Senior Research Scientist” to promote her speculative views, such as that vaccines cause harm, that an innocuous herbicide will have made 50% of children autistic by 2025, and that the same herbicide was the biological basis of school shootings and the 2013 Boston Marathon bombing.

During the pandemic she has claimed that the herbicide glyphosate was causing COVID symptoms because it was used on corn, which was processed into ethanol, added to automobile fuel, converted to exhaust, and then breathed in by humans. Classic. 

Seneff has published claims strongly implying that glyphosate exposure is the causal factor in chronic diseases such as cancer, heart disease, obesity, asthma, celiac disease, infertility, Alzheimer’s disease and diabetes, among, well, pretty much every disease someone can die from. She consistently and strategically blurs the line between correlation and causation. Her speculation is so wild and rampant that two authors typically critical of agricultural technology have published articles correctly describing Seneff’s speculation-based certainty as “misrepresentation”, “failed logic” and “syllogism fallacies”. Her articles in the scholarly press are highly criticized, and in one case an article was anointed with a note of concern from the publisher.

One of her truly execrable books is being touted here on vaccine rejectionist Robert F Kennedy, Jr.’s disinformation website by alternative supplement peddler and osteopath Joseph Mercola, whom the New York Times called “the most influential spreader of coronavirus misinformation online.” [Read GLP Joseph Mercola profile]

Freedom or freedumb?

The contrasting responses from Seneff’s and Wiles’s respective institutions frame a dangerous paradox. While the one speaking from expertise is told to shut up and take paid leave, the one pitching baseless hypotheses is free to flip failed theories on prime-time television. The one seeking to end the pandemic through sober scientific conversation is unprotected and silenced, while the one seeking to advance an anti-agriculture, anti-vaccination agenda pours gas, unbridled, on a fire of anti-scientific dissent.

So where is the real line? The contrasting pandemic stories delineate the extremes of how Academic Freedom is interpreted and enforced. Clearly, the evidence base of each scholarly position is not the deciding factor. So why are scientifically rigorous efforts shut down, while a flighty soup of speculations that harms public health efforts is free to flow? 

The answer is simple: The vocal minority of the anti-vaccination, anti-GMO, anti-COVID-vaccine crowd demands that front-line scientists be reined in, that universities silence the scientists who speak the truth. Scientists are attacked and defamed on social media and activist websites. Universities are pounded with onerous, expensive public records requests intended to blow the lid off the supposed conspiracy and to discover what could possibly motivate a scientist to teach science. Universities, risk averse and wanting to avoid controversy, too often bend to the pressure, asking (or in some cases demanding) that faculty stay out of the conversation.

On the other hand, the scientifically enlightened defer to the self-policing nature of science. It’s presumed that crank opinions will fade, that limp claims will not influence policy, and that hard-core, published, reproducible evidence from rigorous experimentation will win the day. The President of MIT’s phone isn’t ringing.

Flaky freedom and the public trust

False or misleading information around COVID-19 extends a pandemic, destroys public trust and impedes public health efforts. Disinformation has a body count. Still, visible, bias-confirming, dangerous speculation is permitted to flourish under the guise of academic freedom. The internet ensures that it is prominent, ubiquitous, and everlasting. 

The solutions? 

Academic institutions must understand their valuable potential to lend clarity to contentious conversations in policy and science, and the power of academic freedom to help, or to harm, the public good. Academic institutions need to embrace their positions as representatives of the evidence. They need to promote their role as rigorous interpreters of reproducible data, checking and acknowledging implicit biases, and distilling the strengths and limitations of concerns or claims. They need to elevate the faculty who are willing to take on this important role, recognize them, promote their efforts, and run into controversy like a firefighter runs into a burning building.

On the other hand, they must decide when to limit agenda-driven conjecture that runs counter to empirical evidence and is not stated clearly as hypothesis. Opinions not bolstered by evidence, or at least by a plausible hypothesis, must be questioned, perhaps reined in. Misinforming the public during a pandemic has a cost, especially when amplified through powerful media, and institutions need to guard against that.

In both cases academic freedom is being abused to the detriment of science communication, public understanding, and most of all, trust in our academic and medical institutions. Both cases frame the danger of not telling the truth, of not defending evidence-based positions, and the impact information can have on public perception of science.  

Kevin Folta is a professor, communications consultant and speaker. He hosts the Talking Biotech and GLP’s Science Facts and Fallacies podcasts. Views are presented independently of his roles at the University of Florida. @Kevin Folta

This article first appeared on the GLP on January 18, 2022.
