Viewpoint: It’s time to end the innovation-blocking, organic lobby-promoted, biotechnology-regulating Cartagena Protocol

The Cartagena Protocol on Biosafety (CPB) is an international agreement developed by governments and environmental organizations opposed to the commercialization of genetically modified (GM) crops and agricultural biotechnology. Drafted in 2000, the CPB came into effect in 2003 after being ratified by the 50 countries required to formalize the agreement. The CPB is a sub-agreement of the United Nations Convention on Biological Diversity. Presently, 90 countries have ratified the CPB; however, the major GM crop-producing countries, including Argentina, Australia, Canada, and the United States, are not signatories to this agreement.

The scope of the CPB is provided in Article 4, which states, “[t]his Protocol shall apply to the transboundary movement, transit, handling and use of all living modified organisms that may have adverse effects on the conservation and sustainable use of biological diversity, taking also into account risks to human health.” This scope shows how narrowly focused the CPB is designed to be, as it does not address other aspects of agriculture that have adverse effects on the environment, such as organic farming or monoculture cropping. By narrowly focusing on international trade, the intent of the CPB was to give countries justification to reject the production of commonly exported GM commodity crops, such as corn or soy, and to reject GM crop imports.

An obsolete protocol thanks to evidence

The leading reason the CPB is an obsolete agreement is that 25 years of safe GM crop production and science have refuted the claims and fears advanced by environmental activist organizations. In the past 30 years, federal regulatory agencies in over 70 countries have conducted 4,485 risk assessments, none of which found any difference in risk between GM crops and conventional, non-GM crops. The environmental benefits of GM crops are confirmed by a quantified reduction of 775 million kg in pesticide applications over the period from 1996 to 2019. Because GM crops require fewer chemical applications, their environmental impact is reduced by 18%. The human health benefits are additionally estimated at a potential 100 million fewer cases of pesticide poisoning from chemical applications. Further to the human health benefits, research has confirmed that GM corn reduces cancer-causing mycotoxins by 30%.

Aspergillus flavus fungus. Credit: Agronomag

Evidence confirms the environmental costs of not adopting GM crops. Australia approved GM canola in 2003; however, a moratorium on production was implemented in 2004. These moratoriums began to be lifted in 2008, and an assessment of the foregone environmental benefits identified an additional 6.5 million kg of chemical active ingredients applied to canola land. The additional chemicals raised the environmental impact on farmers, consumers and the ecology by 14.3%, burned 8.7 million litres of diesel fuel and released an additional 24.2 million kg of greenhouse gas (GHG) and related emissions. An assessment of agriculture in the European Union quantifies the dramatic cost of its refusal to adopt GM crops: EU agriculture needlessly releases 33 million tonnes of GHGs compared to a scenario of GM crop adoption. This suggests that the EU’s approach to GM crops is grounded more in politics and perception than in scientific evidence.

CPB impacts on improving food security and mitigating climate change

By imposing needless regulatory barriers on GM crops, the CPB stands as a substantial impediment to mitigating climate change and reducing food insecurity. Since 1996, GM crops have increased soybean production by 278 million tonnes and corn production by 498 million tonnes. Numerous food-insecure African countries have commercialized GM crops in recent years, having confirmed that GM crops benefit the environment and can contribute to improving food security. The CPB acts as a substantial barrier to innovation because it is frequently linked with concepts of strict liability, whereby the onus of demonstrating zero risk is placed on innovative agricultural biotechnology products. Zero risk is a mythical concept, as every product and human action carries some risk. Food choices, travel, drugs and even exercise all have risks associated with them.

Follow the latest news and policy debates on agricultural biotech and biomedicine. Subscribe to our newsletter.

The number of people who are food insecure has increased in recent years, further compounded by the Covid pandemic and its effects on food supply chains. At the end of the day, GM crops increase yields. As climatic changes further impact crop and food production, all technologies capable of reducing food insecurity need to be freely available to those wanting access. In the 25 years of safe GM crop production, no credible, peer-reviewed evidence has indicated adverse effects, which confirms how misguided the original drafting parties’ intentions for the CPB were. The CPB is an agreement that is not grounded in scientific data or evidence but rather is based on politically motivated misperceptions. If significant improvements in global food security are to be achieved in the coming decade, the barriers imposed by the CPB need to be removed. The time for the CPB to be retired as an international agreement is long overdue.

Stuart J. Smyth is a professor in the Department of Agricultural and Resource Economics and holds the Industry Funded Research Chair in Agri-Food Innovation at the University of Saskatchewan. Follow him on Twitter @stuartsmyth66

A version of this article was originally posted at SAIFood and has been reposted here with permission. SAIFood can be found on Twitter @SAIFood_blog

Viewpoint: Zantac BS—Sanofi’s marketing sleight-of-hand in ‘reformulating’ its Zantac acid-reducer deceives consumers

Back in 2019, I wrote about some of the tricks that drug companies use to hang onto brand name sales even though the drug’s generic equivalent may have been around for years, if not decades (See The Insane World Of Online Decongestant Commerce).

For example, Sanofi really wants you to buy the Allegra brand, even though the generic equivalent fexofenadine sits right next to it on pharmacy shelves. The solution? Sell Allegra with small changes and voila! Three “different” products: Allegra Allergy 12 hr; Allegra-D (Allegra with phenylephrine, a useless decongestant, added); and Allegra-D 24 (an extended-release formulation with the same useless decongestant added).

This is nothing compared to what McNeil Consumer Healthcare (part of Johnson & Johnson) has pulled off with Sudafed. Of the 6 brand-name Sudafed products I could find, only one contained pseudoephedrine – the drug that makes the stuff work at all (1). Here is a brief summary of the differences between pseudoephedrine (old Sudafed) and phenylephrine (“new” Sudafed).

Evidence for the pharmaceutical superiority of pseudoephedrine.

Yet that didn’t stop McNeil from pulling this sleight of hand…

Six “Sudafed” products, only one of which contains the original decongestant, pseudoephedrine. The others have various other drugs for coughs and headaches mixed in, but no pseudoephedrine.

It would seem that Sanofi is at it again. The company has come out with a new “formulation” of the wildly successful stomach acid reducer Zantac, the world’s first billion-dollar drug. It’s a brand name worth hanging onto, but Sanofi had to pull a bait-and-switch to do so. Here’s why.

Would you care for a Zantac?

First, the name Zantac sort of implies that you are buying…perhaps…Zantac, no? No. The drug in real Zantac was called ranitidine. But real Zantac-ranitidine ran into all kinds of trouble because it contained a carcinogenic impurity called N-nitrosodimethylamine (NDMA). NDMA, which is also a seriously bad liver toxin, was found in Zantac products worldwide. Worse still, the amount of NDMA increased with temperature and time, leading to the drug’s recall in 2020 (2). Now Sanofi is selling famotidine, the generic name of the drug in Pepcid, as Zantac 360°. What does this name even mean? Beats me. I would have preferred something a little more honest, like “The Drug Formerly Known as Pepcid But Will Not Give You Cancer,” but this is admittedly not all that catchy.

Label “Lies”

Maybe even worse is all the nonsense on the box, which is plenty misleading, probably intentionally so. The 360° remains a mystery. Was it really hot in the factory where the drug was made? Or is the pill spinning around in circles? The rest of the stuff is pure salesmanship. The term “new formula” is incorrect. It is not a new formula; it is a different drug, not the same one with cherry flavor added. That would be a “different formula.” Then we have “Original Strength” and “Maximum Strength,” which Sanofi deems necessary because people apparently cannot multiply 10 milligrams by 2 and come up with 20 milligrams. Do we really need both of these taking up shelf space? Or is Sanofi trying to imply that the “Maximum Strength” is something special rather than two regular pills?

Finally, we have “Now With Famotidine,” as if some special new ingredient has been added to the original Zantac to make it even better! Nah, Sanofi just changed the name and did so in an intentionally confusing manner. Maybe they should have just gone with this…

It’s enough to give you heartburn (3).


(1) Sudafed is now sold behind the counter, and ID is required to purchase it. There are also limits on the amount of drug that can be bought. This is because pseudoephedrine, the active decongestant, can be used to synthesize methamphetamine. Sudafed PE contains phenylephrine, which cannot be converted to methamphetamine. It is sold without restrictions.

(2) Zantac was not the only drug to be contaminated with NDMA. The blood pressure drug Losartan and two others in the class contained the impurity as did the diabetes drug metformin.

(3) This is probably the 15th (or more) article I’ve written condemning certain practices of drug companies. Am I (or ACSH) “pharma shills” like some fools claim? You tell me.

Dr. Josh Bloom is Executive Vice President of the American Council on Science and Health. He has published more than 60 op-eds in numerous periodicals, including The Wall Street Journal, Forbes, and New Scientist. Follow him on Twitter @JoshBloomACSH

A version of this article was originally posted at the American Council on Science and Health website and has been reposted here with permission. The American Council on Science and Health can be found on Twitter @ACSHorg

Viewpoint: Is Bill Gates the point man for ‘taking over’ the world’s food supply in service of Big Agriculture?

Last month, comedian Russell Brand gave his YouTube followers a 17-minute lecture about Bill Gates’ plot to take over the world’s food supply and force-feed the developing world genetically engineered (GE) crops. Relying mostly on anti-GMO superstar Vandana Shiva, Brand unsurprisingly got just about everything wrong.

[Recently,] U.S. Right to Know’s (USRTK) co-founder and managing editor Stacy Malkan made more or less the same argument Brand did in a long story, “Bill Gates has radical plans to change our food. What’s on the menu?” posted on the anti-GMO group’s website. [1] The only difference between the two is that Malkan, a seasoned organic industry PR expert, knows the subject better and told a more convincing story. Because the arguments Malkan employed circulate widely on the internet, they’re worth refuting in some detail. Let’s take a look at a few of her key points.

The “techno-food industrialists”

Much like in a good thriller movie, there are always easy-to-spot villains in USRTK’s narratives; in years past, the nemesis has been Coca-Cola, Monsanto and even the entire food industry. This time it’s Gates, who is investing his billions in technologies that can fight hunger and climate change:

“To the techno-food industrialists, hunger and climate change are problems to be solved with data and engineering. The core ingredients of their revolutionary plan: genetic engineering — and patenting — of everything from seeds and food animals, to microbes in the soil, to the processes we use to make food. Local food cultures and traditional diets could fade away as food production moves indoors to labs that cultivate fake meat and ultra-processed foods.”

Data and engineering solve lots of problems. For example, Malkan wrote her article on a computer made from patent-protected technology instead of a typewriter. This allowed her to reach a mass audience with a few keystrokes. Why should we be mortified by Gates’ intellectual property (IP) claims on food technology but not by Dell’s patents on its laptops?

Rhetorical questions aside, Malkan never actually explained why patents are harmful. There are some interesting arguments against intellectual property, but they don’t have anything to do with USRTK’s contention that the technology IP protects is inherently harmful.

The anti-GMO movement has alleged for decades that GE crops destroy local food cultures, but this was always a myth. Different regions of the world utilize genetic engineering to enhance their local cuisine. Hawaii grows virus-resistant papaya; Bangladesh produces insect-resistant eggplant; Nigeria cultivates cowpea with the same trait. African researchers are using gene editing to rescue the continent’s staple crops, to the benefit of millions of people.

All this work is done because farmers and consumers need solutions to practical problems. Nobody forces developing nations to use this technology.

Ginkgo Bioworks, a Gates-backed start-up that makes “custom organisms,” just went public in a $17.5 billion deal. The company uses its “cell programming” technology to genetically engineer flavors and scents into commercial strains of engineered yeast and bacteria to create “natural” ingredients, including vitamins, amino acids, enzymes and flavors for ultra-processed foods.

This isn’t a bad thing. If it weren’t for synthetic biology, we’d have to use older industrial techniques to make these products or grow the plants that produce their ingredients. Either approach takes a toll on the environment. Depending on where you live, there’s a good chance some organic products in your grocery store contain one or more of these fermented ingredients, by the way.

Biotech’s eco-benefits

Gates believes that “genetically modified seeds and chemical herbicides, in the right doses – and not land-intensive organic farming – are crucial to curbing carbon emissions.”

He’s right. Research over the last two decades has shown that GE crops and the weedkiller glyphosate—USRTK’s favorite bogeyman—have substantially reduced greenhouse gas (GHG) emissions and halted the expansion of farmland globally. I stress again: these gains would not have been possible had farmers refused to utilize these products. As always, the customer is king in economics. This is why activist groups try to ban pesticides and biotech products via the court system; they can’t convince growers to give up useful tools.

Recent science shows that chemical-intensive industrial agriculture is a key driver of climate change, soil erosion and the worldwide decline of insects.

Wrong, wrong and wrong. Relative to energy production and transportation, agriculture’s contribution to climate change is quite small. Moreover, the very technologies Malkan attacked help solve these problems, and it’s easy to see why. Chemicals that reduce tillage slash carbon emissions and erosion; technologies that allow greater production on less land preserve biodiversity. These are real issues, and there are no magic bullets. But demonizing modern agriculture is no solution.

“Unpredictable” gene editing

Gene-editing techniques, and especially CRISPR, are efficient but unpredictable. Studies show the CRISPR process can create unexpected mutations including DNA damage and other off-target effects.

Compared to what? Traditional plant and animal breeding are far less precise and predictable than gene editing, according to a team of geneticists writing in Nature in 2019:

To put this in perspective, one study of whole genome sequence data from 2703 individual cattle in the 1000 Bull Genomes Project revealed more than 86.5 million differences (variants) between different breeds of cattle. These variants included 2.5 million insertions and deletions of one, or more, base pairs of DNA, and 84 million single nucleotide variants, where one of the four nucleotides making up DNA (A, C, G, T) had been changed to a different one.

Bottom line: every plant and animal we consume has mutated DNA. Gene editing just gives us more control over how many mutations occur and where they show up in the organism’s genome.

Intellectual yet idiot

The anti-GMO movement has a small roster of intellectuals that lends the appearance of credibility to its arguments. Nassim Taleb is one of the all-stars. Why does his opinion matter? According to Malkan:

One of the world’s foremost experts on probability and uncertainty, Nassim Taleb, considered that question — What could go wrong with GMOs? — for a 2014 paper … The authors analyzed GMOs in the context of what they called a “non-naive” view of the Precautionary Principle. They concluded: “GMOs represent a public risk of global harm” and should be subject to “severe limits.”

Taleb has a lot of interesting insights to offer; I highly recommend his books. But on biotechnology, he is a perfect example of his “intellectual yet idiot” (IYI) concept in action. “The IYI pathologizes others for doing things he doesn’t understand without ever realizing it is his understanding that may be limited,” he wrote in Skin In The Game.

Given his ignorance of the subject, it’s no surprise that Taleb was roasted by actual experts for publishing that paper. Among their criticisms was the fact that “GMO” is a nebulous category. Trying to classify and evaluate a wide variety of products based on a meaningless definition tells us nothing about the safety of any of them.

This is why regulators perform case-by-case risk assessments of new crop traits, as philosopher of science Giovanni Tagliabue explained in response to Taleb’s “study.” Malkan’s unending urge to obsess over the processes that produced those traits just reveals her ignorance of the relevant science.

That’s ultimately why her story about Bill Gates’ “radical plans” for our food system is baseless. And it helpfully explains why USRTK and their fellow anti-GMO crusaders are on the fast track to obscurity.


[1] I’m not linking to USRTK, sorry. Here’s a profile of the group.

Cameron J. English is the director of bio-sciences at the American Council on Science and Health. Follow him on Twitter @camjenglish

A version of this article was originally posted at the American Council on Science and Health and has been reposted here with permission. The ACSH can be found on Twitter @ACSHorg

X-ray vision to peer into the rubble of collapsed buildings or check for booby traps? The science of the future is now

Within seconds after reaching a city, earthquakes can cause immense destruction: Houses crumble, high-rises turn to rubble, people and animals are buried in the debris.

In the immediate aftermath of such carnage, emergency personnel desperately search for any sign of life in what used to be a home or office. Often, however, they find that they were digging in the wrong pile of rubble, and precious time has passed.

Imagine if rescuers could see through the debris to spot survivors under the rubble, measure their vital signs and even generate images of the victims. This is rapidly becoming possible using see-through-wall radar technology. Early versions of the technology, which indicate whether a person is present in a room, have been in use for several years, and some can measure vital signs, albeit under better conditions than through rubble.

I’m an electrical engineer who researches electromagnetic communication and imaging systems. I and others are using fast computers, new algorithms and radar transceivers that collect large amounts of data to enable something much closer to the X-ray vision of science fiction and comic books. This emerging technology will make it possible to determine how many occupants are present behind a wall or barrier, where they are, what items they might be carrying and, in policing or military uses, even what type of body armor they might be wearing.

These see-through-wall radars will also be able to track individuals’ movements, and heart and respiration rates. The technology could also be used to determine from a distance the entire layout of a building, down to the location of pipes and wires within the walls, and detect hidden weapons and booby traps.

See-through-wall technology has been under development since the Cold War as a way to replace drilling holes through walls for spying. There are a few commercial products on the market today, like Range-R radar, that are used by law enforcement officers to track motion behind walls.

How radar works

Radar stands for radio detection and ranging. A radar sends out a radio signal that travels at the speed of light. If the signal hits an object, such as a plane, part of it is reflected back toward a receiver, and an echo appears on the radar’s screen after a time delay. Because the signal’s speed is known, this delay can be used to estimate the object’s distance.

In 1842, Christian Doppler, an Austrian physicist, described a phenomenon now known as the Doppler effect or Doppler shift, where the change in frequency of a signal is related to the speed and direction of the source of the signal. In Doppler’s original case, this was the light from a binary star system. This is similar to the changing pitch of a siren as an emergency vehicle speeds toward you, passes you and then moves away. Doppler radar uses this effect to compare the frequencies of the transmitted and reflected signals to determine the direction and speed of moving objects, like thunderstorms and speeding cars.
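The two measurements described above reduce to two short formulas: range from round-trip echo delay, and target speed from Doppler shift. The sketch below illustrates both; the delay, shift and carrier frequency are made-up numbers chosen for round results, not values from any real radar.

```python
# Back-of-the-envelope radar math: range from echo delay and speed from
# Doppler shift. All input numbers are illustrative.

C = 3.0e8  # speed of light, m/s


def range_from_delay(delay_s: float) -> float:
    """Round-trip echo delay -> one-way distance: R = c * t / 2."""
    return C * delay_s / 2.0


def speed_from_doppler(doppler_hz: float, carrier_hz: float) -> float:
    """For a reflected wave, f_d = 2 * v / wavelength, so
    v = f_d * c / (2 * f_carrier)."""
    return doppler_hz * C / (2.0 * carrier_hz)


# An echo arriving 2 microseconds after transmission puts the target 300 m away.
print(range_from_delay(2e-6))          # 300.0
# A 100 Hz Doppler shift on a 3 GHz carrier corresponds to a 5 m/s target.
print(speed_from_doppler(100.0, 3e9))  # 5.0
```

The factor of 2 in both formulas comes from the wave traveling out to the target and back, which doubles both the path length and the apparent Doppler shift.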

The Doppler effect can be used to detect tiny motions, including heartbeats and chest movement associated with breathing. In these examples, the Doppler radar sends a signal to a human body, and the reflected signal differs based on whether the person is inhaling or exhaling, or even based on the person’s heart rate. This allows the technology to accurately measure these vital signs.

How radar can go through walls

Like cellphones, radars use electromagnetic waves. When a wave hits solid walls like drywall or wood walls, a fraction of it is reflected off the surface. But the rest travels through the wall, especially at relatively low radio frequencies. The transmitted wave can be totally reflected back if it hits a metal object or even a human, because the human body’s high water content makes it highly reflective.

If the radar’s receiver is sensitive enough – a lot more sensitive than ordinary radar receivers – it can pick up the signals that are reflected back through the wall. Using well-established signal processing techniques, the reflections from static objects like walls and furniture can be filtered out, allowing the signal of interest – like a person’s location – to be isolated.
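The clutter-rejection idea above can be sketched in a few lines: reflections from static objects are identical from pulse to pulse, so subtracting each range bin's mean over "slow time" (across pulses) cancels them, leaving only moving targets such as a breathing chest. Everything below is simulated toy data, not output from a real radar; the pulse rate, bin count and motion frequency are assumptions.

```python
# A minimal sketch of static-clutter rejection for through-wall radar.
# Static returns (walls, furniture) are constant across pulses; subtracting
# each range bin's mean over slow time leaves only moving targets.
import numpy as np

rng = np.random.default_rng(0)
n_pulses, n_bins = 256, 64
prf = 100.0                       # pulse repetition frequency, Hz (assumed)
t = np.arange(n_pulses) / prf     # slow-time axis, seconds

# Static scene: strong constant echo per bin, plus weak receiver noise.
scene = 5.0 * rng.standard_normal(n_bins)
data = np.tile(scene, (n_pulses, 1))
data += 0.05 * rng.standard_normal((n_pulses, n_bins))

# A person in range bin 20: chest motion at ~0.25 Hz weakly modulates the echo.
person_bin = 20
data[:, person_bin] += 0.5 * np.sin(2 * np.pi * 0.25 * t)

# Clutter rejection: remove each bin's mean across pulses.
filtered = data - data.mean(axis=0, keepdims=True)

# The bin with the most residual energy is where the motion is.
energy = (filtered ** 2).sum(axis=0)
print(int(energy.argmax()))   # 20
```

Real systems use more sophisticated filters (moving-target indication, Doppler processing), but the principle is the same: what does not change between pulses is clutter and can be subtracted away.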

The key to using radar to track objects on the other side of a wall is having a very sensitive antenna that can pick up the greatly diminished reflected radio waves. Credit: Abdel-Kareem Moadi

Turning data into images

Historically, radar technology has been limited in its ability to aid in disaster management and law enforcement because it hasn’t had sufficient computational power or speed to filter out background noise from complicated environments like foliage or rubble and produce live images.

Today, however, radar sensors can often collect and process large amounts of data – even in harsh environments – and generate high-resolution images of targets. By using sophisticated algorithms, they can display the data in near real-time. This requires fast computer processors to rapidly handle these large amounts of data, and wideband circuits that can rapidly transmit data to improve the images’ resolution.

Recent developments in millimeter wave wireless technology, from 5G to 5G+ and beyond, are likely to help further improve this technology, providing higher-resolution images through order-of-magnitude wider bandwidth. The wireless technology will also speed data processing times because it greatly reduces latency, the time between transmitting and receiving data.

My laboratory is developing fast methods to remotely characterize the electrical properties of walls. These methods help calibrate the radar waves and optimize the antennas so that the waves pass through the wall more easily, essentially making the wall transparent to the waves. We are also developing the software and hardware systems to carry out the radar systems’ big-data analyses in near real-time.

This laboratory wall-penetrating radar provides more detail than today’s commercial systems. Credit: Aly Fathy

Better electronics promise portable radars

Radar systems at the low frequencies usually required to see through walls are bulky because of the large size of the antenna: antenna dimensions scale with the wavelength of the electromagnetic signal, and lower frequencies mean longer wavelengths. Scientists have been pushing see-through-wall radar technology to higher frequencies in order to build smaller and more portable systems.
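The wavelength-to-antenna-size relationship can be made concrete with the formula wavelength = c / f. The sketch below uses a half-wave dipole (length of half a wavelength) as a rough stand-in for antenna size; the frequencies are illustrative, not those of any particular system.

```python
# Why low-frequency through-wall radars are bulky: antenna dimensions scale
# with wavelength, and wavelength shrinks as frequency rises. A half-wave
# dipole (lambda / 2) is used here as a rough proxy for antenna size.

C = 3.0e8  # speed of light, m/s


def wavelength_m(freq_hz: float) -> float:
    """wavelength = c / f"""
    return C / freq_hz


for f in (1e9, 10e9, 60e9):  # 1 GHz, 10 GHz, 60 GHz (millimeter wave)
    lam = wavelength_m(f)
    print(f"{f / 1e9:>4.0f} GHz: wavelength {lam * 100:6.2f} cm, "
          f"half-wave antenna ~{lam / 2 * 100:5.2f} cm")
```

At 1 GHz the half-wave element is about 15 cm; at 60 GHz (millimeter wave) it shrinks to a few millimeters, which is why pushing to higher frequencies enables portable systems.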

In addition to providing a tool for emergency services, law enforcement and the military, the technology could also be used to monitor the elderly and read vital signs of patients with infectious diseases like COVID-19 from outside a hospital room.

One indication of see-through-wall radar’s potential is the U.S. Army’s interest. They’re looking for technology that can create three-dimensional maps of buildings and their occupants in almost real-time. They are even looking for see-through-wall radar that can create images of people’s faces that are accurate enough for facial recognition systems to identify the people behind the wall.

Whether or not researchers can develop see-through-wall radar that’s sensitive enough to distinguish people by their faces, the technology is likely to move well beyond blobs on a screen to give first responders something like superhuman powers.

Aly Fathy is a Professor of Electrical Engineering at the University of Tennessee. Aly conducts research in electromagnetics, antennas, microwave circuits, propagation and ultra wideband systems. 

A version of this article was originally posted at the Conversation and has been reposted here with permission. The Conversation can be found on Twitter @ConversationUS

How climate change is disrupting animal ‘food webs’ and what it might mean for the future

It seems like each day scientists report more dire consequences of climate change on animals and plants worldwide. Birds that are migrating later in the year can’t find enough food. Plants are flowering before their insect pollinators hatch. Prey species have less stamina to escape predators. In short, climatic shifts that affect one organism are likely to trigger ripple effects that can disturb the structure and functioning of entire ecosystems.

One component of animal health that largely reflects the surrounding environment is the microbiome, the consortium of microbes now known to aid in digesting food, regulating the immune system and protecting against pathogens. The species of bacteria that make up the microbiome are primarily recruited from the environment. Thus, food webs and other animal interactions that influence environmental bacteria have the potential to shape animals’ microbiomes.

But what happens when climate change disturbs the environment, causing shifts in animals’ microbiomes that prevent the microbes from performing the key functions that animals need to survive and thrive?

I am an ecologist in the laboratory of Gui Becker specializing in tropical research at the intersection of emerging amphibian disease and climate change. Hundreds of amphibians across the global tropics are facing mounting pressures from disease and climate change. And there is growing evidence that environmental stressors are changing animals’ microbiomes, contributing to the challenges they face.

Building an ecosystem

In a recent experiment designed to figure out how the microbiome of tadpoles was influenced by other animal species in the environment, my colleagues and I studied healthy communities of freshwater bacteria, crustaceans and insects from wetland habitats in the Brazilian Atlantic Forest. We focused on their feeding activities – how they filtered water to get their food and broke down dead plant material.

It is well known that these feeding activities are essential for ecosystem functions such as decomposition. But we found that these food webs also served another purpose: They boosted growth of “good” bacterial species in the environment, such as species that fight pathogenic microbes.

As a result, tadpoles sharing the ecosystem with these microorganisms and invertebrates had healthier gut microbiomes. This provided a strong defense against pathogens, compared with tadpoles that weren’t sharing their habitat with diverse networks of organisms.

Our latest work took this research a step further by testing how a disturbance such as climate warming could influence these food webs that help ensure the health of vertebrate microbiomes in the wild.

Mapping species interactions in diverse ecosystems is difficult under field conditions, where the environment is unpredictable, and replicating experiments to confirm findings is challenging.

To address this problem, we used plants from the bromeliad family to function as mini-ecosystems so that my colleagues and I could study the effects of a warming climate on species interactions in the more controlled conditions of a laboratory.

Tightly whorled leaves of bromeliad plants provide a mini-aquarium for tadpoles, invertebrates and microorganisms. Credit: Sasha Greenspan

Bromeliads are ideal for experimental work on community interactions because they are natural microcosms and their small dimensions allow us to grow many of them in a small space. Our study sites in Brazil’s tropical rainforests support extremely high densities of bromeliads from ground to canopy, often resembling a Dr. Seussian wonderland.

To recreate natural ecosystems for our experiment, we planted a garden of 60 identical bromeliads outdoors in the shade of a small tropical forest in São Paulo, Brazil. We then allowed the bromeliads to be naturally colonized by invertebrates and microorganisms for three months. Some of the plants were exposed to ambient temperatures, and others were warmed up to six degrees above ambient – with a custom outdoor heating system – to match predicted global climate change trends.

Nearby, we collected our model host species for the experiment – tadpoles of the treefrog species Ololygon perpusilla that breed only in the mini-aquariums created by the leaves of bromeliads.

We then transferred the bromeliads from outdoors into the lab, added a tadpole to the tiny pool of water at the center of each plant and applied the same heating system to simulate warming. After a few weeks, we inventoried the bacterial species in the tadpole intestines as well as the bacteria and invertebrate species living in the bromeliads.

Setup of the experiment with 60 bromeliads and a custom heating system. Credit: Gui Becker

The domino effects of climate change

In this study, published in Nature Climate Change, we found that warming effects on ecological community networks – including environmental bacteria, worms, mosquito larvae and other aquatic invertebrates – compromised tadpole gut flora, leading to reduced growth, which is a proxy for fitness.

The health of tadpole gut microbiomes was specifically linked to changes in the community of aquatic bacteria and invertebrates living alongside tadpoles within the bromeliads. That is, warming supported growth and reproduction of certain species of bacteria and invertebrates and inhibited others, and these environmental changes disturbed the tadpole gut microbiome.

The higher temperatures also led to faster development of filter-feeding mosquito larvae. Our results suggest that higher rates of filter-feeding also altered the species composition of bacteria in the environment in ways that further disturbed the tadpole microbiome.

In fact, tadpole growth – a proxy for the species’ health – was more strongly associated with warming-induced shifts in their gut microbiomes than with direct effects of warming on growth that are expected in cold-blooded animals like tadpoles or effects of warming on the tadpoles’ algal food resources.

Our work demonstrates how global-scale climate change can impact even the smallest levels of biological organization, including the symbiotic bacteria living within the digestive tract of a tiny frog species.

Looking at these processes within the context of an entire ecological community helps widen our perspective on microbiome health under global change.

Studies investigating effects of warming on vertebrate microbiomes typically focus on direct temperature responses of host flora rather than situating hosts within the complex and intertwined communities where they live in the wild.

Our findings support a growing consensus among scientists that, while climate warming is expected to push some animals beyond their thermal thresholds, a far more ubiquitous consequence of warming is that it may trigger an ecological domino effect, disrupting the species interactions that ecosystems need to function properly.

Sasha Greenspan is a Research Associate at the University of Alabama. Find Sasha on Twitter @sashagreenspan

A version of this article was originally posted at the Conversation and has been reposted here with permission. The Conversation can be found on Twitter @ConversationUS

Organ transplantation: Challenging ethical questions on race, economics, and the meaning of life

At an international conference on kidney transplantation in 1963, a disagreement broke out about exactly when a patient should be considered dead enough to become an organ donor. One doctor stood up, angrily declaring that he was not “going to just wait around for the medical examiner to declare the patient dead. I’m just going to take the organ.” The sentiment wasn’t as shocking as it sounds; there weren’t any solid criteria for brain death at the time, and many doctors were asking why they should wait on the dead and dying to save the living.

Medicine has long been shadowed by the specter of the resurrection men who dug up and raided recently buried coffins in the dead of night to supply 19th century anatomists with objects for study. The need for grave robbers had largely been obviated by body donation programs when, in 1954, the first successful kidney transplant at the Peter Bent Brigham Hospital in Boston kicked off the race to transplant other organs. And physicians weren’t willing to confine themselves to those like the kidney that the human body has in duplicate. They wanted to transplant the heart. And that, of course, requires a donor who will not survive the surgery. The age of resurrection men might well have been over, but the age of what we might call harvest men had only begun.

First ever kidney transplant. Credit: Brigham and Women’s Hospital

When are you dead enough to donate your organs? It sounds like an easy question to answer, but there is still no single, simply-applied medical definition of that curious and ephemeral moment. Before the advent of life support, death usually resulted from a halt in respiration or a stopped heart. By the mid-1950s, however, artificial respiration was possible through the use of machines that filled the lungs with air, oxygenating the blood and thereby keeping the brain and heart working on. What, then, was a physician to make of a patient with fixed pupils, no reflexes, and no autonomous breathing who still, mechanically, drew breath? Dead? Dead enough? Even with the advent of technology that could detect brain activity, or electroencephalography (EEG), physicians of the ‘50s and ‘60s were in uncharted territory. It wasn’t clear if a person needed EEG activity to be considered alive, and even some patients already determined to be brain-dead still had occasional blips.

The problem went from philosophical to actual when, on Jan. 2, 1968, at a hospital in Cape Town, South Africa, a surgeon named Christiaan Barnard took the heart from a 24-year-old Black man named Clive Haupt and placed it in the chest cavity of Philip Blaiberg, a White dentist with chronic heart disease.

Haupt had been bathing in the sea while on a family picnic when he suddenly suffered a subarachnoid hemorrhage resulting in bleeding in the brain. He was admitted to the hospital and placed in the care of Raymond Hoffenberg, the physician on duty at the time. That night, Hoffenberg had a visit from the hospital’s transplant team, who asked him to confirm that the patient was dead. Hoffenberg refused. Something was still going on inside Haupt’s head, activity Hoffenberg called elicitable neurological reflexes. “What sort of heart are you going to give us?” Hoffenberg recalled the head of surgery asking him, flanked by an eager Barnard. Their patient, Blaiberg, was in greater danger of death with every passing minute. They wanted the heart out as soon as possible, and Hoffenberg, fearful of undermining Barnard, agreed to declare Haupt dead by the next morning.

Clive Haupt. Credit: Topfoto/United Archives International

With his new heart, Blaiberg lived for 18 more months, became a media sensation, and paved the way for the future harvest of organs from what are called beating heart donors, that is, living bodies with allegedly dead brains. This, despite the fact that no one could yet agree about what brain death really meant.

While newspapers in apartheid South Africa announced Barnard’s surgical success, some journalists pointed out that Haupt’s heart would now be permitted to go places his body could not. News of Barnard’s heart transplant was also met by the Black press in the United States with trepidation. The Afro-American, a weekly newspaper in Baltimore, warned that doctors might start taking the organs of any Black patient, whatever their ailment.

A few months later, that scenario seemed to play out in Virginia when a factory worker named Bruce Tucker suffered a head injury, was declared unclaimed dead, and had his heart removed, all in the space of 24 hours.

Bruce Tucker had his heart removed without his or his family’s consent in one of the first heart transplants in the segregated American South. Credit: Richmond Times-Dispatch

Tucker’s family sued, and the ensuing case established a first legal definition of brain death. The initially skeptical judge was persuaded by testimony from the ad hoc committee at Harvard Medical School, which had been formed to craft a medical definition of brain death that very same year: When a patient falls into a permanent vegetative state — marked by coma, a lack of independent breathing, and “irreversible loss of all functions of the brain” — they are considered brain-dead, even if their heart still beats. And if they are brain-dead, the jury in the Tucker case ultimately decided, they are also harvestable; their organs may be taken, with consent. The Tuckers argued that the doctors did not allow them enough time to respond to inquiries before making their declaration and taking their quarry, but the family ultimately lost in court.

The decision favored the doctors, and by doing so, also offered the first precedent where brain death was death in the eyes of the law. While medicine continues to follow this legal precedent, the field of organ transplantation is struggling with a different kind of ethical tangle today.


Sometimes called the Red Market, the illicit trade of human organs and tissues has become a lucrative business. Kidneys are especially commodifiable, as the donors can live after surgery. In 2010, the company behind St. Augustine’s Hospital in Durban, South Africa, paid out a large settlement in a cash for kidneys scheme; convictions for similar practices were won against doctors at a Kosovo facility around the same time. In 2018, six physicians were arrested in China’s Anhui province for illegally selling organs of accident victims, and an independent tribunal reported to the United Nations Human Rights Council that the Chinese government harvested organs from religious and ethnic minorities on an “industrial scale.”

Much has changed since the first hearts and kidneys were harvested to save lives. Organ transplantation and donation have become safe, effective, and mainstream, a modern miracle we scarcely think of beyond the checkbox on our driver’s license application. What has not changed is the sense of urgency. In 1963, when a doctor stood up to declare he would not wait on a medical examiner’s declaration of death, only a handful of transplants had even been attempted. In 2020, nearly 40,000 transplants were performed in the U.S. alone and more than 100,000 Americans currently have their names on transplant waiting lists. Transplant tourism, whereby ill patients travel to countries with less-stringent regulations to purchase hard-to-source organs, is on the rise.

The ethics are, naturally, complicated. While we justly condemn a traffic in bodies, we are no longer shocked by the removal of organs, even beating hearts, from brain-dead victims. This is medical progress; we can now save lives that once would be forfeit. The medical and legal criteria for brain death may have remained largely consistent since the 1960s, but our cultural expectations have changed. Our medical institutions are held to the highest standards of ethics, but faced with the impending death of a parent or child, how much might we as individuals be willing to overlook?

The harvest continues.

Brandy Schillace, Ph.D., is a historian, author, and editor in chief of BMJ’s Medical Humanities journal. Schillace’s nonfiction books include “Mr. Humble and Dr. Butcher,” “Death’s Summer Coat,” and “Clockwork Futures.” Find Brandy on Twitter @bschillace

A version of this article was originally posted at Undark and has been reposted here with permission. Undark can be found on Twitter @undarkmag

Which provides better long-term protection against coronavirus reinfection: An actual infection or a vaccine?

Two recent studies have confirmed that people previously infected with SARS-CoV-2, the virus that causes COVID-19, can be reinfected with the virus. Interestingly, the two patients had different outcomes. The person in Hong Kong showed no symptoms on the second infection, while the case from Reno, Nevada, had more severe disease the second time around. It is therefore unclear if an immune response to SARS-CoV-2 will protect against subsequent reinfection.

Does this mean a vaccine will also fail to protect against the virus? Certainly not. First, it is still unclear how common these reinfections are. More importantly, a fading immune response to natural infection, as seen in the Nevada patient, does not mean we cannot develop a successful, protective vaccine.

Any infection initially activates a non-specific innate immune response, in which white blood cells trigger inflammation. This may be enough to clear the virus. But in more prolonged infections, the adaptive immune system is activated. Here, T and B cells recognise distinct structures (or antigens) derived from the virus. T cells can detect and kill infected cells, while B cells produce antibodies that neutralise the virus.

Credit: IAES

During a primary infection – that is, the first time a person is infected with a particular virus – this adaptive immune response is delayed. It takes a few days before immune cells that recognise the specific pathogen are activated and expanded to control the infection.

Some of these T and B cells, called memory cells, persist long after the infection is resolved. It is these memory cells that are crucial for long-term protection. In a subsequent infection by the same virus, the memory cells get activated rapidly and induce a robust and specific response to block the infection.

A vaccine mimics this primary infection, providing antigens that prime the adaptive immune system and generating memory cells that can be activated rapidly in the event of a real infection. However, as the antigens in the vaccine are derived from weakened or noninfectious material from the virus, there is little risk of severe infection.

A better immune response

Vaccines have other advantages over natural infections. For one, they can be designed to focus the immune system against specific antigens that elicit better responses.

For instance, the human papillomavirus (HPV) vaccine elicits a stronger immune response than infection by the virus itself. One reason for this is that the vaccine contains high concentrations of a viral coat protein, more than what would occur in a natural infection. This triggers strongly neutralising antibodies, making the vaccine very effective at preventing infection.

HPV virus. Credit: MD Anderson

The natural immunity against HPV is especially weak, as the virus uses various tactics to evade the host immune system. Many viruses, including HPV, have proteins that block the immune response or simply lie low to avoid detection. Indeed, a vaccine that provides accessible antigens in the absence of these other proteins may allow us to control the response in a way that a natural infection does not.

The immunogenicity of a vaccine – that is, how effective it is at producing an immune response – can also be fine-tuned. Agents called adjuvants typically kick-start the immune response and can enhance vaccine immunogenicity.

Alongside this, the dose and route of administration can be controlled to encourage appropriate immune responses in the right places. Traditionally, vaccines are administered by injection into the muscle, even for respiratory viruses such as measles. In this case, the vaccine generates such a strong response that antibodies and immune cells reach the mucosal surfaces in the nose.

However, the success of the oral polio vaccine in reducing infection and transmission of polio has been attributed to a localised immune response in the gut, where poliovirus replicates. Similarly, delivering the coronavirus vaccine directly to the nose may contribute to a stronger mucosal immunity in the nose and lungs, offering protection at the site of entry.

The oral polio vaccine elicits an immune response in the gut. Credit: Rehan Khan/EPA

Understanding natural immunity is key

A good vaccine that improves upon natural immunity requires us to first understand our natural immune response to the virus. So far, neutralising antibodies against SARS-CoV-2 have been detected up to four months after infection.


Previous studies have suggested that antibodies against related coronaviruses typically last for a couple of years. However, declining antibody levels do not always translate to weakening immune responses. And more promisingly, a recent study found that memory T cells triggered responses against the coronavirus that causes Sars almost two decades after the people were infected.

Of the roughly 320 vaccines being developed against COVID-19, one that favours a strong T cell response may be the key to long-lasting immunity.

Maitreyi Shivkumar is a Senior Lecturer in Molecular Biology in the Leicester School of Pharmacy at De Montfort University. Her research interests are in understanding host-pathogen interactions, with a focus on virology. Find Maitreyi on Twitter @Lambananas

A version of this article was originally posted at the Conversation and has been reposted here with permission. The Conversation can be found on Twitter @ConversationUS

Live to 150? That’s what some AI algorithms claim is possible. What does the science say?

We’re obsessed with aging.

In the quest to prolong life while remaining healthy, people have tried everything from turtle soup to owl meat to drinking human blood.

Russian-French microbiologist and Nobel Prize winner Ilya Mechnikov believed that a person could live 150 years with the help of a steady diet of milk cultured with bacteria. (He died at 71.)

Ilya Mechnikov in his laboratory. Credit: Gallica Digital Library

It’s a favorite topic of Hollywood. From Cocoon to Death Becomes Her to Chronos, the quest to extend our limited time on this planet has been a favorite focus of science fiction. Now it’s edging closer to science fact.

A new report in Nature Communications from researchers at artificial intelligence company Gero.AI indeed points to a maximal human lifespan of 150. And they’ve pioneered a metric that might one day pop up on a smartphone to indicate an individual’s state of aging – something more meaningful, in terms of future health, than counting gray hairs or celebrating birthdays.

They call it the “dynamic organism state indicator,” or DOSI score.

Obsession with lost youth

For evidence of the pervasive focus on youthfulness, just look to films over the years. Most feature the young, with a smattering of “older people have fun too” plots, like 2003’s Something’s Gotta Give, with crusty Jack Nicholson enjoying the company of his contemporary Diane Keaton.

Even fewer films are brave enough to juxtapose generations with a hint of romance. In 1971’s Harold and Maude, a man in his early twenties befriends a 79-year-old woman. The reverse occurs in Woody Allen’s 1979 film Manhattan, in which his character, age 42, dates high-schooler Mariel Hemingway. Both films are disturbing.

A sci-fi approach turns the tables on aging. In the 1890 novel The Picture of Dorian Gray, by Oscar Wilde, the protagonist, a beautiful man who wishes to stay that way, transfers his signs of aging to a portrait. He remains young. And in 2008’s The Curious Case of Benjamin Button, starring Brad Pitt and based on a short story by F. Scott Fitzgerald, a man ages backwards.

Aging has been reimagined as a loss of resiliency, as opposed to encroaching decrepitude. Stresses that can counter resilience include poor nutrition, not enough sleep, and difficulty recovering from illness or injury.

However we describe aging, it’s enormously complex, a waxing and waning of biochemical activities that drive the biological changes in a living organism over time. And no anti-aging product can undo those changes without a time machine.

Rebranding aging

The researchers at Gero.AI sought a way to summarize the biochemical changes that accompany experiencing the stresses and sicknesses that come with time. They selected routine blood test results as the source of data to track, because characteristics such as hemoglobin levels and red blood cell counts and sizes reflect metabolism and oxidative stress, which underlie effects of aging felt at the organ and whole-body levels.

Gero team. Konstantin Avchaciov, Peter Fedichev, and Olga Burmistrova. Credit: Gero

Artificial intelligence looks for trends in data from an initial large group, then tests to see if the findings are validated in another group. The Gero.AI researchers tracked blood test data initially in more than half a million people who participate in the UK Biobank or the US National Health and Nutrition Examination Survey (NHANES). Those results led to the DOSI score.

Then the investigators applied the values to a third group, about 1,000 people who’d submitted blood samples to a large diagnostic testing laboratory. The correlations held up; that is, DOSI scores aligned with, and predicted, the ability to recover from stress (resilience) as well as the appearance of chronic diseases such as cancer, heart disease, and diabetes.
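The discover-then-validate workflow described here — fit a score on one large cohort, then check that it still predicts outcomes in an independent cohort — can be illustrated with a minimal sketch. Everything below is illustrative: the data are synthetic, and the plain least-squares linear score stands in for Gero.AI’s actual (unpublished here) modeling pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_cohort(n):
    """Synthetic 'blood panel': 5 markers plus a noisy outcome (e.g. resilience)."""
    X = rng.normal(size=(n, 5))
    true_w = np.array([0.8, -0.5, 0.3, 0.0, 0.2])  # hidden marker-outcome weights
    y = X @ true_w + rng.normal(scale=0.5, size=n)
    return X, y

# Discovery cohort: fit a simple linear score by least squares.
X_train, y_train = make_cohort(5000)
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# Independent validation cohort: does the fitted score still track the outcome?
X_val, y_val = make_cohort(1000)
pred = X_val @ w
r = np.corrcoef(pred, y_val)[0, 1]
print("validation correlation:", round(r, 2))
```

The point of the second cohort is exactly what the article describes: a score that only fits the discovery data would show a weak correlation here, while a genuine marker-outcome relationship holds up out of sample.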

An example of using DOSI is to look at groups defined by a risk factor that affects lifespan, such as smoking. Average DOSI scores were about the same between people who’d never smoked and those who’d quit, and both were significantly lower than those of people who continued smoking. Quitting smoking buys healthy time.

“Every person experiences stress such as getting sick, or not having enough sleep, occasionally during their life. Our work now demonstrates that the rate of recovery from stress factors can be a predictor of lifespan,” said Tim Pyrkov, PhD, head of mHealth R&D at the company.

For now DOSI is just a research tool, not a new anti-aging buzzword, Pyrkov said. He envisions DOSI scores used in studies to address various lifestyle interventions, such as nutritional supplements or an exercise plan. If valuable, the measure may find its way into medical records in the same way as cholesterol levels, he added.

But blood test monitoring is only the start, the proof-of-principle that it’s possible to predict the duration of healthy aging. “Wearable sensor data may be a handy alternative to invasive tests in obtaining a longitudinal set of data to calculate DOSI and resilience,” said Pyrkov. Step counts, heart rate, temperature, oxygen saturation, and blood pressure readings are packed with information.

Screenshot of Gero Lifespan App. Credit: Gero

The research from Gero.AI also predicted the theoretical limit of the human lifespan, an ideal that can only occur in the total absence of stress: 120 to 150 years. Perhaps health care should focus more on sustaining resilience than alleviating chronic disease, Pyrkov suggests.

Another company that tracks aging metrics uses more conventional measures.

Aging clocks and geroprotectors

Deep Longevity claims it can assess “biological age rates,” essentially coming up with what it calls an individualized “aging clock.” The company offers Biological Age Reports that reflect data from blood tests, photo-aging (aka wrinkles), behavioral changes, gene expression measures (transcriptome and methylation data), microbiome characteristics, and psychological and heart health. The company gurus then compare age as estimated from this multifaceted input to chronological age, and provide suggestions and “personal coaches” to guide improvement of health and possible life extension.

“Our mission is to extend healthy productive longevity and increase human performance,” states the website.

Credit: Deep Longevity

In a sample of its ‘Biological Age Report’, a hypothetical person appears to be aging ahead of schedule. With that information, the next step might be advice to consult Geroprotectors, a “database of current therapeutic interventions in aging and age-related disease,” which “integrates information about lifespan-increasing experiments and related compounds, suppression of aging mechanisms, activation of longevity mechanisms and age-related diseases obtained from research papers and databases.”

A “geroprotector” is “any intervention that aims to increase longevity, or that reduces, delays or impedes the onset of age-related pathologies by hampering aging-related processes, repairing damage or modulating stress resistance.” The database lists 259 chemicals that have been shown in experiments to extend life. Some are familiar, like aspirin, bacitracin, caffeine, turmeric curcumin, and maltose. But only 3 of the 259 entries reflect reports on people, for glucosamine, magnesium, and lithium. The rest come from work on fruit flies, roundworms, rats, or yeast.

A positive perspective on aging  

The assumption behind “life extension” is that living as long as possible is everyone’s goal. I don’t think that’s true. And I suspect it is mostly younger folk who are pioneering and promoting many of these efforts to track aging using signposts of molecular and cellular aging.

One company’s website boasts a crew of thirty-somethings festooned in tee-shirts that read “hacking aging.” But is that possible?

A score that predicts 30 or 40 or 50 years of theoretical good health ahead can’t account for falling victim to gun violence, dying in a war, or being whacked in the head with a flying board during a tornado. It’s an ideal. I can imagine someone glancing down at aging measurements on a smartphone and walking into the path of an oncoming truck.


Another fallacy of trying to slow, stall or reverse time is that getting older isn’t all that bad for many people.

An older person can get into many places or use services for free or at a deep discount (thank you, AARP), may have accumulated some wealth, had a satisfying career and raised a family, and, most importantly, has insight, knowledge, and perspective that can only accrue with direct experience — not modeling of someone’s ideal of aging gracefully while maintaining a youthful appearance.

How can the people in the “hacking aging” tee-shirts possibly evaluate what it is like to be 60? Or 80? To quote from the old Simon and Garfunkel song Old Friends, “How terribly strange to be 70.”

For age creeps up on you. With luck and good genes, you won’t notice that you can’t rise from rummaging through a bottom drawer without the aid of a chiropractor, or the gradual appearance of wrinkles and gray hair, or forgetfulness — not for a long time.

Ricki Lewis has a PhD in genetics and is a science writer and author of several human genetics books. She is an adjunct professor for the Alden March Bioethics Institute at Albany Medical College. Follow her at her website or Twitter @rickilewis

Viewpoint: Organic lobbyists show ‘sheer hypocrisy’ opposing UK emergency authorization of neonicotinoid sugar beet seed treatments — while supporting environmental waiver for ‘acutely toxic’ copper sulfate

The Government was right to make provision for a temporary and limited derogation for the use of the neonicotinoid seed treatment Cruiser SB on sugar beet for the 2021 season, although the colder conditions of recent months mean it will not be required this year.

The impact of virus yellows on last year’s beet crop for many growers was absolutely devastating and explains why the UK, after resisting in previous years, followed 13 EU member states in granting this emergency derogation. But I cannot help calling out the sheer hypocrisy of those in the organic and anti-pesticide lobby who portrayed the Government’s decision as heralding the extinction of all bees and other pollinating insects.

Sugar beets being harvested. Credit: T&T Seeds

Firstly, sugar beet is not a crop greatly frequented by pollinators and, secondly, the derogation granted was subject to highly restrictive conditions precisely to minimize adverse impacts on biodiversity.

What these campaigners conveniently overlook is that a similar emergency derogation was granted by Defra earlier in 2020 to spray copper hydroxide as a blight fungicide on organically grown potatoes. This was expressly advised against by the Expert Committee on Pesticides due to environmental concerns over acute aquatic toxicity.

My point here is not simply to take a cheap shot at the organic lobby — although I do think they have a case to answer when promoting their approach as ‘pesticide-free’, which manifestly it is not — but rather to emphasize that unless we are prepared to let our food crops rot in the fields, or to become vectors for harmful and potentially lethal mycotoxins spread by insect pests and crop infections, then we must enable technologies which control those pests and diseases.

Mycotoxins are naturally occurring toxins produced by certain moulds and fungi. Black mold fungi such as Aspergillus produce aflatoxins and can cause the pulmonary infection aspergillosis. Credit: Getty Images

The same risk of infestation applies whether crops are grown conventionally or organically. Following a public consultation earlier this year, the Government is currently considering whether to allow innovative breeding technologies such as gene editing, which offer faster development of crop varieties with better and more durable pest and disease resistance.

The development of virus yellows resistance in sugar beet is a case in point. Research funded by Innovate UK points to promising genetic sources of virus yellows resistance in sugar beet.


I understand that integrating these novel sources of virus yellows resistance into elite beet varieties using conventional breeding could take 10-12 years, but with gene editing it could take as little as two to three years. The need for pesticides would be much reduced, and similar approaches could be envisaged for genetic control of late blight in potatoes.


The outcome of the consultation will determine whether consumers can enjoy the benefit of these truly game-changing technologies with the potential to cut costs, cut pesticide use, improve food security and enhance biodiversity, while underlining the post-Brexit opportunity to resume our role as a world leader in innovation. I suspect many consumers would be concerned to learn that certified organic potatoes on sale in our supermarkets had been treated with a banned fungicide classed as toxic to aquatic life.

Like others before me I would therefore urge the organic lobby to keep an open mind on the potential of these precision breeding techniques to enable more environmentally friendly farming and food production in future.

Matt Ridley is a British journalist and businessman. He is the author of several books, including How Innovation Works: And Why It Flourishes in Freedom. Follow him on Twitter @mattwridley

A version of this article was originally posted at Farmer’s Guardian and has been reposted here with permission. Farmer’s Guardian can be found on Twitter @FarmersGuardian

How important is soil quality to addressing climate change? It’s critical, and here’s a blueprint for refocusing American R&D

COVID-19 and the resulting economic crisis have highlighted and exacerbated the plight of American farmers, while also threatening already dwindling public funding for agricultural R&D innovation — a significant driver of agricultural productivity and farmer incomes. Publicly funded soil science R&D has the potential to greatly increase farm productivity while simultaneously reducing the greenhouse gas emissions and environmental impacts of US agriculture.

Research into the soil microbiome has already led to biotech developments capable of improving crop yields while lowering input use and reducing fertilizer-related emissions and runoff — the full use of existing microbial biotech products can reduce GHG emissions from agriculture by 16 MMT CO2e/year. Biofertilizers, microbial inoculants, and other soil microbiological developments can help farmers achieve long-term goals of economic viability and environmental sustainability.

While there has been much political and activist interest in using soil carbon sequestration as a source of carbon offsets and agricultural emissions mitigation — cover crops, for example, have the potential to reduce emissions by around 100 MMT CO2e/year — agricultural practices that sequester carbon require further research to validate their benefits.

By creating an interagency soil science initiative, and utilizing the benefits of mission-oriented research programs, the government would create an environment capable of long-term innovation in soil microbiology, improve our understanding of soil carbon sequestration, and ensure a prosperous and sustainable future for American agriculture.


US agriculture has been hit hard by COVID-19. Farmers face falling prices, shrinking export markets, and increasingly hazardous labor conditions and shortages. These synchronous agricultural crises place rural and semi-rural communities in dire straits.1

While the difficulties facing US agriculture have no silver-bullet solution, agricultural research and development (R&D) is an important step toward boosting economic outcomes for farmers while combating climate change. Historically, publicly funded agricultural R&D played a key role in increasing productivity, economic development, and American pre-eminence among global agricultural producers.2

Today, R&D related to soil science is a promising option to help address these multifaceted challenges in agriculture. Advances in soil science, particularly regarding soil microbiomes, have the potential to increase farm yields and income while reducing nitrogen runoff, improving soil health, sequestering carbon, and reducing greenhouse gas emissions.3 To maximize these benefits, the federal government should create and invest in a soil science initiative.

The soil microbiome is in constant contact with plants. Credit: RAW Lab

Such an initiative should be aimed at the development of technological breakthroughs in soil microbiology through the creation of a soil microbiome research hub, and the identification and validation of soil carbon sequestration practices through academic or public research. Increased funding and coordination of soil science in the United States can lead to both short-term environmental and economic benefits, and long-term, sustainable, green growth for American agricultural producers.

Current federal funding for soil science R&D

Current federal funding for soil science remains modest and without clear priorities. In total, the US federal government funds around $180 million of soil science R&D per year, equivalent to only 0.1% of total US federal R&D spending.4 This funding comes from several agencies and organizations: the National Institute of Food and Agriculture (NIFA), the Agricultural Research Service (ARS), the Foundation for Food and Agriculture Research (FFAR), the National Science Foundation (NSF), and the Department of Energy (DOE). Of these organizations, only ARS and FFAR list “soils” as a priority or “challenge” area of research. By contrast, NIFA’s FY2014-FY2018 strategic plan mentions “soils” only once, while mentioning “bioenergy” seven times.5

Credit: NSF

Federal programs for soil science R&D also suffer from a lack of coordination. For example, NIFA, NSF, FFAR, DOE, and ARS all fund research into soil sensor technologies, effectively funneling research dollars into a subset of soil scientists instead of funding research and development across the spectrum of the discipline. While a lack of coordination does not preclude advances in the field, it severely limits short-term R&D output and the long-term potential for scientific and technological breakthroughs.

With modest resources concentrated in few research areas, there is little reason to expect groundbreaking research.

The case for a soil science research initiative

To revitalize US agriculture, the federal government should create a research initiative centered solely on soil science R&D, and fund it accordingly. This initiative could work to coordinate priorities across federal agencies and other stakeholders — such as the Tri-Societies, land-grant university research centers, federal laboratories, and producer networks — while funding and advancing R&D programs in-house.

Policymakers ought to learn from the latest innovation policy research and design the initiative to follow a mission-oriented and applied research framework.6,7,8

Scholarship has attributed the success of mission-oriented research to two main factors. First, mission-oriented research directs focus on “concrete societal problems that can only be solved by multiple sectors interacting in new ways,” rather than funding basic research without a clear, direct application.9 The success of the Defense Advanced Research Projects Agency (DARPA) in spurring innovation both inside and outside of the military, for example, demonstrates the ability of mission-oriented research to set agendas and bring stakeholders together to target a specific challenge.10

Second, mission-oriented research initiatives build and foster networks capable of solving problems and innovating.11 For example, the Human Genome Project, a multi-institutional effort funded by the National Institutes of Health and the Department of Energy to map the human genome, succeeded in large part because of the decision to construct a network of researchers and organizations across state and national borders.12 Mission-oriented research initiatives can effectively avoid “network failures” — instances “when economic actors are unable to find appropriate network partners who are both competent and trustworthy.”13 Because innovation occurs through networked organizations and collaborations — especially technological and systematic innovation — network failures can stall and dampen innovation.

The incredibly long pathway to understanding the human genome. Credit: Sarah Brotman/Manisit Das

A federal soil science initiative could either be housed within one of the main R&D funding organizations, such as the NSF or NIFA, or stand alone. The recently authorized Agriculture Advanced Research and Development Authority (AGARDA) is another potential home.

While the initiative should focus on a number of pressing issues related to American soils, two issues stand out: soil microbiology and soil carbon sequestration.

Centralizing soil microbiome R&D

Soil microbiome R&D has the potential to reduce the environmental impacts of agriculture while raising productivity and improving the economic viability of American farming. Despite this potential, public investment in soil microbiology and its impacts on agricultural applications remains low. If established, the soil science initiative should target soil microbiome R&D as a central priority in order to capitalize on its environmental and economic benefits.

Soil microbiology is a potential key to long-term gains in agricultural productivity, resilience, and profitability. Research into the relationship between soil microbiota and plants has the potential to revolutionize nutrient inputs for crop agriculture, drastically improve productivity, and increase carbon sequestration rates through biotechnological and other advances.14

Research into soil microbiomes has already led to viable products that improve nitrogen use efficiency and reduce the need for synthetic fertilizers, lowering emissions from both fertilizer production and use.15 For instance, Pivot Bio, a start-up based in the Bay Area, has engineered a nitrogen-producing microbe that farmers can apply to corn to increase yields, reduce fertilizer application rates, and cut fertilizer-related GHG emissions and leaching of nitrates into waterways.16 If Pivot Bio’s biotech products were used wherever possible, US agriculture would emit roughly 16 MMT CO2e/year less.17,18

Pivot Bio’s fertilizer. Credit: Pivot Bio

On top of direct emissions reductions, biofertilizers and other soil microbiome products have the potential to limit nitrogen runoff from farms. Currently, runoff from agriculture is responsible for eutrophication and biodiversity loss throughout the waterways and wetlands of the United States, causing tens to hundreds of billions of dollars in health, economic, and environmental costs.19

Soil microbiome R&D also has the potential to help improve the economic viability of American agriculture. Research into the soil microbiome has already returned findings that can improve the drought resistance of soils and crops, helping farmers protect their investments and ensure a viable product.20,21,22 Similarly, research has shown that soil microbial interventions can help improve crop disease resistance and reduce the need for synthetic fertilizer, thereby increasing farmer revenue while reducing input costs.23,24

The short-term benefits ultimately pale in comparison to the long-term possibilities of soil microbiome R&D. Although advances in metagenomics have led to an increased understanding of the soil microbiome, many soil microbes have yet to be identified and understood.25 Further research into the composition and function of soil microorganisms is needed to enable biotechnology and other interventions into the soil microbiome.26,27,28,29,30 These interventions, whether the result of genetic engineering or other microbiome manipulations, have the potential to drastically reduce fertilizer use while improving soil health and crop yields.31 Still, these breakthroughs are contingent on broad, interdisciplinary networks that incorporate work in microbiology, metagenomics, microbial physiology, and more.32

While current soil microbiome research funding exists through the Department of Energy, Office of the Chief Scientist, and USDA, the field remains decentralized and underfunded. Centering soil microbiome R&D within a broader soil science initiative could foster a research environment capable of producing key breakthroughs that would benefit US farmers and the environment in the short and long term.

One possible solution could be to create a specific soil microbiome research hub, funded through the broader soil science initiative, that could connect researchers across disciplines to an innovation infrastructure and research facility aimed at achieving the long-term productivity and environmental goals of soil microbiome R&D. By organizing a network of researchers, private organizations, and agricultural producers around the problems and questions related to soil microbiology, a centralized research hub could limit “network failures” and help actualize the potential of soil microbiome technologies. This hub could be located within a national laboratory, at an ARS research facility, or as a federally funded research center at a land-grant university.

Centralizing soil carbon sequestration research

Farming practices that sequester CO2 in soils have high potential to benefit farmers and the environment. Full adoption of one such practice — cover cropping — by American farmers can sequester around 100 MMT CO2e/year, or roughly 1/6th of total US agricultural emissions.33,34 Soil sequestration practices have also been shown to improve soil health and drought resistance, and thus can increase crop yields and potentially reduce yield variability.35,36,37 This perceived potential has spurred policymakers to support paying farmers to sequester carbon, such as through the Growing Climate Solutions Act and Natural Carbon Sequestration Act.38
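The “roughly 1/6th” figure is easy to sanity-check. A minimal arithmetic sketch, assuming a round 600 MMT CO2e/year for total US agricultural emissions (an assumed figure in the range reported by the EPA inventory cited in the endnotes; the exact value varies by year):

```python
# Back-of-the-envelope check of the cover-cropping potential cited above.
# TOTAL_AG_EMISSIONS is an assumed round number, not a figure from this article.
COVER_CROP_POTENTIAL = 100  # MMT CO2e/year sequestered (Fargione et al.)
TOTAL_AG_EMISSIONS = 600    # MMT CO2e/year, assumed for illustration

share = COVER_CROP_POTENTIAL / TOTAL_AG_EMISSIONS
print(f"Cover crops could offset about {share:.1%} of US agricultural emissions")
```

With these assumptions, the share comes out to about 16.7%, i.e. roughly one-sixth.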

Yet, despite the breadth of possibility, further federally funded research is needed to maximize soil carbon sequestration on US farmlands.39,40 Specifically, research questions remain on how efficacious certain practices are, both in terms of carbon sequestration and in terms of soil health or yield improvements; how long carbon remains in the soil once sequestered; and whether measurement techniques exist to properly account for soil carbon levels.41,42,43


While the sequestration and soil health benefits of certain practices, namely cover cropping, are fairly well established, other practices, such as no-till, biochar amendment, or genetically modified crops targeted at soil sequestration, require further research and development.44,45,46 This research will be fundamental to establishing soil sequestration practices across US agriculture. Questions related to both the long-term decomposition and stabilization of soil carbon and the validity of soil carbon measurement are integral to developing effective programs to increase sequestration.47 If sequestered soil carbon decomposes over time, or cannot be accurately measured with current technologies, farmers cannot maximize carbon sequestration levels, limiting total emissions reductions; offsets from carbon farming would then lead corporations and other offset purchasers to misstate their actual emissions.48,49

The lack of consistent and accurate measurements also imperils the bipartisan “Growing Climate Solutions Act” recently introduced in the Senate. The act aims to connect agricultural producers to technical assistance programs and private carbon sequestration incentives, such as Indigo Ag’s carbon offset market.50 Such a market, or similar initiatives to pay farmers for stewarding the environment, could provide much-needed income support for farmers, but currently requires better data on soil carbon to deliver guaranteed cost-effective mitigation. Continued research on soil carbon measurement and the effect of different practices on soil carbon would complement the act, generating the knowledge and data needed for programs and offset markets to direct funds toward where they result in the most sequestration.

A soil science initiative could centralize carbon sequestration R&D under a single roof. This centralized approach would bring scientists, producers, and policymakers together to better understand the long-term effectiveness of sequestration practices and the policy options that could best take advantage of those practices. At the same time, a soil science initiative could establish a universal framework for the measurement and validation of soil carbon levels, enabling carbon offset payments and ecosystem service incentives that both provide economic benefits to producers and mitigate GHG emissions.

Cross-cutting benefits of a soil science research initiative

With the dual challenges of economic hardships and environmental degradation matched by the recent contraction of American agricultural R&D investment, US farmers face real struggles. Historically, US public investment in agricultural R&D has driven the economic and environmental progress of the American agricultural system.51 If US policymakers seek to maintain the success and competitiveness of US agriculture, they ought to invest in and foster agricultural innovation. By creating and funding a soil science research initiative that seeks breakthroughs in soil microbiology, policymakers can do just that.

End notes:

1 John Newton, “Coronavirus Sends Crop and Livestock Prices in a Tailspin,” Farm Bureau, April 7, 2020.
2 Sun Ling Wang, Paul Heisey, David Schimmelpfennig, and Eldon Ball, Agricultural Productivity Growth in the United States:
Measurement, Trends, and Drivers, (Washington, DC: USDA ERS, 2015).
3 Science Breakthroughs to Advance Food and Agricultural Research by 2030, (Washington, DC: The National Academies
Press, 2018).
4 Federal Research and Development (R&D) Funding: FY2020, (Washington, DC: Congressional Research Service, 2020).
5 National Institute of Food and Agriculture Strategic Plan: FY2014-FY2018, (Washington, DC: 2014).
6 Matteo Deleidi et al., The macroeconomic impact of government innovation policies: A quantitative assessment, (London, UK:
UCL Institute for Innovation and Public Purpose, 2019).
7 Mariana Mazzucato, The Entrepreneurial State: Debunking Public vs. Private Sector Myths, (London, UK: Penguin, 2018).
8 Daniel Sarewitz, “Saving Science,” The New Atlantis 49, (Summer 2016).
9 Matteo Deleidi et al., The macroeconomic impact of government innovation policies, 5.
10 Matteo Deleidi et al., The macroeconomic impact of government innovation policies, 5.
11 Fred Block and Matthew R. Keller, “Where do Innovations Come From? Transformations in the U.S. Economy, 1970-2006,”
in Knowledge Governance: Reasserting the Public Interest, edited by Leonardo Burlamaqui, Ana Celia Castro and Rainer Kattel (New
York: Anthem Press, 2011).
12 Block and Keller, 5-6.
13 Block and Keller, 20.
14 Science Breakthroughs, 117-118.
15 Science Breakthroughs, 117-119.
16 “How It Works,” Pivot Bio, accessed July 10, 2020,
17 Inventory of U.S. Greenhouse Gas Emissions and Sinks, (Washington, DC: EPA, 2020).
18 “Performance Report,” Pivot Bio, accessed July 20, 2020,
19 Sobota, Daniel J., Jana E. Compton, Michelle L. McCrackin, and Shweta Singh. “Cost of reactive nitrogen release from human
activities to the environment in the United States.” Environmental Research Letters 10, no. 2 (2015).
20 Anamika Dubey et al.,“Growing more with less: Breeding and developing drought resilient soybean to improve food security,”
Ecological Indicators 105, (October 2019).
21 Anamika Dubey et al., “Soil microbiome: A key player for conservation of soil health under changing climate,” Biodiversity
and Conservation, (April 2019).
22 Science Breakthroughs, 117-118.
23 Deepak Bhardwaj et al., “Biofertilizers function as key player in sustainable agriculture by improving soil fertility, plant tolerance and crop productivity,” Microbial Cell Factories 13, (2014).
24 Science Breakthroughs, 117-119.
25 Science Breakthroughs, 117.
26 Science Breakthroughs, 118.
27 Janet Jansson and Kirsten Hofmockel, “The Soil Microbiome — from metagenomics to metaphenomics,” Current Opinion in
Microbiology 43, (June 2018).
28 Posey E. Busby et al., “Research priorities for harnessing plant microbiomes in sustainable agriculture,” PLOS Biology 15(3),
(March 2017).
29 Noah Fierer, “Embracing the unknown: disentangling the complexities of the soil microbiome,” Nature Reviews Microbiology
15, (2017).
30 Yu Cao et al., “A Review of the Applications of Next Generation Sequencing Technologies as Applied to Food-Related Microbiome Studies,” Frontiers in Microbiology, (September 2017).
31 Science Breakthroughs, 117-119.
32 Science Breakthroughs, 118.
33 Joseph E. Fargione et al., “Natural climate solutions for the United States,” Science Advances 4(11), (November 2018).
34 Inventory of U.S. Greenhouse Gas Emissions and Sinks, (Washington, DC: EPA, 2020).
35 Cover Crop Survey: September 2017, (Washington, DC: SARE, 2017).
36 Rattan Lal, “Soil health and carbon management,” Food and Energy Security 5(4), (November 2016).
37 “Soil Health and Yield Stability,” Soil Health Institute, Accessed July 28, 2020.
38 “H.R. 7393 – Growing Climate Solutions Act of 2020,” Accessed August 7, 2020.
39 Janet Ranganathan, Richard Waite, Tim Searchinger, and Jessica Zionts, “Regenerative Agriculture: Good for Soil Health, but
Limited Potential to Mitigate Climate Change,” World Resources Institute, May 12, 2020.
40 Keith Paustian et al., “Climate Mitigation Potential of Regenerative Agriculture is Significant!,” https://static1.squarespace.
41 Negative Emissions Technologies and Reliable Sequestration, (Washington, DC: National Academies of Science, 2019).
42 Alex Smith and Dan Blaustein-Rejto, “The Limits of Soil Carbon Sequestration,” The Breakthrough Institute, March 9, 2020.
43 Johannes Lehmann et al., “Persistence of soil organic carbon caused by functional complexity,” Nature Geoscience 13, (2020).
44 Fargione, “Natural Climate Solutions”
45 Negative Emissions Technologies, (Washington, DC: National Academies of Science, 2019).
46 “The Salk Institute’s Response to the House Select Committee on the Climate Crisis’ Request for Information,” The Salk Institute,
47 Negative Emissions Technologies, (Washington, DC: National Academies of Science, 2019)
48 Smith and Blaustein-Rejto, “The Limits of Soil Carbon Sequestration.”
49 James Temple, “Why we can’t count on carbon-sucking farms to slow climate change,” MIT Technology Review, June 3, 2020.
50 “Growing Climate Solutions Act set to be introduced in U.S. Senate,” Senator Mike Braun, Accessed August 9, 2020, https://
51 Wang et al., Agricultural Productivity Growth in the United States

Alex Smith is a food and agriculture analyst at the Breakthrough Institute. He has completed a dual MA/MSc in International and World History from Columbia University and the London School of Economics and Political Science. Find Alex on Twitter @alexjmssmith

Dan Blaustein-Rejto is a food and agriculture analyst at the Breakthrough Institute. He holds a Master’s in public policy from the University of California Berkeley. Find Dan on Twitter @danrejto

A version of this article was originally published at The Breakthrough Institute and has been republished here with permission. The Breakthrough Institute can be found on Twitter @TheBTI

Africa cracks down on ravenous locust swarms by ignoring Greenpeace’s anti-pesticide rhetoric

As nations around the world struggle with COVID-19 and related economic lockdowns, African countries are fighting another plague: swarms of locusts. Massive crop damage during 2020 left many Africans hungry, and new swarms now threaten 2021 crops. Fortunately, some national and local governments are making headway in getting the locusts under control through aerial spraying of pesticides, despite challenges created by anti-pesticide pressure groups.

As I noted last year, Greenpeace and other green activist groups have made locust and other pest control operations more expensive and increasingly difficult to manage because they push pesticide bans and regulations that reduce options and raise costs. People living in developing countries suffer the most.

Credit: FAO

Nonetheless, with the help of humanitarian organizations, some nations and localities have been able to deploy aerial pesticides with the few remaining products on the market, and they are seeing some successes—underscoring the essential role of aerial spraying.

A recent situation report from the Food and Agriculture Organization (FAO) explains that things have improved in Kenya because of aerial spraying, in addition to weather less favorable to the locusts:

“The present situation in the Horn of Africa differs significantly from one year ago. The current swarms are smaller in size and less numerous. So far, the swarms have not matured or laid eggs. Very little rain has fallen since the end of the short rains last year. Intensive aerial control operations, supported by ground teams, are well-established and making good progress in reducing locust infestations.”

There is also positive news from Tanzania. A story in the Tanzania Daily News reports that in the Siha district “huge swarms of desert locusts invaded Tanzania’s northern Kilimanjaro region from Kenya, darkening horizons and causing panic among farmers, who feared destruction of their crops.” The government was quick to respond, “deploying special planes to spray pesticides in the affected areas.”

A local public official reports on progress:

“Siha is now free from desert locusts; there is no single of them that is alive in our district. How good it is to have proper communication, for the challenge arose, growers panicked but informed us, in turn we communicated the issue to the Ministry of Agriculture that acted quickly and now the challenge is over.”

Such efforts must continue as locust swarms emerge throughout the spring and spread across Africa and into Asia. According to the FAO, 42 million people now face “severe acute food insecurity” in locust-infested regions. Somalia has not had enough resources to pay for spraying, and reports now indicate that 3 million people there are suffering from food shortages. Somalia’s public officials have declared a state of emergency in the hope of gaining financial support from the FAO to pay for spraying.

Yet Greenpeace continues to advocate against the last few effective pesticide products remaining on the market—such as chlorpyrifos—making unwarranted claims about the risks of spraying. The group suggests that governments should rely on “biopesticides” instead. It is true that biopesticides and other technologies can play a role now and in the future, but they have serious limitations that Greenpeace fails to mention.

“Compared with chemicals … the biopesticide takes longer to kill the locusts, so it is more useful before the hopper bands of young locusts have begun to fly,” explained an FAO representative in an article for Science Daily. The article also points out that addressing large locust swarms “requires fast-acting chemical pesticides sprayed from aircraft.” Ideally, Africans would eventually have affordable controls that would prevent the swarms, but until they do, aerial pesticide spraying remains the essential means of control, particularly during massive swarm situations.

Accordingly, Greenpeace’s suggestion that biopesticides could replace traditional pesticides is similar to green activists’ advocacy of renewable energy sources and demonization of more reliable energy sources, such as coal. When policy makers followed that advice, the result was a far less reliable electricity grid—one that fails when it’s needed most—as Texans recently learned.


In the final analysis, locust control officials need the freedom to leverage the best tools for each situation. Factors such as accessibility, affordability, and effectiveness play a major role in their very situation-specific decisions. It is not helpful when activist groups second-guess pest control operations and unfairly malign valuable control methods, while offering only insufficient alternatives.

Food security for millions of people is at stake, which underscores the need for pest control operations to disregard misinformed activist advice and focus on what works.

Angela Logomasini is a senior fellow at the Competitive Enterprise Institute. Logomasini specializes in environmental risk, regulation and consumer freedom.

A version of this article was originally posted at the Competitive Enterprise Institute and has been reposted here with permission. The Competitive Enterprise Institute can be found on Twitter @ceidotorg

Genomic Cold War? More nations joining the US in using biotechnology to enhance military capabilities

The UK government recently announced an £800 million, taxpayer-funded Advanced Research and Invention Agency (Aria). The brainchild of the British prime minister’s former chief adviser, Dominic Cummings and modelled on the US Defense Advanced Research Projects Agency, Darpa, the organisation will focus partly on genomic research.

Genome technology is becoming an increasingly important part of military research. So given that the UK boasts some of the best genomic research centres in the world, how will its new agency affect the wider genome technology warfare race?

In 2019, Darpa announced that it wished to explore genetically editing soldiers. It has also invested over US$65 million (£45 million) to improve the safety and accuracy of genome-editing technologies. These include the famous Nobel prize-winning Crispr-Cas molecular scissors – a tool that can edit DNA by cutting and pasting sections of it.

But the accessibility and low cost of Crispr-based technologies have caused concern about potential military genetic modification and weaponisation of viruses or bacteria. Weaponised pathogens such as smallpox or tuberculosis could be extremely destructive.

The US is not alone in its military pursuit of genome technology. Russia and China have either stated or been accused of using genomic technology to enhance military capabilities.

The super soldier

Universal Soldier and Captain America are just two of the Hollywood movies that have explored the concept of the super soldier. Despite the concept’s sci-fi pedigree, several countries are exploring the prospect in earnest. Darpa intends to explore genetically editing soldiers to turn them into “antibody factories”, making them resistant to chemical or biological attacks.

Jean-Claude Van Damme, Dolph Lundgren and Tommy “Tiny” Lister line up in a scene from the 1992 movie “Universal Soldier.” Credit: TriStar/Getty Images

In December 2020, the then US director of national intelligence, John Ratcliffe, said there was evidence that the Chinese military was conducting human experimentation in an attempt to biologically boost soldiers. This followed a report by the Jamestown policy thinktank that highlighted reports suggesting that Crispr would form a keystone technology in China to “boost troops’ combat effectiveness”. No further details were given, however.

Not all countries are prepared to use gene editing or even genomic technology to enhance soldiers, however. The French military ethics committee has recently approved research on soldier “augmentation”, such as implants that could “improve cerebral capacity”. However, the committee warned that certain red lines could not be crossed, including genome editing or eugenics. In the more candid words of the French minister of the armed forces, Florence Parly, this amounted to “a yes to Ironman, but a no to Spiderman” (Ironman gets his superpowers from a suit, whereas Spiderman is bitten by a radioactive spider).

In Russia, the military is looking to implement genetic passports for its personnel, allowing it to assess genetic predispositions and biomarkers, for example, for stress tolerance. This could help place soldiers in suitable military lines, such as navy, air force and so forth. The genetic project also aims to understand how soldiers respond to stressful situations both physically and mentally.

The UK position

There are signs that the UK will be bolder and less accountable in its genetic defence research than many other countries. For example, Aria won’t be subject to freedom of information requests, in contrast with Darpa.

The UK has also been at the forefront in enabling controversial, pioneering non-military genome technology, such as three-parent babies. And there has been no shortage of government reports that have stressed the importance of genome technology in the domain of defence and security.

In 2015, a UK national defence review highlighted the influence that advances in genetic engineering can have for “security and prosperity”. In the recent 2021 Security, Defence, Development and Foreign Policy review the UK government once again stressed its significance for “defence and national security”.


The proposed lack of accountability of Aria, combined with the government’s general mission to expand genome technology into security and defence applications, will create a hotbed of debate and discussion. In recent years, British scientists have received Darpa funding for controversial genomic research, such as the genetic extinction of invasive species such as mosquitoes or rodents. Despite its promise, such research could damage food security and threaten the wider ecosystems of nations.

Genome technology deployment needs to be managed in a manner that is universally agreed, ethical and scientifically robust. If it isn’t, a new arms race for advances in this research will only lead to more radical and potentially dangerous solutions. There are many unanswered questions about how Aria will help genome research within the military sphere. The pathway the UK chooses will have lasting consequences for how we perceive genome tech in the public space.

Yusef Paolo Rabiah is a PhD Candidate at STEaPP UCL. Yusef’s PhD is focused on developing public policy frameworks for the introduction of germline genome editing technologies into the UK. Find Yusef on Twitter @PaoloYusef

A version of this article was originally posted at the Conversation and has been reposted here with permission. The Conversation can be found on Twitter @ConversationUS

Incurable Huntington’s disease? microRNA offers hope in the wake of failed clinical trials

A recent DNA Science post considered the ebb and flow of treatment possibilities for Alzheimer’s disease. This week, it’s Huntington’s disease.

Like Alzheimer’s, the less familiar HD also affects the brain, but HD is always inherited and is much rarer. The only treatments for HD manage symptoms, some of them prescribed off-label, borrowed from other conditions. A treatment that addresses the underlying cause of the disease, which delays onset or slows progression, has been elusive for decades.

A disease like no other

HD is one of 40 “expanding repeat” diseases. A tiny part of a gene repeats many times, resulting in an encoded protein burdened with extra amino acids that interfere with its folding, rendering the protein sticky. In HD, mutant huntingtin protein gums up neurons in the brain’s striatum, blocking signals essential for controlling movement and thinking. Behavior changes too – anger and aggression may soar as irritability, loss of impulse control, and confusion reign. The white matter of the brain – the axons of those neurons – shrinks.

In HD, a triplet of DNA bases – CAG – in the huntingtin gene repeats more than the normal 35 times. The more extra CAG copies, the younger the symptom onset, the more severe the uncontrollable movements, and the faster the physical and mental deterioration.
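
The repeat rule is easy to make concrete. Below is a minimal sketch (a toy for illustration, not a clinical tool): it counts the longest uninterrupted CAG run in a sequence and applies the 35-repeat cutoff mentioned above. The sample sequence is invented, and real diagnostic reporting also distinguishes intermediate and reduced-penetrance repeat ranges that this sketch ignores.

```python
# Toy sketch: find the longest run of CAG triplets in a DNA string and
# apply the >35-repeat threshold described in the text. Illustrative only.
import re

def longest_cag_run(seq: str) -> int:
    """Return the length (in repeats) of the longest uninterrupted CAG run."""
    runs = re.findall(r"(?:CAG)+", seq.upper())
    return max((len(r) // 3 for r in runs), default=0)

def classify(repeats: int) -> str:
    # Simplified two-way call; clinical ranges are more nuanced.
    return "expanded (HD-associated)" if repeats > 35 else "within normal range"

sample = "ATG" + "CAG" * 42 + "TGA"   # hypothetical expanded allele
n = longest_cag_run(sample)
print(n, classify(n))                  # 42 expanded (HD-associated)
```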

HD is inherited as an autosomal dominant: each child of an affected individual stands a 50:50 chance of having inherited the disease. An affected person has both a normal and mutant copy of the gene.
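
The 50:50 odds follow from simple Mendelian segregation: the affected parent passes on either the mutant or the normal allele with equal probability. A short sketch (allele labels are illustrative) enumerates the possibilities:

```python
# Sketch of autosomal dominant transmission: the affected parent carries one
# mutant (H) and one normal (h) allele; each child inherits one allele from
# each parent at random, so half the possible offspring are affected.
import itertools

affected_parent = ["H", "h"]      # H = expanded huntingtin allele (dominant)
unaffected_parent = ["h", "h"]

offspring = [a + b for a, b in itertools.product(affected_parent, unaffected_parent)]
p_affected = sum("H" in child for child in offspring) / len(offspring)
print(offspring, p_affected)   # ['Hh', 'Hh', 'hh', 'hh'] 0.5
```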

Because symptom onset is typically in adulthood, before predictive testing became available in the mid 1990s, people would have children before they knew they were destined to develop the disease. DNA Science covered the history of HD, and what it’s like to lose family to it, here.

Considering this clinical picture of devastation against a backdrop of a dearth of treatments for a rare disease that might not get much attention in the midst of a pandemic, setbacks in clinical trials are especially hard to take.

Drug trials halted

On March 22, Roche ceased treating new patients in its phase 3 clinical trial of tominersen, following a recommendation from an independent data monitoring committee, part of the normal trajectory toward drug approval. Tominersen is an antisense oligonucleotide designed to bind the RNA transcribed from the mutant huntingtin gene, blocking its translation into huntingtin protein. DNA Science covered the launch of the phase 1 clinical trial in 2015, from Isis Pharmaceuticals, which rebranded as Ionis after the rise of the terrorist group of the same name. Roche partnered with Ionis in December 2017 to further develop the drug.

No one was harmed from tominersen, but so far the data analysis indicates that no one was helped. What actually happened is difficult to discern from media statements. For example, the committee stated that their thumbs-down wasn’t due to “any new emergent safety concern, but on a broad assessment of the benefit/risk.” A Roche rep cited an “unfavorable efficacy trend.” I suppose that means that under the conditions of the phase 3 trial, the drug might have lowered huntingtin protein levels, but didn’t have a detectable effect on symptoms or rate of disease progression. I await publication of a paper to learn the details.

Results of the phase 3 study, called GENERATION HD1, were anxiously anticipated because it’s the largest clinical trial for HD so far. And the findings will be useful no matter what they are. “The data generated will significantly advance our understanding of huntingtin-lowering as a potential treatment approach,” said Levi Garraway, MD, PhD, chief medical officer at Roche. The company has two ongoing trials of the drug at earlier stages.

The second dose of bad news in the HD community came on March 29, when Wave Life Sciences announced that the lowering of the abnormal huntingtin protein in an early phase clinical trial, PRECISION-HD, “didn’t support” further development of two drug candidates. Their technology targets a SNP found in the mutant gene but not the normal version. (A “single nucleotide polymorphism” is one site in a gene that differs in a population.) The treatment is a synthetic small piece of nucleic acid, slightly different from the natural version, that binds to and silences the mRNA transcribed from the mutant gene.
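
The allele-selective idea can be sketched with a toy model of perfect-match binding (the sequences are invented and the chemistry is vastly simplified; this is not Wave’s actual design):

```python
# Toy model of allele-specific silencing: a probe complementary to the
# mutant-allele sequence binds only the variant it matches perfectly.
# All sequences are invented for illustration.
def complement(seq: str) -> str:
    """Watson-Crick complement for an RNA string."""
    return seq.translate(str.maketrans("ACGU", "UGCA"))

mutant_mrna = "AUGGCAGUA"   # hypothetical mutant-allele mRNA (A at the SNP site)
normal_mrna = "AUGGCCGUA"   # hypothetical normal-allele mRNA (C at the SNP site)
probe = complement(mutant_mrna)   # antisense strand matching the mutant allele

def binds(probe: str, target: str) -> bool:
    """Perfect-match binding: every probe base pairs with the target base."""
    return all(complement(t) == p for p, t in zip(probe, target))

print(binds(probe, mutant_mrna), binds(probe, normal_mrna))  # True False
```

One base of mismatch is enough to abolish “binding” in this toy model, which is the intuition behind targeting a SNP present only on the mutant copy.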

In one trial of 88 participants, the drug didn’t change levels of abnormal huntingtin. In the other trial of 28 people, levels diminished, but “effects were inconsistent.” Analysis of protein level in the cerebrospinal fluid indicated that dosing high enough to affect symptoms safely might not be possible.

These results are disappointing, but further analysis of the data from the discontinued trials could have value. Might analyzing the participants one by one reveal a few who did improve? And if so, what do they share?

Jan Nolta, PhD, director of the stem cell program at the UC Davis School of Medicine and director of the Institute for Regenerative Cures, who works on HD, explains the possible silver lining:

Please stay tuned for further information. Once the data are unblinded, of those receiving the therapy rather than placebo, there might be certain people who were “responders” whereas it wasn’t helping others and that flattened the overall curve – this info needs to be carefully analyzed. The therapy could be truly helping some participants. Then a future trial could be tailored to those fitting those inclusion criteria once it is figured out. It’s important for us all to keep in mind that the trials can be an iterative process.

Her Rx: hope.

The seeming setbacks actually mirror how science works: if the hypothesis is disproved, go back to the drawing board and design new experiments to test in other ways.

A microRNA gene therapy strategy

The metaphor of drug development as a pipeline is apt. New options are always arising.

While the HD community has been reeling from the two recent clinical trial disappointments, gene therapy company uniQure announced enrolling the first few patients in a phase 1/2 clinical trial of its treatment for early HD, called AMT-130. It is a gene therapy that is delivered surgically to the striatum at HD Centers of Excellence, with nine sites in the US and Europe starting in a few months. Patients will be randomized to receive the gene therapy or sham surgery. The blinding – which patient received which intervention – will remain in place for a year; the participants will then be followed for five years and their disease progression compared.

AMT-130 is a viral vector (adeno-associated virus 5) that delivers DNA encoding a microRNA that lowers huntingtin protein. MicroRNAs are tiny RNA molecules that function as “dimmer switches” to control expression (transcription of mRNA) of specific sets of genes. They’re normally in cells, and so AMT-130 harnesses a natural process. The goal is to silence disease-causing genes without harming other genes. The microRNAs may even travel neuron-to-neuron in fleets of exosomes, the tiny natural bubbles that ferry molecules between cells.

To accompany announcement of the start of the clinical trial, a paper in Science Translational Medicine describes the microRNA-based gene therapy in a minipig model. The title is encouraging: “Widespread and sustained target engagement in Huntington’s disease minipigs upon intrastriatal microRNA-based gene therapy.”

MRI revealed that the injected viral vectors were distributed throughout the pigs’ brains and lowered levels of the abnormal huntingtin protein 30% to more than 75%, depending on the brain region. The microRNAs were still dampening protein production a year later. So the clinical trials just getting underway will reveal whether those effects translate into improved symptoms in people.

The halting of clinical trials is devastating news, especially for a rare disease with no treatments. But with eclectic strategies in the works to silence or obliterate the errant expanding gene, researchers will, one day, find an approach that works – or more than one.

Ricki Lewis has a PhD in genetics and is a science writer and author of several human genetics books. She is an adjunct professor for the Alden March Bioethics Institute at Albany Medical College. Follow her at her website or Twitter @rickilewis

A version of this article was originally posted at PLOS and has been reposted here with permission. PLOS can be found on Twitter @PLOS

‘Flawed process leads to flawed science’: Why the WHO’s International Agency for Research on Cancer claims glyphosate causes cancer

The recent excellent article by Josh Bloom, “NYC Pol Uses Phony Cancer Scare & ‘Children’ to Ban Glyphosate in Parks,” talks about the scare tactics used by a council member in New York to ban glyphosate (Roundup) from city parks. I’m taking a deeper dive, looking at how the WHO’s International Agency for Research on Cancer (IARC) and our EPA determine whether or not glyphosate causes cancer. A flawed process leads to flawed science, which – like radioactivity – stays around forever!

The following is the essence of the controversy:

  • IARC has classified glyphosate as “probably carcinogenic to humans.”
  • The EPA has classified glyphosate as “not likely to be carcinogenic to humans.”

How is it possible that two organizations had the same data available and came to opposite conclusions?

The process

The process used to determine whether or not a chemical causes cancer is to examine four types of studies:

  • Epidemiology (human) studies
  • Studies in laboratory animals
  • Genotoxicity studies – the damage to genetic material
  • Exposure studies – how people are exposed to the chemical

There were fundamental differences between the studies selected for evaluation by the EPA and IARC that contributed to the different conclusions:

EPA: The EPA collected data by searching the literature and other publicly available sources, as well as from its internal databases and from studies submitted to the EPA by industry for pesticide registration purposes.

IARC: IARC limited its data collection to peer-reviewed studies available in the open literature. Industry-submitted studies that were not available in the public domain were excluded.

But the types of studies selected, and their quality, determine the results.

Limiting the data is never good for the scientific process.

Because of their selection criteria, IARC did not examine the complete database. IARC followed the old mantras that “industry is bad,” therefore industry studies must be tainted, and that only peer-reviewed published studies can be trusted. But the reality is that studies carried out by industry are just as valuable as studies carried out by academics. Peer review does not guarantee quality studies – some of the most significant cases of fraud in science were carried out by scientists who published in peer-reviewed journals. I don’t often compliment the EPA, but in this case, they examined all the data they could find and didn’t hide behind the mantras of “industry is bad” and “peer review is good.”

Not all studies are equal

The EPA evaluates the totality of a study using a weight-of-evidence approach. Studies are assessed based on their quality, and higher-quality studies are given more weight in the analysis than lower-quality studies. Quality is assessed based on many factors, including whether appropriate methodologies and statistical methods were employed; whether sufficient data and details were provided about how the study was carried out; and whether exposure measurements were clear and well-defined.

In a weight-of-evidence approach, if one study involving a large number of people with good exposure measurements of the chemical showed no increase in cancer, while several small studies without good exposure measures showed an increase, the larger study would be given more weight in the overall evaluation than the smaller studies.

IARC, on the other hand, used a “study-by-study” approach in their analysis. Each study was examined separately, and the studies were not weighted against one another. The philosophy underlying their analysis was “one positive study outweighs all the negative studies,” which is in line with the adoption of the “Precautionary Principle” by the European Union. The Precautionary Principle states that uncertainty about possible hazards is a strong reason to ban or limit the use of chemicals or technology.
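
A toy calculation makes the difference between the two approaches concrete. All effect estimates (“relative risks”) and quality weights below are invented for illustration:

```python
# Illustrative only: invented relative risks (rr) and quality weights for
# five hypothetical studies. A weight-of-evidence approach pools studies
# weighted by quality; a "one positive study wins" reading flags a hazard
# if any single study shows an elevated result.
studies = [
    {"name": "large cohort",  "rr": 1.0, "quality": 5.0},  # null, high quality
    {"name": "small study A", "rr": 1.4, "quality": 1.0},  # positive, low quality
    {"name": "small study B", "rr": 1.3, "quality": 1.0},
    {"name": "small study C", "rr": 1.0, "quality": 2.0},
    {"name": "small study D", "rr": 0.9, "quality": 2.0},
]

# Weight-of-evidence: quality-weighted average effect across all studies.
woe = sum(s["rr"] * s["quality"] for s in studies) / sum(s["quality"] for s in studies)

# Study-by-study, "one positive outweighs the negatives": any rr > 1 triggers a call.
hazard_call = any(s["rr"] > 1.0 for s in studies)

print(f"quality-weighted RR: {woe:.2f}")          # close to 1.0 -> no clear signal
print(f"single-positive-study rule flags hazard: {hazard_call}")  # True
```

The same invented data yield “no clear signal” under one reading and “hazard” under the other, which is the crux of the EPA–IARC divergence described above.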

Additionally, IARC did not follow any established procedure for evaluating the strength or quality of the studies. Although IARC states they evaluate study quality, the procedure they followed was not published – so much for transparency.

Some data specifics

Genotoxicity studies examine damage to an organism’s DNA. Some genotoxicity studies are done in cells and are quick, inexpensive tests that determine whether a chemical causes a mutation; others are done in laboratory animals and examine damage to the animal’s genetic material. But just because a chemical damages DNA does not mean it causes cancer, and conversely, some chemicals that cause cancer do not damage DNA. Therefore, both EPA and IARC use genotoxicity studies as secondary evidence in the evaluation of cancer-causing substances; genotoxicity is not the definitive factor in the analysis.

As discussed by Dr. Bloom, the EPA measures risk, while IARC measures hazard. The EPA uses studies on exposure to the chemical to determine the likelihood that cancer will occur in the general population (risk). IARC considers whether the chemical could cause cancer under any circumstance, even the very unlikely (hazard).

As every toxicologist knows: The dose makes the poison. Therefore, the results from a study in which workers were exposed to very high levels of a chemical have far less relevance to the general population exposed to very low levels of a chemical.
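
The hazard-versus-risk point can be put in simple arithmetic. Assuming (purely for illustration) a linear dose-response extrapolated from a hypothetical worker study, the same chemical implies a vastly smaller risk at general-population exposure levels; all numbers are invented:

```python
# Toy illustration of risk = hazard x exposure: derive a dose-response
# slope from a hypothetical high-dose occupational study, then apply it
# to a much lower general-population dose. All numbers are invented, and
# real risk assessment uses far more elaborate dose-response models.
excess_risk_workers = 1e-2   # hypothetical excess lifetime risk seen in workers
worker_dose = 50.0           # mg/kg/day, hypothetical occupational exposure
public_dose = 0.01           # mg/kg/day, hypothetical general-population exposure

slope = excess_risk_workers / worker_dose   # excess risk per unit dose
public_risk = slope * public_dose           # risk at the much lower public dose
print(f"estimated public excess risk: {public_risk:.1e}")  # 2.0e-06
```

A hazard-based reading stops at “workers showed excess risk”; a risk-based reading carries the exposure difference through, which is why the two framings diverge.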

Overall analysis

Epidemiologic Studies: The EPA considered 58 studies, IARC 26. IARC concluded that there was limited evidence in humans because one study showed a positive association with Non-Hodgkin’s lymphoma. The EPA felt this was “insufficient evidence in humans” because other studies of equal quality contradicted the one study that showed a possible association with Non-Hodgkin’s lymphoma.

Animal Studies: The EPA considered 14 studies, IARC 10. The EPA concluded that there was insufficient evidence in animals because, although positive trends were seen in several studies, the tumor findings were not reproduced in studies of equal quality. IARC concluded that there was sufficient evidence in animals because positive trends were reported in several studies. “Positive trends” does not imply statistical significance; it just means that several data points showed an increase in tumors.

Genetic Toxicity: The EPA considered 104 studies, IARC 1118. The EPA concluded that there was no overall evidence of genotoxicity because the overall weight-of-evidence was negative. The positive results were only seen at high doses, not relevant to human exposures. IARC concluded that there was strong evidence of genotoxicity based on the positive results seen in some studies in rats and mice.


In summary: 1) IARC examined a limited number of studies because it rejected industry studies not published in peer-reviewed journals; 2) it used a study-by-study approach in which one positive study outweighed many negative studies of equal or better quality; and 3) by focusing on hazard rather than risk, it ignored the empirical exposure data. IARC’s process of evaluation is flawed, leaving the impression that it had a predetermined outcome.

The use of a flawed process leads to bad science that never goes away. Particularly in this world of social media and TV pundits, bad science gets repeated over and over again and accepted as gospel. Simultaneously, the objections are ignored or forgotten as the media searches for quick and eye-catching headlines.

Susan Goldhaber, M.P.H., is an environmental toxicologist with over 40 years’ experience at Federal and State agencies and in the private sector, with an emphasis on issues concerning chemicals in drinking water, air, and hazardous waste. Her current focus is translating scientific data into usable information for the public.

A version of this article was originally posted at the American Council on Science and Health and has been reposted here with permission. The ACSH can be found on Twitter @ACSHorg

The COVID pandemic has changed the future of CRISPR and synthetic biology, says Rahul Dhanda, Sherlock Biosciences CEO

It’s been a transformative year for synthetic biology and for the world. The COVID-19 pandemic exposed weaknesses in our global health systems but also validated the power of synthetic biology to rapidly develop critical diagnostics and mRNA vaccines. I spoke with Rahul Dhanda, CEO of Sherlock Biosciences, one of the leading Crispr diagnostics companies.

On [May 3], Sherlock announced a new partnership with LogicInk to leverage Sherlock’s Crispr platform to develop a wearable COVID-19 test. What is synthetic biology’s role in the next stages of the pandemic, and what could the next 12 months look like for the industry and for Crispr-based diagnostics?

Rahul Dhanda, CEO of Sherlock Biosciences. Credit: Doug Levy

John Cumbers: It’s been a year since Sherlock Biosciences received approval for the first-ever FDA authorization of a Crispr product. What impact has this had for both Sherlock and synthetic biology?

Rahul Dhanda: I think that authorization was historic for both Sherlock and the synthetic biology community because it represented a translation of these technologies into actual healthcare use. The impact for Sherlock has been a vast acceleration of the platform.

For synthetic biology, I think there’s this moment where the pandemic has given us an opportunity, and synthetic biology has actually delivered on the promise that it’s always had. In addition to mRNA therapeutics, the community has moved from rapid design and rapid response to rapid impact, in very cost-effective ways. I feel like this past year has moved the industry from an idea to something that is very concretely recognized as a real solution.

JC: What is the 221b Foundation?

RD: The 221b Foundation is a nonprofit that Sherlock established to place our CrisprCOVID IP in an open innovation model for anyone to access if they want to develop a Crispr diagnostic to fight the pandemic.

The goal of it is not just to make more solutions available for COVID testing and addressing the pandemic. It’s also to take those profits and reinvest them in STEM education opportunities, particularly for minorities, young girls, and women. The profits that we make from [the pandemic] are something we feel we need to be giving back.

A pandemic is not an opportunity to just hoard and harvest profit. It has accelerated so many companies and so many platforms that we felt the responsible thing was to take the economic benefits and reinvest them in the communities. We want to make sure that for things we can’t necessarily influence with our products, we can have more of an influence in terms of establishing health equity.

If we can increase the representation of those who aren’t always represented in STEM programs, we can also increase the way that those disciplines think about medicine and those patients who aren’t always receiving the best of care or equal treatment.

JC: How does LogicInk fit into the overall vision for Sherlock?

RD: Sherlock’s vision is really to make sure that individuals can take control of their healthcare. LogicInk has a technology that can leverage Crispr in a way that potentially delivers instrument-free, power-free, self-administered tests similar to our INSPECTR platform. Our goal isn’t just to develop our technologies. Our goal is to use the advantages we have in our platforms and the insights that we’ve gained to make sure that innovation across the industry continues to deliver the best solutions for patients globally.

JC: As more strains of SARS-CoV-2 emerge and evolve, what can the synthetic biology industry do to respond quickly and to reach underserved populations?

RD: Synthetic biology offers a very unique set of solutions to problems like emerging and evolving strains. One of the things that I think is most important is that these tools—whether they’re therapeutic or diagnostic—have been proven to be very robust and rapid in their response to new information. By taking the genetic sequences from these new strains, we can rapidly develop diagnostic tests. Now that we’ve proven [synthetic biology techniques] with mRNA vaccines, we can also rapidly develop specific responses to emerging strains. This rapid response won’t just be in the vaccine space, we’re going to see this in a therapeutic space as well.

I think half of the equation is just how powerful these tools are in quickly responding, designing, and building these kinds of therapeutic and diagnostic products. The other half is that the efficiencies we’re gaining from these new ways of doing things are driving up both scale and reach; the more of this we do, the more we will have economies of scale. Even in the absence of economies of scale, these techniques have proven to be much more cost-effective than traditional techniques. Not only are we making better solutions more quickly, but we’re also making them cheaper and more accessible.

JC: The last 12 months have been transformative for the synthetic biology industry in so many ways. What do the next 12 months look like for Sherlock?

RD: For Sherlock, it’s really taking these two platforms we have—Crispr and our synthetic gene network and cell-free system, INSPECTR—and driving those platforms forward. We have learned a tremendous amount from the Crispr product we developed. We’ve granted almost half a dozen licenses to [deliver new products] with partners to take our technology into various forums in various geographies. And we’ve channeled what we’ve learned into INSPECTR, which is a unique, instrument-free, power-free, self-administered test that has lab-accurate results equivalent to any PCR.

What we’re anticipating over the next 12 months is furthering both of these platforms, but they are now going into “product focus.” We’re developing a COVID-based respiratory test with our INSPECTR platform. We anticipate we’ll be expanding our menu as well as expanding our products, and doing that on a more rapid timeline. The next 12 months for Sherlock is moving what is an early-stage product development company into a more mature, more robust company that more frequently launches products.

JC: It’s been great talking with you, Rahul. I’m looking forward to what comes next.

RD: Thanks so much, John, this has been great.

John Cumbers is the founder and CEO of SynBioBeta, the leading community of innovators, investors, engineers, and thinkers who share a passion for using synthetic biology to build a better, more sustainable universe. He is an operating partner and investor at the hard tech investment fund Data Collective, and a former bioengineer at NASA. Follow him on Twitter @johncumbers and @SynBioBeta

A version of this article was originally published at Forbes and has been republished here with permission. Find Forbes on Twitter @Forbes

Science Facts and Fallacies Podcast: COVID’s mysterious origins; Why some anti-vaxxers got their shots; Unwise J&J ‘pause’?

Where did SARS-CoV-2 come from? For most of the last year a natural origin story was the predominant view among experts. But the situation isn’t so clear today. Why did some vaccine skeptics end up getting their COVID shots? And could their stories help expand vaccine uptake moving forward? With the benefit of hindsight, what can we say about the now-infamous J&J vaccine “pause” in April?

Join GLP contributor Cameron English and guest host Ally Kennedy on this episode of Science Facts and Fallacies as they break down these latest news stories:

Considered a conspiracy theory just a year ago, the possibility that SARS-CoV-2 was leaked from a laboratory is now a serious hypothesis, alongside the idea that the virus naturally evolved and jumped from other animals to humans. This status change for the so-called “laboratory spillover” explanation was prompted by a China–WHO joint study released in early 2021. The 313-page investigation dedicated just four pages to the possibility of a lab accident, prompting 16 scientists to call for a more thorough examination of how the virus arose:

We must take hypotheses about both natural and laboratory spillovers seriously until we have sufficient data. A proper investigation should be transparent, objective, data-driven, inclusive of broad expertise, subject to independent oversight, and responsibly managed to minimize the impact of conflicts of interest. Public health agencies and research laboratories alike need to open their records to the public.

Why do some vaccine-hesitant Americans ultimately get their COVID shots? For some it’s a simple matter of getting to attend baseball games again, while others worry that they won’t be able to care for their children if they get infected. Whatever their motivations, their stories could offer insights to public health officials eager to convince even more vaccine skeptics to come around.

The CDC’s and FDA’s decision to pause distribution of Johnson & Johnson’s COVID-19 shot had a noticeable impact on vaccine uptake. Moreover, immunization skeptics used the confusing messaging (if the shot is safe, why “pause” it?) to spur their cause, creating a sort of negative feedback loop of vaccine hesitancy. Was the pause necessary, and what can we learn from the ensuing controversy before we face new infectious disease threats in the future?

Ally Kennedy is a premedical student at the University of Florida. She has a particular interest in infectious disease, pain neurology—and more recently, COVID-19. Visit her website and follow her on Twitter @AllyAnswers

Cameron J. English is the director of bio-sciences at the American Council on Science and Health. Follow him on Twitter @camjenglish

Viewpoint: ‘Killer petunias’? The shameful story of the USDA’s ‘unscientific, innovation-stifling’ process for regulating genetically-engineered flowers

Now that vivid orange, red, and purple genetically engineered petunias are approved for distribution to plant nurseries, it’s time for an accounting of the botched process followed by the US Department of Agriculture that kept these beautiful plants, and other GMO flowers, off the market for years.

Let’s go back to 2017 when USDA announced that many varieties of petunias had to be withdrawn from distribution and destroyed, solely because the harmless—but strikingly beautiful—flowers had not run the required regulatory gauntlet to be ‘approved for sale’.

Why do petunias require a government seal of approval? The answer is that they shouldn’t. The beautiful colors were the result of a harmless process of gene tweaking using highly precise and predictable molecular techniques (recombinant DNA technology) for genetic modification. But that turned them into ‘GMOs’—the scourge of anti-biotechnology activists and the target of functionaries in Washington who are always alert to opportunities to arrogate to themselves new regulatory responsibilities.

Why the 2017 ban?

Government officials readily acknowledged in 2017 that GE petunias posed no threat to human health or the environment—and likely were sold, under the radar, for years in the US and throughout Europe (to no particular effect other than beautifying gardens). Nevertheless, as a manifestation of sheer bloody-mindedness, regulators directed sellers to destroy the flowers, simply because they didn’t have a permit—which the USDA would not issue for genetically engineered flowers without copious testing and bureaucratic red tape that could cost tens of millions of dollars and take many years.

In January of 2021, four years later, USDA finally approved, or ‘deregulated,’ the genetically engineered petunias. Now their availability in the US will be up to the plant breeding company that was granted approval, not to anonymous bureaucrats in Building 13 at the Agriculture Department. (It is noteworthy that Canada approved them for sale three years ago.) Look for them in the US in spring 2022, breeders say.

“This is the culmination of two-and-a-half years of work on behalf of our breeders and the U.S. government,” said Christian Westhoff, president of the German plant breeding company that developed the varieties. “This marks a milestone, not only for Westhoff and our partners, but our entire industry, as we move forward into a new era of plant breeding and variety development.”

Maybe, maybe not. Aside from those specifically approved by USDA in January, “other petunia varieties developed using genetic engineering remain subject to APHIS regulation,” according to a USDA April update. So, four years after Petunia-Gate, the approval process remains unnecessarily cumbersome and expensive. 

USDA’s January 2021 announcement listed the hoops that the petunias’ developers and regulators had to jump through: a 50-page petition from the plant breeder; a 65-page environmental assessment by USDA; a 15-page “Finding of No Significant Impact” on the environment, prepared by USDA to satisfy the National Environmental Policy Act; a 34-page “Plant Pest Risk Assessment” from USDA; and finally, a mercifully short “Final Determination of Nonregulated Status.”  

Note that none of these would be required for new varieties of petunias created with less precise, less predictable, “conventional” (that is, pre-molecular) techniques of genetic modification, as has been done by plant breeders and home gardeners for centuries.

How did this situation come about?  

Creating unnecessary bureaucracy

In short, petunias became submerged in the federal government swamp. Frank Young, the Food and Drug Administration Commissioner during the Reagan Administration, used to quip, “Dogs bark, cows moo, and regulators regulate.” Nowhere has that tendency toward bureaucratic expansionism and empire-building been more evident than in the creation of biotechnology regulations, which began to proliferate during the 1980s.

In an attempt to curb federal agencies’ temptation to create superfluous new bureaucracies to regulate the molecular techniques of genetic engineering, the White House Office of Science and Technology Policy in 1992 published what became known as the “Scope Document” to establish narrowly targeted, risk-based oversight by various federal departments and agencies. The essence of the Scope Document:

  • “A risk-based, scientifically sound approach to the oversight of planned introductions of biotechnology products into the environment that focuses on the characteristics of the biotechnology product and the environment into which it is being introduced, not the process by which the product is created”;
  • “Exercise of oversight in the scope of discretion afforded by statute should be based on the risk posed by the introduction and should not turn on the fact that an organism has been modified by a particular process or technique”; and
  • “Oversight will be exercised only where the risk posed by the introduction is unreasonable, that is, when the value of the reduction in risk obtained by additional oversight is greater than the cost thereby imposed. The extent and type of oversight measure(s) will thus be commensurate with the gravity and type of risk being addressed, the costs of alternative oversight options, and the effect of additional oversight on existing safety incentives.”

These principles were eminently reasonable, scientifically defensible and risk-based. By 1989, there was already a widespread consensus in the scientific community that molecular genetic engineering techniques are more precise and predictable than older, conventional ones. Although there is no rationale for case-by-case reviews of genetically engineered products of de minimis risk, USDA has continued that costly and dilatory policy.

USDA under the microscope

The saga of USDA’s regulation of new plant varieties (such as genetically engineered petunias) illustrates what happens when unscientific, excessive regulation is mixed with bureaucratic empire-building. An example is USDA’s response to my op-ed, “Attack of the Killer Petunias,” written during the height of the 2017 fiasco, which represents the culmination of a kind of bureaucratic trifecta surrounding the regulation of genetically engineered plants.


The three elements represent the apotheosis of cluelessness and incompetence: the flawed policy itself, which categorically ignores the principles articulated in the Scope Document; the 2017 enforcement action in response to the distribution of unapproved petunias, pursuant to that policy; and a letter from USDA to the editor of the Wall Street Journal.

My op-ed described how the policy, combined with bureaucratic injudiciousness at APHIS, gave rise to an utterly asinine decision: USDA regulators demanding the destruction of vast numbers of at least 50 varieties of strikingly beautiful, vivid-hued petunias—not because they posed any sort of danger to health or the natural environment, but because they were technically in violation of unscientific, misguided, 30-year-old government regulations. The flowers, you see, were crafted with modern, molecular genetic engineering techniques.

Mind you, by 2017, those petunias, developed more than two decades earlier, had been sold unnoticed and uneventfully for many years. Their pedigree was only serendipitously discovered by a Finnish plant scientist who noticed them in a planter at a train station in Helsinki, Finland, and became curious about their unusual color. (He picked one and confirmed in his own laboratory that they were, indeed, genetically engineered, and tipped off Finnish regulators, who then spread the word internationally.)

Teemu Teeri, the Finnish plant scientist who first discovered the genetically engineered petunias. Credit: University of Helsinki

Why was there any concern about what anti-biotechnology critics dubbed ‘killer petunias’? A brief primer on regulation can help us understand how this absurd situation came about. APHIS had for decades regulated the importation and interstate movement of organisms (plants, bacteria, fungi, viruses and others) that are plant pests, which were defined by means of an inclusive list—essentially a binary “thumbs up or down” approach. A plant that an investigator wished to introduce into the field was either on the prohibited list of plant pests, and therefore required a permit, or it was exempt. (Petunias are not on the list.)

This straightforward approach is risk-based, in that the organisms required to undergo case-by-case governmental review are an enhanced-risk group (organisms that can injure or damage plants), as opposed to organisms not considered to be plant pests.  But since the 1980s, this process has had an evil twin: a regime focused exclusively on plants altered or produced with the most precise genetic engineering techniques if they contain even a snippet of DNA from a plant pest. 

The original concept of a plant pest, something known to be harmful, has been tortured into a new category: a “regulated article.” For decades, virtually every plant modified with molecular techniques has met this definition and been required to undergo a lengthy case-by-case review, regardless of potential risk. (After many thousands of expensive, dilatory reviews, not a single one has been found to be a plant pest.)

Under this paradigm, which only an empire-building regulator could love, the genetically engineered petunia varieties, with names like Trilogy Mango, Trilogy Deep Purple, and African Sunset, became “regulated articles” because they contain a non-infectious, tiny snippet of DNA from cauliflower mosaic virus, an organism officially classified as a plant pest.

Regulators’ zealotry boosts development costs and stifles innovation

Petunias are not the only GE flower to come face-to-face with the federal bureaucracy. The National Agriculture and Food Research Organization in Japan and the biotech company Suntory have invested billions of yen in the creation of true blue chrysanthemums, which are naturally pink or red. After conventional breeders spent decades failing to create blue blooms, the team engineered blue chrysanthemums by inserting genes that stimulate the synthesis of blue pigment from two other blue-flowering plants: butterfly peas and Canterbury bells. The gorgeous innovation was hailed around the world—except by the USDA, which rejected their import in 2018.

Regulators have long since acknowledged that there’s no plausible expectation of harm to human health, animals or the environment from the chrysanthemums or petunias. They were targeted simply because they were created with highly precise molecular genetic engineering techniques, whose use requires a permit for field testing and commercialization.

For more than 30 years, unnecessary regulations have been imposed on harmless and even beneficial genetically engineered plants. That comes with a price. APHIS’s regulatory hoops have pushed the cost of bringing a genetically engineered crop to market to an average of $136 million. And that doesn’t take into account the vast number of bureaucrat-hours expended on gratuitous paper-shuffling. That’s far too expensive to justify pursuing commercialization for an ornamental flower variety.

The last element of the bureaucrats’ trifecta is particularly disturbing. Slightly over a month after my 2017 op-ed appeared, the Wall Street Journal received a request from USDA for a “correction.” I had suggested that instead of demanding that the illegal petunias be destroyed, the regulators should either ask distributors to donate the contraband flowers to pediatric cancer wards or simply announce that they would exercise “enforcement discretion” and forego enforcement actions entirely. That prompted this response:

APHIS never asked, much less ordered, anyone to destroy genetically engineered (GE) petunias. Rather, it is our understanding that certain distributors asked their downstream commercial customers to destroy plants. APHIS has no opposition to the industry’s decision to seek the voluntary destruction of these plants; however, APHIS did not request, nor recommend, that consumers destroy GE petunias in their possession, including any petunias already planted.

However, those claims are contradicted by USDA’s own published statements. For example:

  • According to a “news item” published by USDA [now deleted from the USDA website], “APHIS is working in close cooperation with breeders and growers represented by the American Seed Trade Association (ASTA) and AmericanHort to ensure the implicated GE petunia varieties are withdrawn from distribution.”
  • In another published document, “APHIS Guidance Regarding the Destruction of Potential Genetically Engineered Petunias,” USDA made its position clear: “The Plant Protection Act gives the United States Department of Agriculture (USDA), Animal and Plant Health Inspection Service (APHIS) the authority to regulate genetically engineered (GE) organisms that may present a plant health risk, referred to as ‘regulated articles’. APHIS’ Biotechnology Regulatory Services (BRS) regulates the introduction–meaning the importation, interstate movement, and environmental release–of GE organisms that may pose a pest risk to plants under APHIS’ biotechnology regulations at 7CFR part 340…BRS has learned that GE petunias have been imported, distributed, and grown in the United States without appropriate authorization. GE petunias are regulated articles.” That document lists the names of 50 petunia varieties (as of that date) that fall “under regulatory authority of CFR part 340.”

Here’s the kicker from that second document:

This document serves as guidance to industry regarding how to destroy GE and potential GE petunias consistent with the regulations at 7CFR part 340. 

Any of the following methods may be used to destroy potential GE petunia plants if no seed [sic] are present. . . [emphasis added]

Those specified methods of destruction—for petunias that everyone agreed were completely harmless, mind you—include double-bagging and incinerating or “burial under a minimum of one (1) foot of soil.” And if you have seeds, you can grind them into oblivion.

Moreover, contrary to APHIS’s letter to the Wall Street Journal, nowhere did my op-ed make any mention of “consumers destroy[ing] GE petunias in their possession” (emphasis added).


Thus, USDA’s own published statements clearly contradict its complaints. Perhaps the officials who composed the letter were trying to avail themselves of the old inside-the-Beltway maxim that something said three times becomes a fact.

‘Bureaucratic self-interest, stupidity, and dishonesty’

The epilogue to this story is that the USDA has now approved or “deregulated” the petunias, after four years of superfluous jumping through hoops by the petunias’ breeder and USDA bureaucrats. 

From beginning to end, this sorry saga of bureaucratic self-interest, stupidity, and dishonesty exemplifies the fetid swamp of federal regulation. It also illustrates the need to replace USDA’s technique-focused approach to genetically engineered plants with one that is scientifically defensible and risk-based, proposals for which have been made by academics for decades (such as here and here).  

As addressed in my WSJ op-ed, the technique-based regulatory net—which has been in place for more than 30 years—makes no sense and affords no protection against anything, because plants that need to be reviewed are regulated elsewhere in APHIS under the long-standing jurisdictions of the Plant Protection Act and the Noxious Weed Act.

In other words, “regulated article” is a red herring that should be gutted and discarded. Much like APHIS’ Biotechnology Regulatory Services bureaucracy itself.

Acknowledgement: The author would like to thank Jon Entine for his numerous excellent suggestions on the text. This article was updated on May 26, 2021 to correct and update the regulatory context of GMO chrysanthemums.

Henry I. Miller, a physician and molecular biologist, is a senior fellow at the Pacific Research Institute. He was the founding director of the FDA’s Office of Biotechnology. Find Henry on Twitter @henryimiller

COVID and immunity: How long will it last if you get a vaccine or were infected?

The COVID vaccines are working. Data from Israel and Scotland shows that they are protecting people and may also be decreasing the spread of the SARS-CoV-2 virus. If it all holds up, people will be protected from severe disease, the amount of virus will progressively decrease, and we can truly plan for a way out of the pandemic.

Evidence is also growing that once you’ve been infected, there is a pretty good chance that you will be protected from further infections, or at the very least, have less severe disease. This makes sense, as it’s why your immune system evolved in the first place.

Over millions of years, the immune system was sculpted by the laws of natural selection. Once you’ve been infected or vaccinated, memory B and T cells persist. When you are reinfected, they wake up and eliminate the infection to such an extent that you won’t even feel sick. You can see how this made evolutionary sense. Feeling sick means you’re less likely to procreate, so there would have been a survival advantage to immunity.

However, an important question in immunology, when it comes to infectious diseases and vaccines, is how long protection might last. There are several variables here, from the type of pathogen infecting you, to how bad the initial disease is, to your overall health, and your age. All of this makes predicting what might happen with COVID challenging.

Measles or flu?

It can be useful to compare what we currently know about COVID-19 to two diseases that we know an awful lot about and for which we have effective vaccines: measles and flu. In the future, which one will COVID look like?

Measles is a much more stable virus than SARS-CoV-2 – it doesn’t mutate very much. It also usually provokes a strong immune response and so immunity tends to last a long time, either from infection or vaccines. One study showed that antibodies against measles last a lifetime. Because it is such a stable virus, when it reinfects, the vaccine you might have had years ago, or the response to the natural infection, will protect you.

Influenza, however, is different. The flu virus can change with ease. This means that we must keep vaccinating against it, as vaccines to a previous variant may not protect against a new variant. The stability of a virus is therefore a key determinant of whether protection persists.

COVID-19 probably sits somewhere between measles and flu. It’s not as stable as measles and it is not as changeable as flu. We might expect immunity to last against COVID-19, but probably not as long as measles. And we’ll probably have to vaccinate against variants that emerge, as we do for flu.

One thing we’ve got going for us is the repetitive nature of the surface of SARS-CoV-2. The spike protein covers the surface of the coronavirus in a fairly uniform manner. Antibodies to smallpox, which also has a highly repetitive surface, last a lifetime. In this situation, macrophages (a type of white blood cell that engulfs and consumes pathogens) might be better able to gobble up the antibody-coated virus.

If the spike protein mutates and antibodies can’t bind as well, it’s well worth giving a booster shot against the new spike protein — which is what is planned.

SARS-CoV-2 has a fairly uniform array of spike proteins on its surface. Credit: Wikimedia Commons

And if antibodies don’t work as well against variants, T cells might. This might mean that we won’t need boosters at all and that we might have long-term protection against multiple variants. And even if the immune response is lower against variants, we will probably still be protected from severe disease.

One important aspect of natural infection is how strong the initial immune response is. The common cold often only provokes a mild response in the upper airways. This is because a virus that limits itself to your nose isn’t much of a threat. It means that the immune response doesn’t really get going at all. It’s insufficient for memory B and T cells to emerge.

If flu causes a big fight, which the immune troops never forget, the common cold is a mere skirmish that is soon forgotten. A mild dose of COVID might be similar. If you’ve had a more pronounced disease, this might stand you in good stead and make you more resistant to reinfection. But if you only had mild disease, or if you stayed symptom free, you are at risk of reinfection.


Where vaccines come into their own is their power. They usually give a much stronger immune response than natural infection. This is because many pathogens have elaborate ways of turning off the immune response—guile that vaccines lack. This again is down to evolution.

Viruses that carry proteins that can limit immunity will be more likely to survive. This may be especially important with SARS-CoV-2, which carries several ways to limit immunity. Because the vaccines comprise either one part of the virus – such as the spike protein – or a whole inactivated virus, they don’t limit immunity, and so a robust immune response occurs. The Moderna vaccine, for example, has been shown to provoke a durable antibody response, where the natural infection is more variable.

We are now confident that infection with SARS-CoV-2 is highly likely to provide some protection against reinfection. But, given that we are seeing variants, it is wise to prepare for booster shots with new vaccines for those who are vulnerable. We will get a better idea of whether they are needed in the coming months.

If they are, COVID will be more like flu, which needs boosters. But if they’re not, then it will be more like measles, where the only threat is to those who refuse vaccination.

Luke O’Neill is an immunologist in the School of Biochemistry and Immunology at Trinity College Dublin. His research area is the molecular basis of inflammatory diseases. He has written several popular science books including his most recent ‘Never Mind the B#ll*cks, Here’s the Science’ published by Gill. Find Luke on Twitter @laoneill

A version of this article was originally posted at the Conversation and has been reposted here with permission. The Conversation can be found on Twitter @ConversationUS

Why do some animals live extraordinarily long lives — and can humans benefit from studying them?

Life, for most of us, ends far too soon — hence the effort by biomedical researchers to find ways to delay the aging process and extend our stay on Earth. But there’s a paradox at the heart of the science of aging: The vast majority of research focuses on fruit flies, nematode worms and laboratory mice, because they’re easy to work with and lots of genetic tools are available. And yet, a major reason that geneticists chose these species in the first place is because they have short lifespans. In effect, we’ve been learning about longevity from organisms that are the least successful at the game.

Today, a small number of researchers are taking a different approach and studying unusually long-lived creatures — ones that, for whatever evolutionary reasons, have been imbued with lifespans far longer than other creatures they’re closely related to. The hope is that by exploring and understanding the genes and biochemical pathways that impart long life, researchers may ultimately uncover tricks that can extend our own lifespans, too.

Everyone has a rough idea of what aging is, just from experiencing it as it happens to themselves and others. Our skin sags, our hair goes gray, joints stiffen and creak — all signs that our components — that is, proteins and other biomolecules — aren’t what they used to be. As a result, we’re more prone to chronic diseases such as cancer, Alzheimer’s and diabetes — and the older we get, the more likely we are to die each year. “You live, and by living you produce negative consequences like molecular damage. This damage accumulates over time,” says Vadim Gladyshev, who researches aging at Harvard Medical School. “In essence, this is aging.”

This happens faster for some species than others, though — the clearest pattern is that bigger animals tend to live longer lives than smaller ones. But even after accounting for size, huge differences in longevity remain. A house mouse lives just two or three years, while the naked mole rat, a similar-sized rodent, lives more than 35. Bowhead whales are enormous — the second-largest living mammal — but their 200-year lifespan is at least double what you’d expect given their size. Humans, too, are outliers: We live twice as long as our closest relatives, the chimpanzees.

Bats above average

Perhaps the most remarkable animal Methuselahs are found among the bats. One individual of Myotis brandtii, a small bat about a third the size of a mouse, was recaptured, still hale and hearty, 41 years after it was initially banded. That is especially amazing for an animal living in the wild, says Emma Teeling, a bat evolutionary biologist at University College Dublin who coauthored a review exploring the value of bats in studying aging in the 2018 Annual Review of Animal Biosciences. “It’s equivalent to about 240 to 280 human years, with little to no sign of aging,” she says. “So bats are extraordinary. The question is, Why?”

There are actually two ways to think about Teeling’s question. First: What are the evolutionary reasons that some species have become long-lived, while others haven’t? And, second: What are the genetic and metabolic tricks that allow them to do that?

Answers to the first question, at least in broad brushstrokes, are becoming fairly clear. The amount of energy that a species should put toward preventing or repairing the damage of living depends on how likely an individual is to survive long enough to benefit from all that cellular maintenance. “You want to invest enough that the body doesn’t fall apart too quickly, but you don’t want to over-invest,” says Tom Kirkwood, a biogerontologist at Newcastle University in the UK. “You want a body that has a good chance of remaining in sound condition for as long as you have a decent statistical probability to survive.”

The greater mouse-eared bat, Myotis myotis, lives more than five times as long as a typical mammal of its size. Bats are exceptionally good at preventing molecular damage from accumulating, researchers are learning. Credit: Roberto_sindaco

This implies that a little scurrying rodent like a mouse has little to gain by investing much in maintenance, since it will probably end up as a predator’s lunch within a few months anyway. That low investment means it should age more quickly. In contrast, species such as whales and elephants are less vulnerable to predation or other random strokes of fate and are likely to survive long enough to reap the benefits of better-maintained cellular machinery. It’s also no surprise that groups such as birds and bats — which can escape enemies by flying — tend to live longer than you’d expect given their size, Kirkwood says. The same would apply for naked mole rats, which live their lives in subterranean burrows where they are largely safe from predators.

But the question that researchers most urgently want to answer is the second one: How do long-lived species manage to delay aging? Here, too, the outline of an answer is beginning to emerge as researchers compare species that differ in longevity. Long-lived species, they’ve found, accumulate molecular damage more slowly than shorter-lived ones do. Naked mole rats, for example, have an unusually accurate ribosome, the cellular structure responsible for assembling proteins. It makes only a tenth as many errors as normal ribosomes, according to a study led by Vera Gorbunova, a biologist at the University of Rochester. And it’s not just mole rats: In a follow-up study comparing 17 rodent species of varying longevity, Gorbunova’s team found that the longer-lived species, in general, tended to have more accurate ribosomes.

The proteins of naked mole rats are also more stable than those of other mammals, according to research led by Rochelle Buffenstein, a comparative gerontologist at Calico, a Google spinoff focused on aging research. Cells of this species have greater numbers of a class of molecules called chaperones that help proteins fold properly. They also have more vigorous proteasomes, structures that dispose of defective proteins. Those proteasomes become even more active when faced with oxidative stress, reactive chemicals that can damage proteins and other biomolecules; in contrast, the proteasomes of mice become less efficient, thus allowing damaged proteins to accumulate and impair the cell’s workings.

This Laysan albatross is at least 69 years old, making her the world’s oldest known bird. In November 2020, she laid an egg in her nest at Midway Atoll in the Pacific Ocean, suggesting that she’s aging gently. Credit: Weedmandan/Birdshare.

DNA, too, seems to be maintained better in longer-lived mammals. When Gorbunova’s team compared the efficiency with which 18 rodent species repaired a particular kind of damage (called a double-strand break) in their DNA molecules, they found that species with longer lifespans, such as naked mole rats and beavers, outperformed shorter-lived species such as mice and hamsters. The difference was largely due to a more powerful version of a gene known as Sirt6, which was already known to affect lifespan in mice.

Watching the “epigenetic clock”

But it’s not just the genes themselves that suffer as animals age — so does their pattern of activation. An important way that cells turn genes on and off at the right time and place is by attaching chemical tags called methyl groups to sites that control gene activity. But these tags — also known as epigenetic marks — tend to get more random over time, leading gene activity to become less precise. In fact, geneticist Steve Horvath of UCLA and his colleagues have found that by assessing the status of a set of almost 800 methylation sites scattered around the genome, they can reliably estimate an individual’s age relative to the maximum lifespan of its species. This “epigenetic clock” holds for all the 192 species of mammals that Horvath’s team has looked at so far.
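In spirit, such a clock is a regression model: a weighted sum of methylation fractions at selected CpG sites that maps to an age estimate. The sketch below is purely illustrative—the three sites and their weights are invented for demonstration, whereas Horvath’s published clocks fit hundreds of sites using penalized regression:

```python
# Toy "epigenetic clock": weights for 3 hypothetical CpG sites whose
# methylation rises with age. These sites and weights are made up for
# illustration; real clocks are fit to ~hundreds of sites by
# elastic-net regression against known chronological ages.
WEIGHTS = [30.0, 20.0, 10.0]

def epigenetic_age(methylation):
    """Estimate age in years from per-site methylation fractions (0..1)."""
    return sum(w * m for w, m in zip(WEIGHTS, methylation))

# Two synthetic samples: a lightly methylated ("young") profile
# and a more heavily methylated ("old") one.
young = [0.1, 0.2, 0.3]
old = [0.5, 0.6, 0.7]
print(f"young ~{epigenetic_age(young):.0f} yr, old ~{epigenetic_age(old):.0f} yr")
```

The point of the sketch is only that a fixed set of site weights, once fit, converts a methylation profile into an age estimate; the biological work lies in choosing the sites and fitting the weights across many individuals of known age.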

Notably, the epigenetic marks of longer-lived mammals take longer to degrade, which presumably means that their genes maintain youthful activity longer. Among bats, for example, the longest-lived species often have the slowest rate of change in methylation, while shorter-lived species change more quickly (see diagram).

As he digs deeper, Horvath is finding that certain methylation sites may predict a species’ lifespan regardless of the age at which he samples them. “To me, this is a miracle,” he says. “Let’s say you go into the jungle and find a new species — could be a new bat or any other mammal. I can tell you pretty accurately the maximum lifespan of the species.” The methylation clues also predict maximum lifespan for dog breeds, which may emerge as an important study organism for aging (see sidebar: “What Rover knows”). These lifespan-related methylations tend to be associated with genes related to development, Horvath finds, though more detailed connections have yet to be worked out. He hopes that this work, which is not yet published, can eventually point the researchers toward genes that are key for regulating lifespan and aging.

Improvements in molecular techniques are already giving researchers more powerful tools to tease out the ways in which extraordinarily long-lived organisms may differ from the ordinary. One promising technique involves sequencing not the DNA in cells, but the messenger RNA. Individual genes are copied into mRNA as the first step in producing proteins, so mRNA sequencing reveals which genes in the genome are active at any given moment. This profile — referred to as the transcriptome — gives a more dynamic view of a cell’s activity than just listing the genes in the genome.

Gladyshev’s team, for example, sequenced the transcriptomes of cells from the liver, kidney and brain of 33 species of mammals, then looked for patterns that correlated with lifespan. They found plenty, including differences in activity levels of many genes involved in cellular maintenance functions such as DNA repair, antioxidant defense and detoxification.


Other paths to old age

More recently, Teeling’s team studied Myotis myotis bats from five roosts in France for eight years, capturing each bat every year and taking small samples of blood for transcriptome sequencing. This allowed them to track how the bats’ transcriptomes changed as they aged and compare the process to that of mice, wolves and people — the only other species for which similar long-term transcriptome data were available. “As the bats age,” Teeling wondered, “do they show the same dysregulation that we would show as we age?”

The answer, it turned out, was no. Whereas the other mammals produced fewer and fewer mRNA molecules related to maintenance functions such as DNA repair and protein stability the older they got, the bats did not. Instead, their maintenance systems seemed to get stronger as they got older, producing more repair-related mRNAs.

Skeptics note that conclusive evidence is still lacking, because the presence of more mRNA molecules does not necessarily mean more effective maintenance. “It’s an important first step, but it’s only that,” says Steven Austad, a biogerontologist at the University of Alabama, Birmingham. Still, the fact that the analysis identified processes that were already linked to longevity, such as DNA repair and protein maintenance, suggests that other genes flagged by this method could be solid leads: “We could then go look at new pathways that we haven’t yet explored,” Teeling says. In particular, the team found 23 genes that become much more active with age in bats but less active in other mammals. They are now looking at these genes with great interest, in the hopes of discovering new levers to alter the course of aging.

One of the principles beginning to emerge from comparative studies of aging is that different species may follow different paths to longevity. All long-lived mammals need to delay the onset of cancer, for example. Elephants do this by having multiple copies of key tumor-suppressing genes, so that every cell has backups if one gene breaks during the wear and tear of life. Naked mole rats, on the other hand, gain cancer resistance from an unusual molecule involved in sticking cells together, while bowhead whales have amped up their DNA repair pathways.

Scientists estimate that the lifespan of bowhead whales is at least 200 years — much longer than expected, even given their size. One reason that they live so long is that they have unusually vigorous DNA repair processes, slowing the accumulation of damage in their genomes. Credit: WWF

Geroscientists tend to view this diversity of solutions as an aid in their quest, not a problem. “That makes our job more difficult, but actually more interesting,” says Austad. “By studying the diversity of ways to achieve slow aging and long life, I think we’re more likely to stumble on things that are more easily translated to humans.”

Can we live longer, healthier lives by learning how to be more like naked mole rats, bats and bowhead whales? Not anytime soon — but the early results from research on these animal Methuselahs show definite promise.

Bob Holmes is a science writer in Edmonton, Canada. Find him at his website.

A version of this article was originally published at Knowable Magazine and has been republished here with permission. Sign up for their newsletter here. Knowable can be found on Twitter @KnowableMag
