Avian flu virus detected in raw milk from infected dairy cows. Are we in danger of it spreading to our food supply? 


Three months ago, we wrote about the surge of the H5N1 bird flu strain, which had by then already killed tens of millions of birds in various parts of the U.S., as well as land and marine animals in other countries. Surprising experts, the flu jumped to cows and goats last month. And now it has been detected in very high concentrations in raw milk from infected animals, the World Health Organization announced late last week.

Although the finding surprised health officials, they say there is at present little concern that the infected raw milk will make it into the food supply. Dairies must destroy milk from sick cows, and it’s believed that pasteurization would kill the virus in milk from cows that have not yet been identified as ill. Federal officials are advising consumers not to drink raw milk or eat cheese made from raw milk.


Just a few days before, a man working on a Texas dairy farm was diagnosed with illness from the avian flu strain. “The case in Texas is the first case of a human infected by avian influenza by a cow,” said Wenqing Zhang, head of the global influenza program at the WHO. It’s only the second known case in the U.S. of a human contracting the disease.

Spread of H5N1 surprises disease experts

Infections by the current virus strain have been increasing since the 1990s as the world poultry population soared to meet escalating food demand. H5N1 avian flu claimed its first known human victims in 1996-97, in China and Hong Kong, spread to Cambodia in 2003, and then reappeared with a vengeance a decade ago.  

According to the WHO, it has killed nearly 60% of the more than 800 people infected between 2003 and 2016. The majority of human H5N1 infections and deaths occurred in Egypt, Vietnam, and Indonesia.

We wrote in January:

Despite limited examples of person-to-person transmission, there are no known examples of widespread, sustained transmission among humans or any mammals for that matter. However, virus evolution called “antigenic shift” could give rise to the emergence of novel viral subtypes able to target mammals.

As often happens in the infectious disease world, circumstances have changed dramatically in just a few months. The discovery of H5N1 bird flu in dozens of herds of dairy cows across the U.S. has sparked worry and a call for more transparency from the government — specifically, the USDA. This strain of bird flu, while not new, had never before been found in cattle. It has now affected herds in eight states, with some cows showing reduced milk production and discolored, viscous milk.

Scientists and public health experts are particularly concerned about two things: the risk of the virus spreading between cows and potentially mutating to readily infect humans (only one case has thus far been found in a dairy worker in Texas); and the paucity of detailed, timely information from the USDA regarding the outbreak.


Transparency concerns

Many experts have criticized the USDA for not being forthcoming with information about the outbreak. Concerns include:

  • How the virus is spreading between cow herds — through cattle movement, contaminated feed or milking machines, or wild birds?
  • Whether the outbreaks in different herds are connected. Are they all linked to a single source in Texas, or are there independent outbreaks happening – perhaps via a new strain of H5N1 in wild birds?
  • The effect of pasteurization on H5N1. Currently, farmers are being told to discard milk from infected cows, although the USDA, the FDA, and the CDC all say they believe pasteurization would kill the viruses. However, that is based on work previously done on other pathogens.

This lack of transparency is hindering the science community’s efforts to understand the outbreak and develop effective control measures. Michael Osterholm, an expert in infectious diseases, argues that clear communication is crucial for managing outbreaks successfully.

“They are creating the perception that something is happening or not happening that would not meet with the public’s approval,” Osterholm said. “And this is really unfortunate. There’s no evidence here that there’s some kind of a smoking gun, that somebody did something wrong. Just tell us what you’re doing. And that’s not happening.”

Risk of spread

There is reason for grave concern if not (yet) panic. As we related in our January article:

The deadliest recent twist is the spread of H5N1 to mammals. Brazil recently reported the deaths of more than 900 seals and sea lions, and thousands more were found dead last summer in Chile and Peru.

Washington State is on alert after dozens of seals showed up dead off the Olympic peninsula, alarming scientists. H5N1 has also infected large numbers of foxes, raccoons, skunks, grizzly bears and dolphins. Local authorities in all of these areas are scrambling to contain its spread, and warning humans not to touch the dead animals.

The H5N1 virus has the potential to mutate and increase its host range. Also, during the past few months, Texas reported its detection in cats from several dairy farms experiencing H5N1 outbreaks in dairy cows.  It is unknown whether the virus spread to the cats from affected dairy cows, raw cow milk, or wild birds associated with those farms.

Public health authorities are concerned that the virus could mutate and become capable of human-to-human transmission, which would be disastrous since it has a case-fatality rate of over 50% in humans.

Another concern is that pigs, which can be infected by both avian and human flu, could be infected simultaneously (coinfected) by avian and human viruses. That could lead to reassortment of portions of the viruses’ genomes, which could give rise to a new strain more transmissible to people.

Looking for answers

USDA’s frequently asked questions document, posted on its website, offers much information about what is currently known, recommended, or underway.


  • Tests so far indicate that the virus detected in dairy cows is “…the same clade [i.e., variant] that has been affecting wild birds and commercial poultry flocks and has caused sporadic infections in several species of wild mammals, and neonatal goats in one herd in the United States.”
  • “The spread of the H5N1 virus within and among herds indicates that bovine to bovine spread occurs, likely through mechanical means. As a result, we are encouraging producers and veterinarians to minimize dairy cattle movement.”
  • “Unlike in poultry flocks where H5N1 is fatal, among the dairies whose herds are exhibiting symptoms, the affected animals have recovered with little to no associated mortality reported.”
  • “Based on information available at this point [April 16], we do not anticipate that this [outbreak] will impact the availability or the price of milk or other dairy products for consumers.”
  • “Recent detections of H5N1 in poultry have slowed. As of April 15, 2024, there have been 26 detections of H5N1 in commercial poultry facilities in 2024, which is like the number in January-April of 2023 (19 detections). Both years are showing significant decreases in the number of detections compared to 2022, when we saw 165 detections in the January-April period.”
  • “At this stage, we do not anticipate the need to depopulate [i.e., cull] dairy herds. Unlike HPAI (H5N1) in birds which is typically fatal, little to no mortality has been reported and the animals are reportedly recovering. The affected cows on the dairy farms are currently being isolated from other animals.”
  • It is noteworthy that avian flu has been detected only in dairy herds, not in beef cattle herds.
  • “FDA’s longstanding position is that unpasteurized, raw milk can harbor dangerous microorganisms that can pose serious health risks to consumers, and FDA is reminding consumers of the risks associated with raw milk consumption in light of the H5N1 detections.”

The current, precarious situation highlights the importance of the rapid accumulation and promulgation of information (that is, transparency) in the management of outbreaks. If the USDA shares information freely and quickly, scientists, public health officials, and farmers will be able to work together to assess and reduce risks.

Henry I. Miller, a physician and molecular biologist, is the Glenn Swogger Distinguished Fellow at the American Council on Science and Health. He was the founding director of the FDA’s Office of Biotechnology. Find Henry on X @HenryIMiller

Kathleen L. Hefferon is an instructor in microbiology at Cornell University. Find her on X @KHefferon

How microplastics impact our food and our health

Small pieces of plastic, now termed microplastics, have infiltrated all ecosystems, posing a severe threat to wildlife…and now us. New research has shown that microplastics — especially their microscopic offspring, nanoplastics — might accumulate within our bodies, too.


Microscopic fibers with massive implications

Microplastic particles measure less than 5mm in size, or smaller than the width of a pencil eraser.

How do these plastics find their way inside us? I’ve never been caught in a hailstorm of plastic beads (and you probably haven’t either). Unfortunately, what we’re talking about here is something smaller…way smaller.

We’re talking about nanoplastics: fibers smaller than 1 micrometer (1 μm), or about the length of a tiny bacterium, or 1/50 the width of a strand of human hair.
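
For a sense of scale, here is a simple unit conversion comparing the two size thresholds mentioned above (the 5 mm cutoff for microplastics and the 1 μm cutoff for nanoplastics):

$$1\ \mu\text{m} = 0.001\ \text{mm}, \qquad \frac{5\ \text{mm}}{1\ \mu\text{m}} = 5{,}000$$

In other words, the upper size limit for a microplastic is roughly 5,000 times that of a nanoplastic.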

Despite their seemingly inconsequential size, nanoplastics pose significant risks.

These barely detectable yet ever-present fibers can pass through biological barriers, like the linings of blood vessels and organs, and, over time, accumulate within the body.

Where do the fibers come from?

Microplastics, including nanoplastics, are ubiquitous because they’re durable and resist decomposition. They are primarily generated through the breakdown of larger plastic items and fabrics, microbeads in personal care products, and a host of other industrial processes.

The Organization for Economic Cooperation and Development (OECD) held a seminar based on reporting from the government of Sweden that found synthetic textiles to be the single greatest contributor to engineered microplastics in the ocean, accounting for 35% of total microplastic volume.

Polyester, nylon, and acrylic – common fabrics used to make 60% of the world’s clothes – are all considered synthetic.

Unfortunately, our typical shopping habits are mostly to blame here, with synthetic fabrics and toiletries making up almost 40% of the total microplastic volume.

These plastic-based fibers shed microplastics at every step along the way, from production, to wearing and laundering, and even during eventual disposal, mostly in landfills. In fact, a 2016 study found that each laundering of a fleece jacket releases an average of 1.7 grams of microfibers, which can end up in the ocean. Nylon, polyester and acrylic clothes all shed microfibers when washed.


Tires are next in line as significant sources of microplastics, followed by city dust. While you’ll find a greater concentration of microplastics around densely populated areas with heavy traffic, industrial activity, and busy commerce, these tiny particulates are adept at world travel.

In fact, scientists recorded 365 microplastic particles per square meter falling daily from the sky in the remote Pyrenees Mountains in southern France.

The path from environment to food

One of the most alarming aspects of microplastic contamination is its presence in what we eat. Microplastics have been found in a wide range of whole foods, including seafood, fruits, vegetables, honey, and bottled drinking water.

Microplastic entry into our food system mostly happens through these channels:

  • Ingestion by animals and seafood that we eventually eat (“trophic transfer”)
  • Soil and plants absorbing degraded fibers from synthetic mulches and films (plastic bags, for example)
  • Airborne fibers that, once settled, are ingested or absorbed by trophic transfer
  • Food processing and packaging at every point, from industrial food and drink facilities to chopping on our polyethylene cutting boards at home

Health risks from ingesting plastics

Several studies have pointed to adverse effects in various parts of our bodies.


“Our research shows that we are ingesting microplastics at the levels consistent with harmful effects on cells, which are in many cases the initiating event for health effects.”

– Evangelos Danopoulos, Hull York Medical School, U.K.

Microplastic release using microwaves

A recent and particularly frightening study from the University of Nebraska demonstrates microwaving’s effect on plastics, compounding concerns raised in previous studies.

The issue comes down to the structure of plastic during production. Simply put, the particles look and behave like cooked spaghetti. You know how cooked spaghetti clumps together when cooling down, but then starts releasing strands when reheated? In the same way, those little spaghetti-like plastic structures are released into our food when plastic gets hot.

But what about plastic containers that read “microwave-safe”? Perhaps they’re not so safe after all. This study found that heat from the microwave can cause plastic containers to break down, releasing small plastic particles into the food or beverage being heated. And not just a few particles: some containers could release as many as 4 million microplastic and 2 billion nanoplastic particles from only one square centimeter of plastic area within three minutes of microwave heating.


But it doesn’t stop at microwaves:

  • Cooking food in plastic containers or using plastic utensils in hot foods can also release microplastics and nanoplastics
  • Refrigeration and room-temperature storage for over six months can also release millions to billions of microplastics and nanoplastics
  • Polyethylene food pouches, commonly used for kids’ applesauce, yogurts, and smoothies, released more particles than polypropylene plastics, which are often used for refrigerated storage containers and restaurant take-out orders

Separately, the researchers also found that microplastics released from plastic containers caused the death of 77% of human embryonic kidney cells. However, more research needs to be done before this can be considered conclusive, as it was a first-of-its-kind in vitro (i.e., test tube) study.

What can we do?

Yes, this information is scary, but don’t fear…we have an incredible food system providing us all with fresh and affordable food choices every day. And plastics do have their place in this system: they reduce food waste by keeping items fresher longer, avoiding cross contamination, and keeping food prices low.

The most important thing you can do to help offset plastics’ negative effects? Plain and simple: eat a balanced diet. Consuming a variety of fresh produce, lean proteins, and healthy fats is the most efficient way to promote healthy digestion, flush toxins from organs, boost cellular activity, and initiate an effective immune response. And, coincidentally, fresher food choices usually have less plastic packaging than their shelf-stable counterparts.

And here are some other habits to incorporate into your daily life.

  • Avoid microwaving plastic by using microwave-safe glass or ceramic containers instead
  • Consider a time-restricted eating schedule that provides your body with a daily rest from digestion so your organs can operate better and with less inflammation
  • Eat foods high in antioxidants, chlorella, and selenium. These nutrients bind to toxins for removal from your digestive system.
  • Limit premade meals that are packaged in plastic and require heating in their container (microwave foods in glass containers instead of plastic ones)
  • Curb consumption of bivalves like oysters, clams, and mussels. When eating these shellfish, you also consume their digestive systems, which harbor more plastics than just about any other food.
  • Reduce plastic use by selecting safer materials, like glass or stainless steel
  • Bring a reusable cup when going to the coffee shop, the gym, work, etc.
  • Filter your tap water to reduce your exposure to microplastics. And don’t drink water from plastic water bottles
  • Reduce canned food purchases since they have thin plastic linings and hold food for extended periods of time

The bottom line

The pervasive presence of microplastics in our food and their release during heating processes like microwaving are concerning issues that demand immediate attention. While the full extent of the health risks associated with ingesting microplastics is still under investigation, reducing plastic use and practicing safe food handling can help mitigate these risks. As consumers, we must remain vigilant and support ongoing research to fully understand and address the impacts of microplastic contamination on our health and the environment.

Hillary E. Kaufman studied at Northeastern University and manages the Dirt To Dinner website. Follow Hillary on LinkedIn

A version of this article was originally posted at Dirt To Dinner and has been reposted here with permission. Any reposting should credit the original author and provide links to both the GLP and the original article. Find Dirt To Dinner on X @Dirt_To_Dinner

GLP Podcast: Anti-vax doctor claims COVID vaccines ‘shed’; Abandon milk and meat for the environment?

Do mRNA COVID vaccines “shed” particles? This anti-vax myth rears its ugly head yet again. Are milk and meat alternatives better for you? Are they better for the environment? Here’s why we need to do more research — and weigh nutrition losses against environmental impact.


Join hosts Dr. Liza Dunn and GLP contributor Cameron English on episode 262 of Science Facts and Fallacies as they break down these latest news stories:

Vaccines have been maligned for years — and a common myth is making the rounds again. An anonymous antivax quack who goes by the name “A Midwestern Doctor” claims that mRNA COVID vaccines “shed”, potentially endangering patients. Vaccine shedding, or viral shedding, happens when a vaccine releases parts of a virus, resulting in the spread of an infection. Has this mystery doc uncovered a secret public health officials don’t want you to know — or are they fanning fears about something that “for all intents and purposes is impossible?” In an abundance of caution, live attenuated virus vaccines (shots that use a weakened version of a virus to train the immune system) are not recommended for severely immunocompromised people (and sometimes family members) because they may cause infection. However, mRNA vaccines are not LAVs. The Pfizer-BioNTech and Moderna COVID vaccines are mRNA vaccines, while the Novavax COVID vaccine is a protein subunit vaccine. None of the COVID vaccines cause shedding. Let’s go through the science one more time.

From scientists to journalists to public health officials, it’s become axiomatic that eating less meat is good for you and the environment. But increasing data, including a new Canadian study, throws cold water on that belief. Meat alternatives can reduce carbon footprint by half, but plant-based milk options are much less impactful — and may have real consequences for our nutrition. Journalist Emma Bryce explains why we need more data — and why this debate is far more nuanced than even many experts appreciate.

Dr. Liza Dunn is a medical toxicologist and the medical affairs lead at Bayer Crop Science. Follow her on X @DrLizaMD

Cameron J. English is the director of bio-sciences at the American Council on Science and Health. Visit his website and follow him on X @camjenglish

Viewpoint: The organic food industry is a $180 billion marketing fraud

As a biomedical scientist, it has never failed to annoy me that the term ‘organic’ has been co-opted to spread misinformation. Before we get into the topic as it relates to foods (and other consumer products), I just want to emphasize that the term organic in chemistry has a VERY different meaning. And we will talk a lot more about that in the future!

Dr. Andrea Love

But since the EWG is out there yet again circulating their fear-mongering “Dirty Dozen” list, it bears explaining what organic actually means.

The organic foods industry is a $181.5 BILLION industry as of 2022, with an expected annual growth rate of 11.2% year over year. This industry didn’t even exist until 2002, and was born out of consumer demand and misinformation. For context, it was worth $26.7 billion in 2010. That is a huge amount of growth for a market that has zero science behind it.

Most people have been misled into believing that organic is superior, and that’s not your fault. Even the American Academy of Pediatrics has figureheads spouting that lie. Of course people wouldn’t spend more money for something that’s equivalent, so clever messaging is used to insinuate that organic foods are superior. And that is reflected in consumer perception. The number one reason that people opt to purchase organic foods is that they believe organic foods are healthier, safer, more nutritious, or otherwise superior.

The reality? Organic foods are not superior, just more expensive

The EWG and other organic activists are deliberately trying to spread misinformation in order to drive people to purchase organic produce, which is, on average, around 50% more expensive than its conventional counterparts.

Organic food is only 5-7% more expensive to produce, so the difference in price is pure profit, and as a result, organic food is almost synonymous with luxury and privilege. Organic farming is at least 22-35% more profitable than conventional agriculture, especially when factoring in the labeling “markup” that is often applied by default: upwards of 30% more on the price tag, straight out of your wallet.

All organic means in the United States is that the produce (or crops) in question are certified to have been grown on soil that had no prohibited substances applied for three years prior to harvest.

So what counts as prohibited substances? Certain synthetic chemicals. Organic farming also prohibits the use of genetically engineered seeds in cultivation. But organic farming uses PLENTY of pesticides – they just have a specific list that they deem appropriate. (I’ll discuss myths about livestock and animal products in a subsequent post)


One of the biggest misconceptions about organic products is that they are pesticide-free. This is false

Organic farming uses plenty of pesticides and fungicides. A Soil Association survey demonstrated that 95% of organic food consumers said their top reason was to avoid pesticides. Sorry to burst the bubble here, folks.

Organic pesticides are merely pesticides that remain chemically unaltered from the state in which they are found in nature. Before you fall into the appeal to nature fallacy trap, remember that the suffix ‘-cide’ means “to kill”. It doesn’t matter whether a pesticide is a natural chemical or a synthetic chemical: they all kill certain things at certain exposures. Remember: the dose makes the poison.

Synthetic pesticides are chemicals that are produced through chemical alteration. But the source of a chemical has zero bearing on its potential harm or safety. In reality, some naturally procured pesticides are deadlier or carry a higher risk than synthetic options. Remember: plants produce lots of noxious chemicals to deter predators from eating them.

Pesticides and herbicides applied to crops reduce exposure to, and damage from, unwanted insects, bacteria, fungi, and weeds.

If we did not utilize pesticides in agriculture, yields of farm crops would fall, the cost of food would skyrocket, and we would not be able to feed the 8.1 billion people on the planet. Organic farming uses 84% more land for the same yield, and its yields per unit of area are 55% lower than conventional farming’s.

Just because something is labeled organic or natural does not mean it is safer to the homeowner or unable to cause harm to the environment. Botanically derived pesticides are not always safer; in fact, some can be more dangerous. — Chris Enroth

There are over 20 chemicals commonly used in the growing and processing of organic crops that are approved by the US Organic Standards. But the volume of chemicals used in organic farming isn’t recorded or monitored, even though pesticides deemed “organic” are generally less effective and so require larger volumes for similar effectiveness.

According to the National Center for Food and Agricultural Policy, the top two organic fungicides, copper and sulfur, were used at rates of 4 and 34 pounds per acre, respectively, in 1971. In contrast, the synthetic fungicides required only 1.6 pounds per acre, from 2.5X to 20X less than the organic alternatives. More than that, many of these organic pesticides are more toxic (when looking at LD50 values), especially when used at the higher levels required for adequate control.
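
As a rough check on that “2.5X to 20X” range, divide each organic application rate by the synthetic rate cited above (the sulfur figure actually comes out slightly above 20X):

$$\frac{4\ \text{lb/acre}}{1.6\ \text{lb/acre}} = 2.5, \qquad \frac{34\ \text{lb/acre}}{1.6\ \text{lb/acre}} \approx 21$$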

LD50 values are one way we measure toxicity – the LD50 is the dose of a substance at which 50% of the test group dies, i.e., the 50% lethal dose. A lower LD50 value means greater toxicity. Remember: you can’t simply say something is toxic – the dose makes the poison. Toxicity depends on dose, which is usually normalized to the body weight of the test organism (hence units like mg/kg).


Lots of things in nature are toxic, so let’s cease and desist with the appeal to nature fallacy. Everything is chemicals, and the source of a chemical does not dictate its safety. More than that, many natural pesticides have the potential to be more harmful to humans, other animals, and the key pollinator species that we rely on.

Examples of organic pesticides include: Nicotine sulfate, Methyl bromide, Copper sulfate, Sodium hypochlorite, Gibberellic acid, Chlorine dioxide, Peracetic acid, Sodium carbonate peroxyhydrate, Lime sulfur, Azadirachtin, Spinosad, Calcium hypochlorite, Veratran D, Lignin sulfonate, Ferric phosphate, Copper oxychloride, Hypochlorous acid, Potassium hypochlorite, Rotenone, and Pyrethrins.

Nicotine sulfate: Nicotine is natural, and thus approved for organic farming to control aphids, thrips, mites and other insects. It is amusing to have seen so many pro-organic campaigners argue against neonicotinoids on the grounds that these synthetic pesticides were based on nicotine. Yes, BUT so were the pesticides used by organic farmers. But how toxic is this natural, organic-approved neurotoxin? Very (LD50: 50-60 mg/kg). In the US, nicotine sulfate carries a Danger warning. It is an organic neurotoxin that interferes with the transmitter substance between nerves and muscles. Tests have shown that nicotine sulfate has caused abnormalities in the offspring of laboratory animals, and a New Jersey State study revealed that nicotine sulfate poisoning of organic gardeners can lead to increased blood pressure, irregular heart rate, and, in certain cases, death.

Rotenone: occurs naturally in the seeds and stems of several plants, such as the jicama vine plant, and has been used copiously for decades. Though touted as being ‘natural’, it is extremely toxic at relatively low doses. It was temporarily discontinued as a pesticide in the US from 2005 to 2010, but was re-approved in 2010. It is also routinely used as a piscicide in fishery waters.

Pyrethrins are derived from chrysanthemum flowers. They act as neurotoxins in all organisms, but are particularly neurotoxic to bees and other insects, many of which are key pollinator species. They can also be neurotoxic to mammals (including humans).

Copper sulfate: used as an “organic” fungicide. Copper sulfate has significantly higher toxicity than synthetic alternatives. The LD50 (50% lethal dose) of copper sulfate is 300 mg/kg versus the synthetic alternative Mancozeb (4500-11,200 mg/kg) — which means that copper sulfate is at LEAST 15X more toxic, and needs to be used in LARGER quantities compared to synthetic alternatives. Not only is copper sulfate toxic to fish, humans, and other species, but it also persists in groundwater and the environment long-term.
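
As a back-of-the-envelope check on that “at LEAST 15X” figure, relative acute toxicity can be expressed as the ratio of the two LD50 values quoted above (assuming they refer to the same route of exposure and test species, and taking the low end of the Mancozeb range):

$$\frac{\mathrm{LD_{50}}(\text{Mancozeb})}{\mathrm{LD_{50}}(\text{copper sulfate})} = \frac{4{,}500\ \text{mg/kg}}{300\ \text{mg/kg}} = 15$$

At the high end of the Mancozeb range (11,200 mg/kg), the ratio rises to roughly 37.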


Regulation and safety monitoring are more stringent for synthetic pesticides

The U.S. Environmental Protection Agency (U.S. EPA) regulates pesticides and has a rigorous process that requires a product to demonstrate no risk to human health if used correctly. All pesticides, including many used in organic agriculture, must go through a registration process requiring a review of data on the safety of the product. These data are used to construct pesticide labels, which anyone who uses them must legally follow.

However, most natural pesticides haven’t been tested for their toxic potential, as the Reduced Risk Program of the US Environmental Protection Agency (EPA) applies to synthetic pesticides only. Many natural pesticides have been found to pose potential – or serious – health risks, including those used commonly in organic farming. This is also why the EWG conveniently “omits” all of the organic pesticide residues: because they are not monitored or regulated as stringently.

Safety data for conventional pesticides include:

  • Evaluating whether the product can cause harm to humans and under what circumstances (i.e., is it toxic to humans after inhalation, ingestion, physical contact, etc.)
  • Evaluating the dose that can cause harm at both acute and chronic exposures
  • Evaluating the exposure (timing and frequency) that may cause harm (e.g., how often a produce item is eaten would determine exposure)
  • Evaluating overall risk, combining information about the dose, exposure, and conditions under which harm may occur

The EPA sets tolerance levels of residues for synthetic pesticides on food (I discussed this more in depth in my piece on chlormequat). For some organically-approved pesticides, no tolerance level is set.

Organic foods are not healthier or more nutritious

There is very little scientific evidence to support any health benefits for organic products. In fact, the growing body of evidence supports the statement that a diet rich in organic products isn’t actually better for you.

A 2009 meta-analysis found no nutrient differences between organic and conventional foods. More recent studies came to the same conclusion. While a 2012 study found slightly higher phosphorus levels in organic produce, and a 2014 study found higher antioxidant levels and lower cadmium levels in organic food, the differences weren’t significant, and the levels don’t translate to clinical or health relevance. The 2012 study concluded there was a “[lack of] strong evidence that organic foods are significantly more nutritious than conventional foods.”

In 2012, another meta-analysis analyzed 240 studies: 17 comparing populations consuming organic and conventional diets, and 223 studies that compared either nutrient levels or the bacterial, fungal or pesticide contamination of various products (fruits, vegetables, grains, meats, milk, poultry, and eggs) grown organically and conventionally.

The authors reported little difference in health benefits between organic and conventional foods, as well as no consistent differences in the vitamin content of organic products. In fact, only one nutrient (phosphorus) was significantly higher in organic versus conventionally grown produce. Protein and fat content were also similar, and none of the differences reported (some of which are due to methodological disparities) were clinically relevant.

Trace levels of pesticides found in urine are frequently used as “evidence”, but have no biological relevance

Finding micro-trace levels of any chemical in urine is meaningless in and of itself. More than 3,000 chemicals can be detected in human urine; almost none poses any harm. Trace chemicals in urine are the residue of the kidneys doing their filtering job. EWG routinely assesses ‘urine levels’ of chemicals, either because they do not understand basic chemistry, or because they are deliberately exploiting the widespread misunderstanding people have about how chemicals are processed by our bodies in order to spread fear.

A study claimed that people who switched primarily to organic foods had a decrease in the pesticides excreted in their urine: but they only looked at pesticides used in conventional farming, not organic pesticides. Of course it stands to reason you’re not going to detect things you’re not testing for! Another study made claims about glyphosate in urine, yet the median glyphosate levels in organic food consumers and individuals with known exposure are essentially identical (390 vs 400 parts per trillion, respectively).

Organic farming is not better for ecology and wildlife

While organic farming is perceived as more environmentally friendly, the data don’t support the claim that organic farming reduces environmental impact.

Organic pesticides are not better for biodiversity.

Some organic pesticides actually have a worse ecological impact than conventional ones. Also, organic farming prohibits GE crops, which can reduce the amount of pesticides needed for effective pest control. Because organic pesticides are not permitted to be altered to improve specificity or biodegradability, many are less effective, can bioaccumulate more, and have a worse ecological impact by killing non-target species, including beneficial insects and soil microorganisms, many of which are natural predators of the target pest in question.

For example, organic pesticides used for aphids can kill multicolored Asian lady beetles and insidious flower bugs, both of which are natural predators of aphids. Many require much higher concentrations to be applied to have similar impacts as conventional pesticides. In addition, because organic farming prohibits GE crops, pesticides need to be applied where, in conventional farming, a crop could be naturally resistant to a pest. A GM blight-resistant potato grown conventionally would not need fungicides like the copper sulfate that must be applied to control blight in organic farming.

Organic food has a larger impact on climate because of the greater area of land required to farm it

Organic farming requires more land use for equivalent yield

On average, organic farming uses 84% more land for similar yield. Organic agriculture yields lower crop productivity due to poorer nutrient availability and less effective weed and pest control. Organic produce generates higher nitrogen leaching, nitrous oxide emissions, and ammonia emissions, and has more acidification potential per unit of product. Organic agriculture is now done mostly by big corporations instead of local producers, so combining lower yields with intensive machinery use means that overall, in terms of emissions and pollution, organic agriculture is usually worse than conventional for the environment.

Since you need more land for similar yield, you have higher rates of habitat and land conversion. This means that in order to meet food demand, you have higher rates of local and global deforestation, encroachment on marginal lands or sensitive habitats, which can further harm wildlife and ecology. Organic farms may actually accelerate habitat loss, deforestation, and biodiversity decline.

Organic farming results in higher greenhouse gas emissions per unit of output

Organic practices rely heavily on organic amendments, such as compost and manure, which release methane and nitrous oxide (both potent greenhouse gases) during decomposition. Organic farming may also necessitate more frequent soil tillage, contributing to soil carbon loss and exacerbating climate change. That’s especially the case when comparing to GE conventional crops, which often don’t require tilling and can minimize soil runoff while improving nutrient retention. A meta-analysis of 71 peer-reviewed studies demonstrated that organic products are worse for the environment: organic milk, cereals, and pork generated higher greenhouse gas emissions per product than their conventional counterparts. A 2018 Nature study confirmed that organic farming leads to higher emissions than conventional farming.

This study found that organic peas resulted in a 50% increase in climate impact due to lower yields. To produce the same amount of organic food, you therefore need a much bigger area of land. The greater land-use in organic farming leads indirectly to higher carbon dioxide emissions, thanks to deforestation.

Organic farming generally consumes more water per unit of yield compared to conventional agriculture

Organic practices rely on natural irrigation methods, such as rainwater harvesting and drip irrigation, which may be less efficient than conventional irrigation systems. Moreover, organic farms often require more intensive manual labor for weed control and crop management, leading to higher water demand for irrigation.

Demonizing affordable and nutritious conventional produce is harmful

The false dichotomy between conventional and organic isn’t just misleading, it’s dangerous. Our constant focus on natural versus synthetic only causes fear and distrust, when in actuality, our food has never been safer.

Eating fewer fruits and vegetables due to fear of pesticides or the high price of organic food does far more harm to our health. Conventional produce has the same nutritional content and is as safe to consume as ‘organic’ produce. Most Americans already don’t eat enough fruits and vegetables, and produce contains important nutrients, fiber, and other substances that are extremely important to our health.

Are we facing an ‘Insect Apocalypse’ caused by ‘intensive, industrial’ farming and agricultural chemicals? The media say yes; Science says ‘no’

The media call it the “Insect Apocalypse”. In the past few years, the phrase has become an accepted truth of the journalism literati, and is usually associated with such apocalyptic terms as “ecosystem collapse” and “food crisis”. The culprit: modern agriculture, which is often linked to the Brave Not-So-New World of GMOs and gene-edited crops and the chemicals purportedly used to support it.

An opinion writer for the New York Times, Margaret Renkl, has warned of the dark ages about to be ushered in by pesticides. She makes a case for preserving “weedy” backyards filled with blood-sucking mosquitoes and other human-threatening flying and crawling creatures of various species.

The global insect die-off is so precipitous that, if the trend continues, there will be no insects left a hundred years from now. That’s a problem for more than the bugs themselves: Insects are responsible for pollinating roughly 75 percent of all flowering plants, including one-third of the human world’s food supply.

Insect Armageddon, another popular phrase, is now one of the most common tropes in science journalism. As I’ve chronicled numerous times in recent years (including here, here and here), many journalists have echoed claims by environmental activists advancing a succession of insect- and animal-related environmental apocalypse scenarios over the last decade—first honeybees, then wild bees and more recently birds. In each case they fingered modern, intensive farming, particularly crop biotechnology and pesticides, as the culprit, and warned of the terrible consequences in store for the Earth, including the mass extinction of pollinators and the global famine that would surely follow. In each case, small or poorly executed studies predicting imminent catastrophes were ballyhooed by many in the media; in each case, as more research came to light, the hyped claims were eventually retracted or dramatically readjusted.


More recently the spotlight has been turned on insects, the result of a handful of studies that vaulted the issue into global prominence. Is this claim, the plight of insects, an example in which the journalists got it right? We should all be frightened…if there are even a few ounces of truth to the common wisdom presented in the Times’ essay.

Fortunately for planet Earth, Ms. Renkl and the New York Times, again, got it very wrong.

You may not have noticed, as the mainstream news mostly ignored the report, but we finally have comprehensive, competent, non-ideological research to help us assess what up till now has mostly been speculative scenarios and agenda-inspired hyperbole disguised as research. A study by German researchers published in Science in April is now widely accepted—among experts—as the largest and most definitive study to date on the “Insect Apocalypse” scenario.

Researchers at the German Centre for Integrative Biodiversity Research, Leipzig University and Martin Luther University led by first author Dr. Roel van Klink analyzed almost a century’s worth of data from 166 long-term insect surveys in various parts of the world. While the far-reaching study has certain limitations (which I will address farther on), it needs to be reckoned with by anyone seriously concerned about the ecological future. A short list of the topline findings:

  • Overall, terrestrial insects are declining much less rapidly (3 to 6 fold less) than other recent high-profile studies had suggested, and even this likely overstates the trend. Freshwater insect populations are actually increasing.
  • “Crop cover,” which means things like corn, soybeans, sorghum, cotton, spring and winter wheat, alfalfa and hay, is associated with increases in insect populations.
  • There is no association between insect population trends and global warming.
  • The only clear association with insect declines is with urbanization, likely caused by habitat destruction, light pollution and waste pollution.

I will unpack each of these findings in a moment, but to understand why they’re so explosive it’s important to briefly review the origins of the various ‘sky is falling’ narratives and why journalists, and even some scientists, consistently get it wrong.


The Bee Apocalypse: The origin story


The narrative kicked off in the mid-2000s with reports of large-scale die-offs of honeybees and the once-in-a-generation eruption of Colony Collapse Disorder (CCD), concentrated mostly in California, that saw adult honeybees mysteriously disappear from hives. This was all very scary, and still eludes a full explanation other than that similar incidents have been documented over hundreds of years.

But the apocalypse narrative ran up against abundant evidence of rising or stable populations of honeybees over the last 25 years, except for a slight dip due to CCD in 2006-7. Honeybees are basically livestock, and governments around the world keep close tabs on the number of hives in each country. Those numbers have been rising on every habitable continent in the world since the mid-1990s, and globally have reached record numbers. After hyping a catastrophe for years in fundraising Buzz Kill scare-a-grams, even the Sierra Club finally admitted in 2018 that

…honeybees are at no risk of dying off.  … The total number of managed honeybees worldwide has risen 45% over the last half century.

Next came claims of a wild bee catastrophe. There are thousands of known species and thousands more we don’t know about. Most are solitary, meaning they don’t form hives. They tend to be very small and they often live in holes in the ground. In sum, they’re hard to count. That didn’t stop a series of dire predictions, including from the Sierra Club as it shamelessly Gish galloped from honeybees to wild bees. The problem for the activists was that the very nature of wild bees means there is almost no data to support their claims. And the species that pollinate crops, and so come into most frequent contact with pesticides, are thriving.

The bird apocalypse never really took flight. After some questionable studies, it soon became clear that earlier bird declines leveled off and even reversed in the 1990s. Besides, the real killer of birds is cats, both feral and domestic, that are estimated to slaughter between 1.3 billion and 4 billion birds annually in the United States alone. Proposing a ban on cats, however, is bound to be unpopular. It wasn’t until the insect apocalypse that doomsayers found a way to get around the problem that the data did not support various ‘imminent collapse’ predictions.

If counting wild bees is well-nigh impossible, getting anything like an accurate gauge on insects is orders of magnitude harder. Estimates of the number of insect species range between 2 million and 30 million. Even in the US, which along with Europe is one of the most studied regions in the world, we’ve only named a little more than half the species thought to exist. Under these circumstances, studies could advance the most extravagant claims—or better yet, make scary predictions—with little fear of being tripped up again by facts. There were numerous studies of insects, of course, but very little systematic work on overall trends.

Endangered insects

The insect decline story first got traction in 2017 with the publication of a study by Hallman and Goulson that purported to find that flying insects had declined 76% over 26 years in certain nature parks in Germany. The Guardian, among those first on the Armageddon bandwagon for honeybees and wild bees, was the loudest in trumpeting the news: “Warning of ‘ecological Armageddon’ after dramatic plunge in insect numbers,” it headlined, explaining in the subhead that this could have “serious implications for all life on earth.”


The story was misreported.

The study’s methodology was seriously flawed. In many cases, the researchers didn’t sample the same sites in subsequent years, making the supposed trends meaningless. They also used the wrong sampling method, large tent-like structures known as “malaise traps,” to capture the insects. As Oxford zoologists Clive Hambler and Peter Henderson have pointed out in relation to other studies, malaise traps only capture flying insects when they are flying, and whether this happens is highly dependent on other variables, especially weather and climate.

Malaise trap used for sampling. Credit: Ceuthophilus/Wikimedia

Goulson, a controversial researcher, used this flawed data as a springboard to launch an attack on conventional agriculture, particularly pesticides. Goulson is a familiar figure to anyone who has followed the debate over neonicotinoid pesticides and bees, notorious for the stridency of his anti-pesticide campaigning and his willingness to produce made-to-order, science-for-hire research for activist groups. Despite the fact that the samples were taken in nature reserves, the purported decline was clearly due to modern farming practices, he explained to the Guardian.

Goulson said a likely explanation could be that the flying insects perish when they leave the nature reserves. “Farmland has very little to offer for any wild creature,” he said. “But exactly what is causing their death is open to debate. It could be simply that there is no food for them, or it could be, more specifically, exposure to chemical pesticides, or a combination of the two.”

Dave Goulson. Credit: David Levene/The Guardian

It’s important to point out that this was merely conjecture on Goulson’s part. The study wasn’t designed in any way to determine the cause of insect declines, if in fact they were happening. And there was no data, flawed or not, to support the claims he made with such assurance.

History repeats itself—as farce

The insect crisis study by Casper Hallman and Dave Goulson made major waves. According to the website Altmetric, it was the sixth-most-discussed scientific paper of 2017, and an inspiration for many thousands of media stories and blog posts. But the crisis claims, sketchy as they were, were just foreplay for a 2018 study that again shook the journalistic rafters.

Francisco Sánchez-Bayo, like Goulson known for his anti-pesticide activism, produced a meta-analysis of global insect population trends that had even the Guardian searching for catastrophic verbs. Clearly in need of a thesaurus to find a synonym for the “plunge” featured in the Goulson-Hallman report, the paper headlined its story: “Plummeting insect numbers ‘threaten collapse of nature.’”

Other news organizations didn’t share the Guardian’s enthusiasm, as I detailed in an article for the Genetic Literacy Project. One issue was the lack of geographic representation for almost every part of the world except Europe and North America (a problem shared by the recent German study just out in Science). But the key blunder reeked of ideological manipulation: the authors eliminated any studies finding stability or increases in insect populations by limiting their search to papers with “decline” in the title. Surprise: The analysis found declines!

Sánchez-Bayo made clear that his goal was far more than studying insects; it was to make the world safe for organic farming: “The world must change the way it produces food,” he told the Guardian. “Industrial-scale, intensive agriculture is the one that is killing the ecosystems.” He laid particular blame on a class of insecticides known as neonicotinoids, under fire by some environmentalists who claim they “sterilize the soil.”


The meta-analysis, however, did not focus on farming. A small number of the studies did have some relation to agriculture. Inconveniently for Sánchez-Bayo, those studies didn’t support his thesis. As I detail in my analysis, he confused one study’s speculation that pesticides might be the cause of bumblebee declines with the actual findings of the study, which didn’t examine causes; and in another case he claimed that a study on bats (which eat insects) found they were less abundant on conventional farms, when in fact the study had found the opposite.

Even the BBC was skeptical of how he massaged the data, which didn’t faze the author:

BBC:  We put these criticisms to Dr. Francisco Sánchez-Bayo. But he says that even if they don’t have the data to prove that claim statistically, that doesn’t mean they shouldn’t make the claim.

Dr. Francisco:  So therefore, even if we don’t have enough data to prove it statistical or whatever, we know that this is happening. So, it’s better to do it now, than not 10 years later when we may have a more serious problem. Yeah. We think the world’s insects could be wiped out in a century from now.

Yeah, statistical, whatever.

In early 2019, another German group of researchers, headed by Sebastian Seibold in Munich, attempted to rectify some of the problems with Hallman’s research, but as Hambler and Henderson pointed out in a critique of the study, “The evidence for a recent decline in arthropod abundance in Germany is not yet robust.” The small number of years sampled vitiated any reliable extrapolation to more meaningful trends. The study authors also did not account for changes in weather or climate trends.

And the way they collected data on insect densities—using sweep netting for grasslands and flight interceptors for forests—rendered the conclusions suspect at best. Both methods measure insect activity, not population size. In the case of sweep netting, in which a net is dragged across the tops of vegetation, the results can be highly variable depending upon the height and density of the plant growth, as arthropods will naturally seek cover closer to the ground. In other words, land with greater plant richness, variety and more natural growth could very well produce sweep netting samples with fewer insects and less variety of insects.

“Overgeneralisation from limited sampling could lead to inappropriate policy responses,” Hambler and Henderson concluded.

But pushback by scientists and more diligent journalists did not make a dent in the popular myths spread so recklessly by the media. Catastrophic insect declines linked to “industrial agriculture” and “wanton pesticide use” were now “facts”. By dint of sheer repetition, a new “science consensus” had been born. The campaign against modern farming and the iconization of organic agriculture were the new narrative norms.

[Editor’s note: This is part one of a two-part series on the “Insect Apocalypse”. Read part two here: Disaster interrupted: Which farming system better preserves insect populations: Organic or conventional?]

Jon Entine is the Executive Director of the Genetic Literacy Project and a life-long journalist with 20 major journalism awards. Follow him on X @JonEntine

How taking LSD to relieve anxiety disorder can work for months after the chemicals leave your brain

Psychotropic drugs are all the rage now as a potential treatment of brain diseases. Examples are ketamine and psilocybin for depression and PTSD. More recently, a single dose of LSD was found to have a long-lasting effect on generalized anxiety disorder. Some simple chemistry may explain how LSD can persist in the brain long after it is gone from the blood.

A new press release from MindMed, a biopharmaceutical company geared toward developing drugs to treat brain health disorders, is intriguing. The company has been running clinical trials on LSD as a potential therapy for generalized anxiety disorder (GAD). It seems to work:

I’ve conducted clinical research studies in psychiatry for over two decades and have seen studies of many drugs under development for the treatment of anxiety. That MM120 [LSD 100 micrograms] exhibited rapid and robust efficacy, solidly sustained for 12 weeks after a single dose, is truly remarkable…These results suggest the potential MM120 has in the treatment of anxiety, and those of us who struggle every day to alleviate anxiety in our patients look forward to seeing results from future Phase 3 trials.

— David Feifel, MD, PhD, an investigator in the MM120 study.

The FDA agrees. Based on Phase IIb results, the agency designated MM120 (a 100 microgram dose of LSD tartrate) as a breakthrough therapy for GAD.

Rather than rehash the limited clinical results, I thought it might be interesting to examine why a single dose of LSD can provide such a long-lasting effect. LSD stays in the brain long after it has disappeared from the blood – an unusual property for drugs. How is this possible? At least part (if not most) of this effect can be explained by some rather simple organic chemistry. LSD “sticks” in the brain for a very specific reason. It’s just like oil and water.

A group from the Department of Pharmacology at the University of North Carolina may have found out why – an unusual mode of binding to a specific receptor in the brain in which the drug traps itself. And it does this because of some rather fundamental chemistry.

In a 2017 paper published in the journal Cell, Bryan Roth, M.D., Ph.D., and colleagues provided details of how LSD binds to specific serotonin receptors (1). The details of this binding can be visualized at the atomic level by using a technique called X-ray crystallography, a powerful tool for examining the chemical structures of molecules and how they interact with targets. More on this later.

But first, let’s clean up the name because, although LSD is called “acid,” it really shouldn’t be, at least according to chemists. In fact, not only is there nothing acidic about LSD but it’s actually basic. (Anyone know why?) But its precursor, lysergic acid, could rightfully be called “acid” even though it is mostly pharmacologically inert.

Figure 1. The structures and names of lysergic acid and LSD. The two molecules are identical except for the atoms in the blue and red circles.

Lysergic acid, a natural product made by the ergot fungus, has, as the name implies, a carboxylic acid functional group (blue circle). But lysergic acid is not LSD. LSD is not a natural chemical (3); instead, it is synthesized in a lab from lysergic acid. The correct name for LSD is LySergic acid Diethylamide. But the term “dropping amide” just doesn’t sound right, so users took some psychedelic license.

How to visualize molecules and receptors — X-ray crystallography

X-ray crystallography (4) is a science unto itself. It is highly specialized and technical. An organic chemist may be able to understand and use the images generated by powerful computers that assemble and process huge amounts of data, but we never really understand the guts of the process, let alone have the knowledge to perform it. This also makes it rather difficult to write about. Here goes nothing.

Roth’s group enabled us to see exactly why LSD behaves as it does, and it is very cool. The expression “keep a lid on it” is certainly germane here. The image below (Figure 2) shows a simulation of how a very small but very critical part of the 5-HT2A serotonin receptor – the target of LSD – works (5). The light blue represents a portion of the receptor, the binding site. The dark blue area is an unusual structure called a lid. As the name implies, the lid can either let something out or keep it in. In the open form (left), molecules can enter and leave the binding site. In the closed form (right), you can see a molecule (red), LSD itself, trapped inside and looking rather eager to escape. LSD holds the lid closed, which is why it stays in your brain longer than you might expect.

Figure 2. The serotonin receptor that binds LSD. (Left) Open form. The yellow oval represents the space in the open form that serotonin agonists or blockers can enter and exit. (Right) Closed form. The same receptor with the lid being held closed by LSD, trapping the drug inside the receptor. Source: Cell

If this isn’t cool enough, let’s take a closer look, all the way down to the atomic level. Figure 3 shows the precise interaction between the diethylamide portion of LSD and a single amino acid in the receptor, leucine (2). LSD “finds” its serotonin receptor and attaches to its binding site in such a way that the diethylamide from LSD and the leucine from the protein are in close proximity. The interplay between the diethylamide part of LSD (pink) and the leucine fragment of the lid protein (red) is shown by the green arrow. They attract each other.

Figure 3. (Left) The interaction (green block arrow) between the diethylamide part of LSD (pink) and a leucine component of the receptor (red). (Right) A closeup of the same interaction. L209 refers to the 209th amino acid in the protein chain, which consists of amino acids chemically bound to each other. “L” is shorthand for leucine. Source: Cell

Why should the diethylamide and leucine attract each other? It’s Chemistry 101 – like attracts like. Figure 4 shows the chemical structure of both fragments and why they “like” each other. Amino acids can be broadly categorized as “hydrophobic,” aka lipophilic (hates water but likes oil), or “hydrophilic” (likes water but doesn’t like oil). When an amino acid’s side chain contains only carbon and hydrogen atoms, it is considered hydrophobic. Leucine (blue circle) is one of these. The four carbon atoms in the diethylamide part of LSD are also hydrophobic. Like attracts like, so these two groups attract each other. It is this attraction that holds the lid closed. Conceptually, this is no different from oil and water. It is also the same reason why red blood cells become distorted and function poorly in people with sickle cell anemia, something I recently wrote about.

Figure 4. (Top) Leucine contains four hydrophobic carbon atoms – they prefer oil to water. (Bottom) The diethylamide fragment of LSD also contains four hydrophobic carbon atoms. This is why they “stick” together.

It is well known to chemists that the properties of a drug can vary greatly depending on only a handful of atoms. It would be difficult to come up with a better example.

Notes:

(1) There are 14 subtypes of serotonin receptors, found in different locations throughout the body, not just the brain, and they produce a wide array of responses. For example, Zofran, a drug invented to control chemotherapy-induced nausea and vomiting, acts as an antagonist, blocking the type 3 receptor (5-HT3), while Prozac acts as an agonist at the 5-HT2C receptor.

(2) Proteins are long chains of amino acids chemically bonded together. The properties of the protein depend greatly upon the characteristics of the amino acids. There are 20 different amino acids that make up proteins, and they have very different properties. These different properties are responsible for the enormous number and variety of proteins.

(3) The classification of chemicals as “natural” vs. “synthetic” is a distinction without a difference. As we at ACSH have maintained forever, it makes no difference where a chemical comes from.  The structure of the chemical and its properties determine its pharmacological profile, not its origin.

(4) ChatGPT: “X-ray crystallography is a technique used to determine the atomic and molecular structure of crystalline materials. It involves exposing a crystal to a beam of X-rays, which are scattered by the electrons in the crystal lattice. By analyzing the pattern of scattered X-rays, scientists can determine the arrangement of atoms in the crystal. This information is crucial for understanding the properties and behavior of materials at the atomic level. X-ray crystallography has wide-ranging applications in chemistry, biology, physics, and materials science, and has played a key role in advancing our understanding of the natural world.”

(5) 5-HT is short for 5-hydroxytryptamine – another name for serotonin.

Sources:

Original Paper: Cell, Volume 168, Issue 3, p377–389.e12, 26 January 2017;  https://doi.org/10.1016/j.cell.2016.12.033

Dr. Josh Bloom is Executive Vice President of the American Council on Science and Health. He has published more than 60 op-eds in numerous periodicals, including The Wall Street Journal, Forbes, and New Scientist. Follow him on X @JoshBloomACSH

A version of this article was originally posted at the American Council on Science and Health and is reposted here with permission. The American Council on Science and Health can be found on X @ACSHorg

Nature’s lost scents: Perfumes made to replicate extinct plants provide an olfactory glimpse into the past


Enchant your loved ones with nature’s lost scents, revived through biotechnology and perfume artistry.

When that popped up on Facebook, I was intrigued. So I clicked.

“Meet Invisible Woods: a clean, refreshing scent revived from extinct flower DNA,” the site announced beneath an image of the “origin flower,” Wendlandia angustifolia.

A quick search revealed that this plant had been presumed extinct, until one popped up in a 1998 survey of its natural habitat in Tamil Nadu, India. Invisible Woods is not really “revived,” but “reimagined,” using clues from ancient flowers and the tools of biotechnology.

Wendlandia angustifolia

Future Society offers six fragrances inspired by past plants, for $98 per 50 milliliters (a little under 2 ounces) or a $35 sampler ideal for a stocking stuffer. Boston-based Ginkgo Bioworks provides the expertise in genetics.

I don’t use scented products other than Pine-Sol, so this was all new to me. DermNet defines “fragrance” as a combination of organic compounds that produces a distinct smell, whereas a perfume is a liquid mixture that emits a pleasant odor and is oilier than a fragrance. I don’t exactly get the distinction, but apparently perfume is the oilier of the two, and perfume, cologne, and aftershave are all fragrances.

Before I dig into the science, I’ll relate my experience with a quiz on the Future Society website designed to help me choose a product. I clicked on the “friend” option and chose my bestie, Wendy.

Credit: Future Society

Next, the screen asked me to describe Wendy. I chose “confident,” ahead of creative, strong, and calm. Since she and I like to hike, I next selected her preference for “woodsy” over floral, sweet, or spicy, although spicy was a close second. Next question, easy peasy – her favorite season is summer. Finally, the website asked me to imagine Wendy’s fragrance, choosing among “enchanted garden,” “mystical forest,” “tropical paradise,” and “Zen retreat.” Definitely Zen retreat, because Wendy meditates.

The website returned Solar Canopy eau de parfum. Magically deducing that Wendy likes to take trips, the site describes Solar Canopy as being “For the joyful traveler: a sunny scent transporting them to their next beachside destination. Top: Bergamot, Red Currant, Pink Pepper; Mid: Turkish Rose, Lychee, Pistachio, Magnolia; Base: Vetiver, Ambroxan, Pink Sugar.” We both love pistachio, so I’m sold.

On to the science.

Bringing back extinct or threatened species – or just their molecules

“What if we could grow everything? Biology can,” read Future Society ads and the opening screen of Ginkgo Bioworks’ webpage.

To recreate a scent from the past, Ginkgo scientists use recombinant DNA technology to:

  • identify a gene with an intriguing product, such as an enzyme required to produce a molecule that a human nose interprets as a fragrance (to the plant, the molecule serves to attract insect pollinators).
  • compare the DNA sequence to similar genes in other species.
  • tweak a lab-made copy of the DNA sequence using clues from other plant species.
  • introduce the synthetic gene into single cells to scale up production of the enzyme. Yeast and bacteria are single-celled organisms useful in biotech.
  • use mass spectrometry to identify the product and confirm that it is the fragrance molecule that was sought.

Presumably at some point a functioning human nose is required, too. Not all recipes turn out as expected.
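For readers who want to see what the “compare the DNA sequence to similar genes in other species” step boils down to, here is a minimal, purely illustrative Python sketch. It scores a hypothetical fragment of the target enzyme gene against invented homologs by percent identity; real pipelines use alignment tools such as BLAST, and none of the sequences or species names below are Ginkgo’s actual data.

```python
# Toy illustration of the "compare to similar genes in other species" step.
# Real work uses alignment tools (e.g. BLAST); sequences here are invented.

def percent_identity(a: str, b: str) -> float:
    """Percentage of positions with the same base, over the shorter sequence."""
    n = min(len(a), len(b))
    if n == 0:
        return 0.0
    matches = sum(1 for x, y in zip(a[:n], b[:n]) if x == y)
    return 100.0 * matches / n

# Hypothetical fragment of the target enzyme gene recovered from herbarium DNA,
# compared against homologs from living relatives (all placeholders).
target = "ATGGCTTCAGGAGTCCTGAAGTGA"
homologs = {
    "relative_species_1": "ATGGCTTCAGGAGTCCTCAAGTGA",
    "relative_species_2": "ATGGCATCAGGTGTCCTGAAATGA",
    "distant_species":    "ATGAAATCCGGAGACCTTAAGTGA",
}

for name, seq in sorted(homologs.items(),
                        key=lambda kv: -percent_identity(target, kv[1])):
    print(f"{name}: {percent_identity(target, seq):.1f}% identity")
```

In this toy version, the closest homolog would be the best guide for “tweaking” the lab-made copy of the gene described in the list above.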

Resurrecting floral scents seems more useful than other tweaks to ancient DNA, a field that dates to 1984. Why defrost mammoths to refashion the great beasts in the uteri of modern elephants? Why bring back the heath hen, an extinct creature I’ve seen behind glass at the Martha’s Vineyard museum? Does the world really need more chickens?

But ancient DNA can be used, perhaps, to slow extinction. Revive and Restore is a wildlife conservation organization striving “to enhance biodiversity through the genetic rescue of endangered and extinct species.”

Their roster includes the aforementioned heath hen and woolly mammoth, as well as the black-footed ferret, passenger pigeon, sea stars succumbing to “wasting disease,” binturongs (civet relatives) being illegally traded as pets, inbred sea otters, quaggas, Przewalski’s horse, climate-stressed Joshua trees, and kelp forests and their resident blue mussels, sea dragons, and other creatures.

Revive and Restore also lists the horseshoe crab, whose numbers are dwindling because a chemical in their blue blood has been used for many years to detect bacterial contamination of medical products. This news apparently hasn’t reached the shores of Martha’s Vineyard, where every summer I watch the armored animals mate in shallow water.

Perusing patents

Once I’d clicked on the initial Future Society ad on my feed, I was of course bombarded. I reveled in the colorful, creative names and descriptors: “reclaimed fame,” “invisible woods,” “grassland opera.” I could imagine myself, inhaling rapturously, in a Permian forest some 260 million or so years ago.

The branding folks seem to have had a good time evoking those images of ages past, deeming the recreation of ancient molecules “scent surrections.” But much of the verbiage is touchy-feely murky:

Future Society is a vision of a more reciprocal world where we can change outcomes that impact our collective good by acting as our individual best selves.

Huh? I know what recombinant DNA means, but not that. As a biologist, all I could think was that when an organism becomes extinct, there’s a reason. It no longer fits in with a changing world, be that climate change, a novel infectious disease, mutation, a comet crash, or competition from an introduced species.

Still, I wanted to follow up. But instead of reading more ad copy evoking memories of Dorothy growing faint in the field of poppies, I searched Ginkgo Bioworks’ patents. What was the company behind the brave new fragrances up to?

They “design custom organisms that bring new products to life for countless applications.” The 46 granted patents begin in 2013. The technology programs single-celled organisms to crank out organic molecules of use to us, such as the enzymes required to synthesize sugars, fats, drug precursors, fine chemicals, and food ingredients like emulsifiers. (“Organic” in chemistry means carbon-containing, a definition long preceding the popular meaning.)

It’s complicated.

Some interventions tinker with promoters, the DNA sequences that start genes and can control the expression of whatever gene they are attached to. Also useful to borrow are signal sequences, bits of proteins that direct the movement of other proteins. These strategies are used to tweak certain bacteria to produce organic molecules of interest.
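One rough way to picture this is to treat an expression construct as ordered, swappable parts: a promoter, a signal sequence, a gene of interest, and a terminator. The sketch below is purely illustrative; the part names and sequences are invented placeholders, not anything from Ginkgo’s patents, and a real design would fuse the signal peptide in frame with the coding sequence.

```python
# Purely illustrative: an expression construct as ordered, swappable "parts".
# Names and sequences are invented placeholders, not real constructs.

from dataclasses import dataclass

@dataclass
class Part:
    name: str
    sequence: str  # DNA bases, written 5' to 3'

def assemble(*parts: Part) -> str:
    """Concatenate parts in the order a cell would transcribe them."""
    return "".join(p.sequence for p in parts)

promoter   = Part("strong_promoter",  "TTGACAGCTAGCTCAGTCCTATAAT")  # drives expression
signal     = Part("secretion_signal", "ATGAAAAAGACAGCTATCGCG")      # routes the protein out of the cell
enzyme     = Part("fragrance_enzyme", "GGTGCTTCAGGAGTCCTGAAGTGA")   # hypothetical enzyme of interest
terminator = Part("terminator",       "AATAAAGCGGCC")               # ends transcription

construct = assemble(promoter, signal, enzyme, terminator)
print(f"Construct: {len(construct)} bp (promoter + signal + enzyme + terminator)")
```

Swapping in a stronger promoter or a different signal sequence, without touching the enzyme itself, is the kind of tinkering the patents describe.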

One patent deals with the biosynthesis of cannabinoids and their precursors. Another harnesses chemicals called mogrosides from the gourd vine Siraitia grosvenorii, used in the manufacture of sugar substitutes. Yet another application boosts oleic acid production in yeast to create oil sprays like Pam, and to stabilize drugs.

It’s not until patent number 46, granted about a year ago, that fragrance seems to enter the picture. It covers “enzymes and/or binding polypeptides useful for protecting polymers from damage caused by fatty acids from secreted biological fluids such as sebum or sweat.” Sebum is the sticky stuff on human skin.

But where were the fragrance patents? The ones that begat the pretty 50-milliliter bottles all over Facebook? Filed, but not yet issued? Indeed, the company has a cornucopia of patent filings, grants, and published papers from just the third quarter of 2023. And fragrance is only part of the picture.

How did the idea of ancient scents bloom?

Ginkgo Bioworks’ co-founder Jason Kelly had the idea to “recapture the smell of an extinct flower,” according to a feature in Nature Biotechnology.

In 2016, Ginkgo’s creative director Christina Agapakis sampled flowers from three otherwise extinct plants from the herbarium at Harvard. The flowers then appeared in art installations called Resurrecting the Sublime.

Three species inspired the first reborn scents.

Hawaiian hibiscus (Hibiscadelphus wilderianus Rock) festooned ancient lava fields in Maui until cattle ranching stamped them out by 1912. Multidisciplinary artist Alexandra Daisy Ginsberg, scent researcher and artist Sissel Tolaas, and Ginkgo Bioworks brought back the ancient floral scent in 2019.

Orbexilum stipulatum, aka largestipule leather-root or Falls-of-the-Ohio scurfpea, was last seen in Kentucky in 1881; a dam later destroyed its habitat.

Leucadendron grandiflorum was a shrub that grew on Wynberg Hill, behind Table Mountain in Cape Town, until encroaching colonial vineyards wiped it out. The last known specimen was in a London garden in 1805.

But a complex, multicellular organism like a flowering plant is more than the sum of its genes and the molecules they control. So although scientists can recapitulate, recombine, and refine the molecules that provided a fragrance to an extinct flower, they can’t recreate the context in nature.

What did the fragrance do to enhance survival or reproduction in the ancient plant? Or was it a by-product of some other function? Did several odoriferous molecules interact? Was a fragrance vestigial? Hibiscus flowers, for example, don’t naturally emit a scent, because birds pollinate them.

In a broader sense, it’s comforting to realize, in this time of accelerated climate change, that geneticists of the future will be able to extract and recapitulate today’s biochemistry, using genetic instructions.

Ricki Lewis has a PhD in genetics and is the author of the textbook Human Genetics: Concepts and Applications, soon to be published in its fourteenth edition. Follow her at her website www.rickilewis.com or X @rickilewis

A version of this article was originally published at PLOS Blogs and has been republished here with permission. PLOS can be found on X @PLOS

Viewpoint: Here’s why the EPA needs to relax regulations that make it harder for farmers to access pest-resistant biotech crops

Plant pests and diseases have a massive global impact, causing the loss of 20–40% of crop production and costing over USD 220 billion annually. These losses can threaten food security, contribute to climate change, and create financial burdens on farmers.

For example, citrus greening disease, first detected in Florida orchards in 2005, caused over $1 billion in annual losses by 2008. The disease has now spread to most citrus-producing states in the US, where it kills trees within 3 to 5 years and still has no effective cure. Recently, several potential genetic solutions for citrus greening disease have emerged, but new regulations could make it harder for them to reach the market.

Citrus greening disease and the regulatory hurdles the industry faces in curbing it are a microcosm of similar issues throughout US agriculture.

The use of CRISPR — a relatively new tool that can make small, precise changes to an organism’s DNA — allows scientists and plant breeders to respond quickly to constantly evolving agricultural pests by creating a wider variety of disease resistant crops. The problem is that EPA regulation of CRISPR-edited crops may be too burdensome for most to reach the market, depriving farmers of important tools to protect their crops’ health.

In 2023, the US Environmental Protection Agency (EPA) finalized controversial new regulations making it harder for farmers to get new crops that resist disease and help safeguard agricultural production. The new rule changes the regulation of disease- and pest-resistant crops that EPA calls plant-incorporated protectants (PIPs). EPA regulates PIPs including crops with biotech traits that do things like create a toxin that kills pests or strengthen the plant’s immune system for fighting disease; however, EPA does not regulate any PIPs with traits created using only conventional breeding.

EPA overregulation of PIPs discourages innovation in new pest- and disease-resistance and plant-regulator traits, and hurts the ability of US agriculture to continue growing crop yields. Without these innovations, farmers are left with fewer tools to prevent production losses, especially those growing specialty crops like fruits, vegetables, and nuts, which are already harder to improve. Improvements in crop genetics have contributed roughly half of historical yield gains, and biotechnology is an increasingly important tool. Continuing to increase crop yield growth can help decrease food prices, limit greenhouse gas emissions from food waste, and reduce deforestation.

Follow the latest news and policy debates on sustainable agriculture, biomedicine, and other ‘disruptive’ innovations. Subscribe to our newsletter.

Regulatory overreach

Historically, EPA has applied PIP oversight to a narrow range of traits, but the new rule has a dramatically wider scope. EPA has registered over 100 PIPs so far, with the majority being genetically modified insect-resistant Bt crops — mostly corn, cotton, and soy. Bt crops have been cultivated in the US for decades and have improved pest control, increasing crop yields and reducing pesticide use. Outside of Bt crops, EPA has registered PIPs including RNAi for rootworm control in maize; viral coat proteins for disease resistance in papaya and plum; defensin proteins to fight the bacteria that cause citrus greening disease; and a gene for resistance to the fungal-like pathogen that causes potato late blight.

In addition to pest- and disease-resistance traits, EPA’s new rule could also make it harder for plant regulators to reach the market. Plant regulators can encompass a broad range of crops with changes in traits like plant height or flowering time, which are not primarily pest- or disease-resistance traits and require different regulation.

Such a wide-ranging regulatory scope runs counter to previous EPA regulatory practices for PIPs. In 2001, when EPA exempted conventionally-bred PIPs, the agency effectively stopped regulating most disease resistance and plant regulator PIPs without genes from other species because they were all made using conventional breeding. The new rule, however, provides no exemptions for disease resistance or plant regulator traits if they are made using gene editing, meaning EPA could apply full regulatory oversight. This amounts to a large category of products that are often very unlikely to have negative effects on non-target organisms. For example, many plant regulator traits are crucial for increasing crop yields. Pushback on the new rule from industry includes the concern that traits such as short stature in wheat plants could be regulated by EPA as a PIP if they were created using gene editing, even though the same trait created using conventional breeding would be exempt.

The two new categories of exemptions in the rule aim to capture PIPs that EPA considers low risk, particularly those that could have been created using conventional breeding. However, these categories — “loss-of-function PIPs” and “PIPs created through genetic engineering from a sexually compatible plant” — neglect to include many low-risk disease resistance traits that should not be subject to EPA PIP oversight. To make matters worse, USDA also determines exemptions by what could have been achieved using conventional breeding, but the two agencies use different definitions.

In addition, the submission requirements for some PIPs that the agency deems low risk are far too extensive. In order to get EPA confirmation of exemption for “PIPs created through genetic engineering from a sexually compatible plant,” the applicant must submit information on the biology of the plant, the pesticidal trait, molecular characterization, and history of safe use. EPA reviews the application and notifies the applicant of the product’s regulatory status within 90 days of submission. The molecular-characterization component, which requires nucleic acid sequence comparison, could be particularly difficult for PIPs in specialty crops, whose genomes are less thoroughly sequenced.

In comparison, the submission requirements for other PIPs that the agency deems low risk are more limited. Requirements for self-determination of exemption for these “loss-of-function PIPs” are relatively simple, and could be worthwhile for transparency purposes. In order to get a self-determined exemption for “loss-of-function PIPs,” the applicant must submit information including name and contact information, identity of the recipient plant, unique identifier for the native gene from NCBI, and trait type; this information will be added to a public database of PIPs submitted to EPA. The time involved in receiving this exemption is also minimal because the electronic portal automatically responds to the applicant confirming receipt, after which the exemption is valid.
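To make the contrast concrete, a self-determined “loss-of-function PIP” exemption amounts to filing a small, structured record. The sketch below is purely hypothetical: the field names and values are mine, drawn from the information the rule is described as requiring above, and they are not EPA’s actual submission schema.

```python
# Hypothetical sketch of a self-determined "loss-of-function PIP" exemption record.
# Field names and values are illustrative only, not EPA's actual schema.

submission = {
    "applicant": {
        "name": "Example University Plant Breeding Lab",  # applicant name
        "email": "breeder@example.edu",                   # contact information
    },
    "recipient_plant": "Solanum lycopersicum (tomato)",   # identity of the recipient plant
    "native_gene_id": "NCBI GeneID (placeholder)",        # unique identifier for the native gene from NCBI
    "trait_type": "disease resistance via loss of function",
}

# As described above, this record would go into a public database of PIPs
# submitted to EPA, and the portal's automatic acknowledgment of receipt
# would make the exemption valid.
for key, value in submission.items():
    print(f"{key}: {value}")
```

The point of the comparison is that one exemption route asks for roughly this much information, while the other demands molecular characterization and sequence comparison.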

Minimal submission requirements could support transparency for stakeholders while keeping the burden of regulation low. Transparency is important to build stakeholder trust and to ease trade between countries with different regulations. Even for gene-edited crops with genetic changes that “could have been made using conventional breeding,” definitions and regulations still vary across countries. Minimal submission requirements focused on plant, trait, and mechanism of action — such that re-submission is not required for slightly different genetic changes with the same result — make agency notification of exempt products more appropriate.

There are few estimates of the cost of regulatory compliance under the new rule, but many stakeholders are concerned. The rule itself estimates a reduction in registration costs of $472,000–886,000 per newly exempt product, but does not cite typical registration costs for non-exempt products for comparison. A fact sheet from the American Seed Trade Association cites a biotech-specific regulatory cost of up to $3 million and 3 years per non-exempt edit, but does not cite the comparative reduction in costs for newly exempt edits. The same source cites the cost and time under the Canadian regulatory system as $0. Agricultural industry groups and researchers have raised concerns that EPA regulation of gene-edited PIPs will be too expensive and time consuming, and will thereby decrease innovation from small developers (like university labs and start-ups) and in specialty crops (which comprise most fruits, vegetables, nuts, and more).

A coalition of organizations — including US agricultural associations for both row and specialty crops, and industry and research associations — wrote a letter to the leadership of the US House and Senate Committees on Agriculture opposing higher regulatory burdens for gene-edited crops than for their conventionally bred equivalents, and requesting that Congress direct EPA to withdraw the rule. In addition, academics and plant breeders commented on the draft rule with concerns about the narrowness of the exemption categories. These efforts to withdraw the rule will likely continue into 2024 as Congress continues to negotiate a spending package and a bipartisan farm bill.

How to make EPA PIP regulations more effective

The new EPA PIP rule should be changed in four ways to make regulation more proportional to risk, adaptable to future technologies, supportive of innovation — especially by small developers and in specialty crops — and a more efficient use of resources.

First, the USDA and EPA rules for biotech crop regulation ought to use the same definition of conventional breeding. Both agencies base exemptions on the type of genetic change and whether it could have been made using conventional breeding, but they use different definitions. In the new PIP rule, EPA defines very narrow PIP exemptions by loss of function or added genetic material from a sexually compatible plant. In comparison, USDA’s 2020 SECURE rule has one similar exemption category — addition or recreation of a gene present in the plant’s gene pool — but also two additional categories that together are more inclusive than EPA’s loss-of-function exemptions. Considering USDA has had years of experience with these definitions of conventional breeding, EPA should follow USDA’s lead and change its definitions of conventional breeding in PIP regulation to match.

Second, EPA should narrow the scope of plant regulators and disease resistance traits that are subject to PIP regulation. The new rule did nothing to change the definition of plant regulator PIPs, which has been overly capacious since it was written. The definition of plant regulator includes a physiological mode of action and the intention to change the rate of growth or maturation “or otherwise altering the behavior of plants or the produce thereof”. This could include changes in traits like plant height or flowering time, which would not reasonably be considered protectants or fit within the scope of EPA’s authority to regulate pesticides.

In 1994, EPA proposed, but never finalized, a rule exempting many types of plant regulator and disease resistance traits, such as those that inhibit pests from attaching to the plant’s leaves. This would have focused oversight on PIPs that have a generalized toxic mechanism of action, which are the most likely to have undesired effects on non-target organisms. EPA must revive that proposed rule today.

Third, EPA ought to reduce the number of levels of exemptions in the rule. Currently there are four levels of exemption (one of which is full exemption) that all have different requirements, adding confusion to the process. If a category of traits is considered low risk, then it should be exempt; if not, it should be subject to full oversight.

In order to incorporate flexibility and consider different levels of risk within non-exempt products, EPA could have a two-tiered system of review similar to USDA’s: a first tier to assess any possible pathways for risk, and a second to assess the likelihood and degree of risk. Narrowing EPA regulation to just PIPs that have a generalized toxic mechanism of action would prioritize oversight of products with the most potential for risk.

It’s important to note that premarket regulations are not the only regulations that apply to PIPs, though they do inhibit innovation the most. Other, post-market regulations protect farm workers, the environment, livestock, and consumers from negative impacts of pesticides, including a requirement to report any negative effects of a product to EPA — whether a conventional chemical pesticide or a PIP — for the entire time the product is on the market.

Fourth, EPA should create a way to continue adapting PIP regulations in the future. The rule should leave room for both the agency itself and stakeholders to propose a broader scope of new exemptions that could be added to the rule. Currently, the final rule says that any new categories of exempt PIPs added “would be required to fall within the previously defined scope of exempt PIPs, i.e., those that can be created through conventional breeding”. This means that new exemptions could not be for categories of PIPs like those with non-toxic modes of action, which are not defined by whether the genetic change could be created using conventional breeding. Continuing to compare new genetic changes to what could be achieved through conventional breeding unnecessarily limits innovation and is a poor proxy for risk potential.

Ultimately, EPA regulation of PIPs — like all regulation of biotech crops at EPA, FDA, and USDA — should be based on the traits of the product and the risks they pose, rather than the method of genetic engineering. A wide variety of authorities acknowledge that the processes of gene editing and genetic modification do not introduce any new or unique risks compared to conventional breeding.

In EPA’s case, risk-based regulation could be accomplished by limiting PIP oversight to those that have a generalized toxic mechanism of action. This change would capture some but not all genetically modified traits, and potentially a small number of conventionally bred and gene edited traits. In addition, it would be a more effective way to focus regulatory attention on PIPs that have greater risk potential.

In contrast, the current rule’s exemptions capture a much smaller number of PIPs and maintain unnecessary regulation over many that EPA itself acknowledged in proposed 1994 regulation have very low risk potential.

The downsides of overregulation here are substantial: leaving farmers with fewer tools to fight pests and diseases and to increase yields, thereby increasing food waste, greenhouse gas emissions, and deforestation.

Emma Kovak is a senior Food and Agriculture Analyst at Breakthrough. Find Emma on X @EmmaKovak

A version of this article was originally posted at the Breakthrough Institute and is reposted here with permission. Any reposting should credit both the GLP and original article. The Breakthrough Institute can be found on X @TheBTI

Do the MAOA and CDH13 ‘human warrior genes’ make violent criminals—and what should society do?

The warrior gene is back. And he’s brought along a buddy. This new research on a gene long associated with aggressive behavior raises an old question: What can–or should–be done about genetic predispositions that lead to grim social consequences in only some of the people with the predisposing genes?

The usual response, picking holes in individual research projects and denying that genes are ever involved in bad behavior, is just not good enough. We need to get serious about figuring out how to interfere with noxious genetic susceptibilities in ways that are fair and decent for everybody.

The so-called warrior gene comprises particular variations in the X chromosome gene that produces monoamine oxidase A (MAOA), an enzyme that affects the neurotransmitters dopamine, norepinephrine, and serotonin. The variants, known collectively as MAOA-L, produce human MAOA “knockouts” with a low level of the enzyme.

MAOA was the first candidate gene to be linked to antisocial behavior, identified in 1993 in a large Dutch family that was notorious for violence. It has been a media favorite ever since, acquiring the nickname “warrior gene” in 2004 as a result of an article in Science, of all places. This I learned from John Horgan’s fine rant about the exploitation of MAOA genetics at Scientific American, which describes weaknesses in the research.

The most recent appearance of MAOA-L is in a paper published a week ago in Molecular Psychiatry by a host of researchers based mostly in Finland. It showed that Finnish criminals convicted of several violent crimes frequently possessed either MAOA-L or a mutant version of another gene, CDH13, while the nonviolent controls did not. Find details in John Gever’s piece at MedPage Today.

CDH13 is involved in signaling between cells. Previous research has linked it with attention deficit/hyperactivity disorder (ADHD), autism, schizophrenia, substance abuse, and bipolar disorder. So far as I know, this is the first time it has been associated with violent criminality. I will ignore it for the rest of this piece because I want to focus on MAOA and its long history of being connected with aggressive behavior.

Follow the latest news and policy debates on sustainable agriculture, biomedicine, and other ‘disruptive’ innovations. Subscribe to our newsletter.

Genes vs. environmental factors in violent behavior

Although it’s clear that the Finnish researchers believe their findings unequivocally, they also appear to understand the unhappy history of attempts to find genetic explanations for crime and violence. They also know perfectly well that, even if their findings turn out to be true, other factors besides low MAOA go into the making of violent criminals.

Past research has found relationships between specific environmental factors and genes linked to aggressive violence, including MAOA. A particularly strong connection has been noted among abuse in childhood, MAOA-L gene variants, and violent behavior in adulthood. A recent review declared that several studies have shown that MAOA-L men previously exposed to early life abuse engaged in significantly higher levels of violent behavior than men with high levels of MAOA. The authors assert that this is one of the best-supported “observations in the entire literature of psychiatric genetics.”

Well-supported it may be, and child abuse is certainly a plausible connection. But the Finns found no such link in their studies. They say, “maltreatment did not modify the risk in any way.” They have, however, identified another factor they think is crucial: intoxication, either with alcohol or amphetamines.

Intoxication, they say, is a feature of most of the violent crimes in Finland. They propose that intoxicants interact with MAOA-L to affect brain neurotransmitters and produce impulsive aggression. Their suggestion: when violent criminals are released from prison, they should be subject to mandatory treatment with drugs like disulfiram or naltrexone that interfere with the effects of intoxicants.

Child abuse and intoxicants by no means exhaust the list of possible influences on genes and behavior. There are doubtless many others. I ran across a paper proposing a complex relationship between the “male” hormone testosterone and antisocial (and prosocial) behavior. High levels of testosterone in fetal life and childhood, the theory goes, combine with negative or positive early life events to produce either “chronic antisocial lifestyles” or men predisposed to “socially adaptive traits such as a strong achievement motivation, leadership, fair bargaining behaviors, and social assertiveness.” That sounds plausible too. Maybe the Finns should be investigating whether their MAOA-L violent criminals have high levels of testosterone too.

Dealing with the revelations of behavioral genetics

The John Horgan piece I referred to above is a rant–a productive and rational rant that will give you a brief history of what’s been misleading and outright wrong about past attempts to link genes with violence and crime. But I’m coming around to the view that ranting is no longer a satisfactory way of dealing with the discombobulating implications of behavioral genetics. We have to start figuring out how to handle them.

It’s not an adequate response to pick nits with particular papers and so by implication condemn all of behavioral genetics as a hopelessly flawed endeavor. MAOA-L is a prime example, maybe the best one–and a good place to start. The studies on low MAOA activity have piled up. Despite their individual flaws, it’s pretty clear that something really does seem to be going on with that gene variant that is (or can be) in some way related to bad behavior.

I have read that MAOA-L is pretty common–one paper says 40 percent of the population possesses it. It gave no reference, and I haven’t been able to nail that number down for sure, but let’s assume it’s true. Let’s assume that many of us are walking around with low MAOA and that we are not aggressive, don’t commit violent crimes, and are really nice people. You may be one of them. I may be one of them.
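Some back-of-the-envelope arithmetic shows why that matters. The short Python sketch below uses the 40 percent prevalence figure mentioned above plus two invented numbers (an overall violent-offender rate and a relative risk for carriers); it is an illustration of base rates, not an estimate from real data.

```python
# Back-of-the-envelope base-rate arithmetic. The prevalence figure comes from
# the unverified "40 percent" claim above; the other numbers are invented.

population = 1_000_000
maoa_l_prevalence = 0.40       # claimed prevalence of MAOA-L
violent_offender_rate = 0.005  # assume 0.5% of people are ever convicted of a violent crime
relative_risk = 2.0            # assume carriers are twice as likely to offend (hypothetical)

carriers = population * maoa_l_prevalence
non_carriers = population - carriers

# Split the fixed number of offenders so carriers offend at twice the non-carrier rate.
total_offenders = population * violent_offender_rate
non_carrier_rate = total_offenders / (non_carriers + relative_risk * carriers)
offending_carriers = relative_risk * non_carrier_rate * carriers

print(f"MAOA-L carriers: {carriers:,.0f}")
print(f"Carriers ever convicted of violence: {offending_carriers:,.0f} "
      f"({100 * offending_carriers / carriers:.1f}% of carriers)")
print(f"Carriers never convicted: {100 * (1 - offending_carriers / carriers):.1f}%")
```

Even with a doubled risk, more than 99 percent of carriers in this toy calculation would never be convicted of a violent crime, which is exactly the point the next paragraph turns on.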

Does the fact that most people with low MAOA are not violent criminals mean there should be no attempts to identify and prevent whatever bad behavior is encouraged by MAOA-L? The researchers argue that their findings should not lead to screening for these gene variants, and I agree. But what about their proposal to prevent violent criminals from using alcohol and other intoxicants when they get out?

Applying it across the board would mean that former violent prisoners without MAOA-L would also be denied intoxication. My feeling about that is, so what? We know that alcohol and some other drugs precipitate irresponsibility and nastiness in lots of people. We already have laws that punish bad behavior associated with those drugs. The laws and social pressure even help prevent chemically induced bad behavior.

What’s wrong with applying that logic to criminals with a history of violent–often murderous–behavior? It takes the focus off genes and shifts it to well-known environmental triggers for bad behavior. These are much easier to control than genes–and would probably have more widespread social benefits.


Tabitha M. Powledge is a long-time science journalist. She also writes On Science Blogs for the PLOS Blogs Network. Follow her on X @tamfecit

This article originally appeared on the GLP July 29, 2016.

‘Race’, anti-racism and biology


Using biology to determine the racial ancestry of human remains is racist. Except when it’s done in the name of anti-racism. 

We urge all forensic anthropologists to abolish the practice of ancestry estimation.

Biological ancestry, moreover, plays no role in susceptibility to disease. Except when it does. 

Despite social scientific perspectives that endorse a nonbiological basis to race, within biomedicine, biological uses of race remain entrenched due to their utility for identifying the causes of the disease.

Artificial intelligence (AI) systems identifying ‘race’ from medical images perpetuate systemic racism. Except when doing so ameliorates racial health disparities. 

AI models can predict the demographics of patients, including race, directly from medical images, even though no distinguishing anatomical or physiological features are evident to human clinicians.

Above all, ‘race’ has no basis in biology. Except when it has.

Such contradictory conclusions can be drawn from an increasing number of studies of ‘race’, health and disease. The mixed messaging here is not merely contradictory and confusing; it potentially harms those from racial populations already bearing the brunt of health inequities, while also hindering efforts to close these often deadly divides. 

Follow the latest news and policy debates on sustainable agriculture, biomedicine, and other ‘disruptive’ innovations. Subscribe to our newsletter.

Scientific racism?

To understand why, let’s begin with a standard example of this kind of research — a recent peer-reviewed academic study on apparent racial differences in plague deaths in 14th-century England. The analysis, based on skeletal remains from three medieval cemeteries in London, found “a significantly higher proportion of people of estimated African affiliation in the plague burials compared to the nonplague burials”. These findings were widely reported in the mainstream media (such as, in the UK, the left-leaning Guardian and the right-leaning Telegraph), with the BBC reporting the study under the headline, “Black women most likely to die in medieval plague”.

Watercolor of London circa 1400 by the artist and illustrator Amédée Forestier (1854-1930). The Black Death came in 1348, killing half the population. Photograph: Museum of London

The study’s authors accept that these apparent racial differences in mortality might be an artifact of the small sample size (the remains of 145 individuals) used as the basis for the research. Nonetheless, their favored conclusion is that “these findings may reflect premodern structural racism’s devastating effects”. Of most relevance here, however, are two claims repeatedly emphasized throughout the researchers’ paper: 

[1] the incorrect and harmful implication that there is a biological basis of race, and … [2] the incorrect inference that there is something inherent to people assigned to a certain racial category that makes them more vulnerable to disease

Racial identity and “decentering”

An obvious initial question, if (as the authors claim) race has no biological basis, is: how was an “estimated African affiliation” of centuries-old skeletal remains ascertained? The study acknowledges that racial affiliations were ultimately based on specific “macromorphoscopic traits” (features of the skull and facial bones) that differ between populations — in this case, “five traits with known heritability: anterior nasal spine, interorbital breadth, nasal aperture width, inferior nasal aperture, and nasal bone contour”. 

Given that such traits are themselves the result of divergent biological evolution between geographically distinct ‘racial’ populations, this immediately raises another obvious question: does this not therefore reflect at least some underlying biological basis to race? 

The researchers sidestep this issue. Instead, they argue that their analysis “includes data and tools that have been rightly critiqued for their role in perpetuating systemic racism, specifically the use of macromorphoscopic traits”. The authors — “keenly aware of the twinned whiteness of both anthropology and medieval studies” — suggest that to address this suspect historical aspect of their study, “whiteness must be decentered”. They “choose to foreground contexts and complexities, as well as white supremacist genealogies in forensic anthropology and medieval studies, as a form of methodological praxis as process”. In addition, to further “detach” the racial affiliation estimates from the racist “underpinnings of forensic anthropology,” the authors utilize cranial data that is “not attached to definitions of race and ethnicity”.

Although it is unclear what much of this rhetoric means, the authors appear to implicitly adhere to the widely-held ‘skin-deep’ concept of race. This view accepts superficial physical biological differences between racial groups but vehemently rejects the possibility of ‘deeper’ variation, such as cognitive or behavioral differences and in some cases even morphological differences. Such categorization, say many sociologists and cultural anthropologists, is what they call “scientific racism”.

While understandable as a reaction to historical and ongoing (albeit less prevalent) racist attitudes, denying any meaningful biological differences between racial populations can itself exacerbate existing racial disparities. This becomes clear when examining the other claim made above: that it is inherently wrong to assume that “people assigned to a certain racial category” may be more vulnerable to disease.

Disease susceptibility

While emphasizing that “race is a social classification and is not based in biological reality,” the study’s authors also state “that variation by race in susceptibility to and hazard of dying from disease reflects the biological and psychosocial effects of racism”. This seems to suggest a belief that increased vulnerability to a specific disease is the result solely of social behavior and prejudice. 

Beyond a doubt, social factors — including racism — play a major role in health disparities between racial groups. But social environment is not solely the cause. 

Equally beyond doubt, biological ancestry plays a crucial role in the prevalence and/or virulence of certain diseases in some populations and not others — the West African origins of sickle cell disease (an unfortunate by-product of increased evolved immunity to malaria) is a classic example, as are various hereditary genetic disorders (such as Tay-Sachs and Gaucher disease) in Ashkenazi Jewish populations. 

Other health disparities may be a combination of environmental factors, such as eating habits, and (biologically-mediated) genetic factors. Black Americans are 20% more likely to get colorectal cancer and die at a much higher rate, in part because, as a group, they consume greater amounts of animal fat than other ‘racial’ groups. Now researchers have identified gene mutations in cancer patients of African ancestry showing they are less likely to respond well to newer treatments.


The point is that simply blaming racism for all disparities in disease susceptibility and mortality may blind us to potentially crucial biological or genetic factors. 

Exacerbating this issue is the insistence that race has no biological basis beyond, at most, superficial physical traits. This directly contradicts overwhelming evidence of meaningful genetic differences between racial groups (as GLP director Jon Entine and I have discussed in relation to racial differences in the effects of COVID-19 and HIV/AIDS). Ignoring this data (as many scientists did even two years into the pandemic) risks derailing efforts to effectively tackle diseases that disproportionately affect marginalized communities.


For example, ‘biobanks’ (repositories of human genetic data) are already hugely skewed towards those of European ancestry. Take the UK Biobank, a biomedical database of “genetic, lifestyle and health information and biological samples from half a million UK participants”. While this is an invaluable resource for “the prevention, diagnosis, and treatment of a wide range of serious and life-threatening illnesses,” it only reflects the genetics of the British population as a whole — that is, overwhelmingly northern European. It is much less useful, however, in predicting health outcomes of those from other racial populations, whose genetic traits may differ across thousands of small genetic variations. 

As psychologist Jonathan Anomaly points out, the lack of similar biobanks in, say, Africa or South Asia (that is, areas with poor health infrastructure and greater disease prevalence) means many of the most marginalized people in the world will be far less able than Europeans to “mitigate genetic risks through lifestyle changes and early medical interventions”. Further, Anomaly argues, potential parents from other racial groups attempting in vitro fertilization will also miss out on the choices increasingly available to prospective European parents. 

Research taboos on ‘race’

In explaining this mismatch among different racial groups in the collection and use of useful genetic data, Anomaly points to “taboos surrounding research into genetic differences in socially significant traits” — that is, as highlighted above, the reluctance of many Western academics to accept that biology plays a part in racial variation and in differences in health outcomes. 

Anomaly goes on to suggest, “Many social justice advocates say they want to help disadvantaged or poorly performing racial groups. But the taboos they’ve helped create in modern genetics research may end up depriving some ethnic groups of the opportunities that others will have.” (For his thoughtful analysis, Anomaly has been described as a “eugenicist” with “far right connections”.)  

The plague study examined above also neatly illustrates another of Anomaly’s claims, that “the scientific establishment in Western liberal democracies has thoroughly absorbed the central dogma holding that race is an illusion, and that racial differences cannot exist”. Evidence of this is, for example, the “ethical guidance” by the editors of Nature Human Behaviour that unequivocally states: “Race and ethnicity are sociopolitical constructs. Humans do not have biological races, at least based on modern biological criteria for the identification of geographical races or subspecies.” 

How, though, does this square with the “biological criteria” (i.e., myriad genetic variations) that differentiate racial populations — biological differences that, moreover, have significant impacts on health outcomes?

Edging towards post-modernist nihilism

Fortunately, at least as yet, the scientific establishment has not embraced another feature of activist ‘scholarship’: a relativist approach to ‘truth’ (the idea that factual truth is relative to individuals’ cultural or social background). Consider the authors of the plague study, who claim their work aims to generate “multiple perspectives on truth … enabling a ‘pluralistic approach’ to a myriad of historical truths and methodological discoveries”. While historical events are often open to multiple interpretations, relativism extends and distorts this notion to imply that factual truth (like beauty) lies solely in the eye of the beholder — that ‘facts’ and ‘truth’ are determined by culture entirely. At an extreme, this fosters the insidious belief that science itself lacks objectivity and that so-called scientific ‘facts’ are nothing more than a reflection of Western colonialist biases.

However, by its own logic such relativism is incoherent and self-defeating; any claims made by relativist researchers are no more valid than any alternative ‘perspectives on truth’ that argue the opposite. 

Interestingly, the relativist authors of the plague study demonstrate that they do accept some absolute truths — for example, in their claim that “the truth is that race (structural racism) was invented, refined, and rehearsed in medieval  England”. The truth, it seems, is relative, except when it suits a specific ideological agenda. Consider the Sokal hoax.

The ‘post-modernist’ takeover of the social sciences began in the 1980s as a precursor to today’s social justice movement. It is marked by a general suspicion of reason and objectivity and an embrace of subjectivism and relativism. It was highlighted by the 1996 “Sokal hoax,” in which a spoof academic article arguing that reality did not exist was published and lauded in a leading cultural studies journal.


Written deliberately as academic gibberish, his article, “Toward a Transformative Hermeneutics of Quantum Gravity,” sailed through editorial review and appeared in one of the notorious post-modernist journals of that era, Social Text. The article’s author, physicist Alan Sokal, later explained that he wanted to draw attention to, and thereby halt, the “wanton abuse of science” by influential sections of the radical left. Unfortunately, in the intervening years, such “fashionable nonsense” has become more, not less, prevalent, and is now creeping deep into the hard sciences.

‘Race’, discrimination and AI

Before examining other questionable aspects of the plague study example, let’s briefly turn to an actual example of how supposedly ‘progressive’ ideological beliefs about race and racism can have a negative impact on real world scientific practice: recent advances and use of artificial intelligence (AI) technologies in medicine (technologies that, in theory at least, might do away with subjective human bias).

A slew of recent studies indicate that artificial intelligence models “are able to predict a patient’s self-reported race from their medical images” — and, moreover, can do so from images “that contain no indications of race detectable by human experts”.  


In itself, this seems to undermine the claim that there is no biological basis to race. It’s what we make of this information that determines whether it is racially harmful or not. 

These reports on AI’s uncanny capacity to predict race are also replete with warnings of how this “can perpetuate racial bias in health care” and how it “mirrors unconscious thoughts, racism, and biases … [that] can lead to serious harm”. Likewise, it “raise[s] concerns about the possibility of AI systems to discriminate” and “introduces the potential for AI models to be biased and create racial disparities”. At an extreme, it could even “have catastrophic consequences by propagating deeply rooted societal biases”.

Concerns about how AI might exacerbate racial biases in medicine are not unfounded. Like any tool, artificial intelligence systems are susceptible to misuse and bias — as evidenced, for example, by algorithms that “used health costs as a proxy for health needs” only to falsely conclude “that Black patients are healthier than equally sick white patients, as less money was spent on them”. 
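The health-costs example is worth making concrete. The toy simulation below (in Python) is not the algorithm from that study; it simply illustrates the mechanism with invented numbers: if less has historically been spent on one group at the same level of illness, ranking patients by predicted cost rather than by illness will under-select that group for extra care.

```python
# Toy illustration of proxy-label bias. Invented numbers; this is not the
# algorithm from the study cited above. If less is historically spent on
# group B at the same illness level, ranking by cost under-selects group B.

import random

random.seed(0)

patients = []
for i in range(10_000):
    group = "A" if i % 2 == 0 else "B"
    illness = random.gauss(50, 15)                  # true health need (arbitrary units)
    spending = 1.0 if group == "A" else 0.7         # historically less spent on group B
    cost = illness * spending + random.gauss(0, 5)  # observed cost, a biased proxy for need
    patients.append((group, illness, cost))

def group_b_share(rank_by: int, k: int = 1000) -> float:
    """Share of group B among the k patients ranked highest by the chosen column."""
    ranked = sorted(patients, key=lambda p: p[rank_by], reverse=True)[:k]
    return sum(1 for p in ranked if p[0] == "B") / k

print(f"Group B share of top 1,000 by illness (true need): {group_b_share(1):.1%}")
print(f"Group B share of top 1,000 by cost (the proxy):    {group_b_share(2):.1%}")
```

In the toy example, switching the label from cost to a direct measure of illness removes the disparity, which is the general point: the harm lies in the label choice, not in the prediction machinery itself.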

But automatically equating this AI’s use or capabilities with societal prejudices (including racism) is misguided. The fact that AI can distinguish race from medical images is inherently neutral; the potential for harm lies not in any predictions themselves, but in how this information is interpreted. Bias and discrimination are not inevitable. 

As the paper on the erroneous health-costs algorithm also points out, one way to overcome AI’s potential for unintentional bias is through ‘open science’ — for instance, by ensuring relevant AI research and methodology is open to rigorous evaluation by all who use or are impacted by it. Another of the reports above similarly indicates how AI can be a “force for good” by citing how “algorithms that learn from patients’ pain experiences can find new sources of knee pain in X-rays that disproportionately affect Black patients — and are disproportionately missed by radiologists”. 

Again, such knee-jerk rejection of obvious biological aspects of race has the potential to further exacerbate existing unfair disparities. Indeed, fueling suspicions about inherent racism and prejudice within medical science creates a vicious circle by deterring non-white groups from participating in research. Lack of diverse data and over-representation of those with European ancestry — as with the genetic biobanks above — would then, in turn, impede greater appreciation of diseases with higher prevalence in specific non-white ancestry groups, ultimately harming patients who could benefit from targeted treatments. 

Given that much of this potentially harmful obsession with racism comes from outside the scientific community, medical researchers should perhaps take the following remedy for potential AI prejudice with a large grain of salt: “We need to bring social scientists into the picture.” 

Progressive prejudice?

Human behaviorist Ian Leslie is scathing about the social scientists behind the plague victim study and the distorting influence that these kinds of ideas have on the academic discipline of history: 

Whole fields of historical study seem to have turned into competitions for who can generate the most eye-catching narrative of identity-based injustice, and if that means making blatantly implausible empirical claims, so be it.

But here this distortion (in addition to the debilitating relativism critiqued above) extends further, into the wider scientific enterprise. As an example: the plague study authors emphasize that “Black Methodologies Matter”. While this may sound suitably progressive, it is a patronizingly racist claim that assumes all black researchers would follow and approve of the same methodologies. While scholars from traditionally underrepresented backgrounds can and do provide valuable insights into previously overlooked biases, framing research along racial or ethnic lines implies there are inherent, monolithic “white” or “black” (or “Asian” or “Native American” or a myriad other) methodologies.  

It also segregates science, suggesting that black people are innately suited to studying black subjects, Asians Asian ones, whites white subjects and so on — the kind of thinking that would not have been out of place in apartheid South Africa. 

The plague study authors take this one step further by stating their “hope to prioritize the methodologies of Black feminist archaeology”. Ignoring the implication that ‘feminism’ too is monolithic (it isn’t), where does this process stop? What about disabled, say, or transgender feminists from different ethnic groups — do they also have methodologies that must also be “prioritized” in scientific studies? 

Ironically, highlighting ever-finer ‘intersections’ of identity simply reflects a standard right-wing trope on the absurdities of identity politics, that of a “disabled, black, lesbian woman” at “the top of a hierarchy of oppression”. The reality is that scientists and researchers are individuals, and the complex interplay of influences reflected in their work is not reducible to simplistic ‘identity’ labels.

Black feminist researchers can certainly bring valid perspectives to science, but the idea of "Black feminist methodologies" is ideological, not scientific. Science is universal and cross-cultural, and it does not care about scientists' identity. Rational, evidence-based scientific inquiry does not discriminate on the basis of who you are. All that matters is whether researchers' methods and conclusions withstand scrutiny. While racist (or sexist, or otherwise prejudiced) beliefs have certainly influenced science — and scientific institutions have indeed followed discriminatory practices in the past — the hallmark of the scientific enterprise is that it is self-correcting.

Eventually, erroneous assumptions are found and weeded out through other scientists’ ongoing rigorous critiques. Thus, the claim that science is inherently ‘white’ or male is ludicrous; indeed, those who trumpet such beliefs perpetuate the prejudiced nonsense that science may be beyond the capabilities of those who are not white or Western or male — or, even more blatantly racist and sexist, that only white males can successfully practice science.

Potential dangers of anti-racist ideology

The study of plague victims exemplifies how anti-racist activism can harm those it purports to support. It also shows how researchers, research efforts and funding can be pushed toward currently fashionable causes rather than towards useful findings. 

For instance, fourteenth-century England was a deeply stratified society and it has long been apparent that differences in socio-economic status lead to differences in health outcomes. Health disparities have persisted into the modern era, even as overall health and life spans increase — the rich (of whatever ‘race’) are still healthier and longer-lived than their poorer peers. 

Studies of the effects of plague in historical and deeply unequal societies could, therefore, add to our understanding of both the causes and the impact of disease among those at different social levels. The more we find out about how and why this occurs, the better able we will be to help those most affected by disease and ill health.

Compare this to the research critiqued above, a study that presupposes (on ideological grounds) that all health disparities are due solely to racism. This automatically precludes other, potentially more significant factors. 

A recent example of the limitations of focusing solely on social determinants of health – discussed in greater detail in the Entine/Whittle article cited above – is the contrasting impact of COVID-19 on black people in the US (with disproportionately high mortality) and in Africa (where the impact seemed puzzlingly limited). Ignoring possible genetic factors limits our ability to fully understand the complex interactions between the environment (including racism and marginalization) and biology. This hinders our search for effective solutions. 

Similarly, if we refuse to acknowledge any biological differences between racial populations, we will find it difficult to explain, let alone effectively address, the disproportionately high impact of HIV/Aids on black Africans compared to other racial groups. In short, racial identity could be a factor in why some people and not others suffer from disease. 

Obsessing over racial categories while trumpeting questionable ideological beliefs does a disservice to science and, more importantly, to those from racial minority groups who currently gain least from advances in scientific and medical practice.

Patrick Whittle has a PhD in philosophy and is a freelance writer with a particular interest in the social and political implications of modern biological science. Follow him at patrickmichaelwhittle.com. Find Patrick on X @WhittlePM

Video viewpoint: Humans as frogs? Robert F. Kennedy, Jr. claims that an unsubstantiated but perceived rise in male sexual dysphoria is driven by atrazine in our water supply

Conspiracy nut Robert Kennedy, Jr. is desperately trying to convince the American voting populace that he is indeed a serious candidate, all while doing his very best to play third-party spoiler and gift-wrap the election for the disgraced former president Trump. Someone should tell Aaron Rodgers' hand-chosen candidate that relying on the most bizarre Russian-invented conspiracy theories won't get the job done.

According to Kennedy, there is something in the water. In fact, the son of Bobby Kennedy believes that the water is making Americans gay and trans, entirely feminizing a population of men in the process. Yes, the apple has fallen far from the tree. Although Kennedy is doing his very best to distance himself from the most bizarre conspiracy theories he has shared over the years, he has been conveying the following for a while:

I think a lot of the problems we see in kids, and particularly boys – it’s probably underappreciated how much of that is coming from chemical exposures, including a lot of sexual dysphoria that we’re seeing. They’re swimming through a soup of toxic chemicals today and many of those are endocrine disruptors. There’s atrazine throughout our water supply. Atrazine, by the way – if you in a lab put atrazine in a tank full of frogs, it will chemically castrate and forcibly feminize every frog in there. And 10% – the male frogs will turn into fully viable females able to produce viable eggs. If it’s doing that to frogs, there’s a lot of other evidence that it’s doing it to human beings as well.

Yes, a major presidential candidate just compared human beings to frogs. As you may have guessed, Kennedy does not have a medical background either, so he is hardly an authority on anything of this nature.

Shockingly, actual experts disagree with Kennedy's studious assessment. Dr. Andrea Gore, professor of pharmacology and toxicology at the University of Texas, contends that sex in human beings is determined at conception and, unlike in frogs and other amphibians, cannot be altered by the chemicals Kennedy references in his eloquent and thoughtful diatribe. According to Gore:

I don’t think people should be making statements about the relationship between environmental chemicals and changes in sexuality when there is zero evidence.

Dr. Linda Kahn, professor of pediatrics and population health at New York University, got right to the point:

Comparing humans to frogs is an apples to oranges kind of thing. Humans metabolize atrazine and excrete it from the body within 12 hours.

[Genetic Literacy Project editor's note: RFK, Jr. is wrong in endorsing claims by some environmental activist groups that atrazine or other chemicals found at micro-trace levels in drinking water — far below what the EPA or any serious oversight regulatory agency believes could cause any harm — "feminize" boys/men. The Genetic Literacy Project and other media have reported extensively on the claims of Tyrone Hayes, the University of California-Berkeley biologist gone rogue, and found them unconvincing (here, here). For more than 20 years, Hayes has refused to release his "data" for review by the EPA or by independent scientists whose own studies do not document the feminization Hayes claims to have found. The EPA has reviewed, re-reviewed and re-re-reviewed his claims under multiple administrations and has found no evidence that atrazine is dangerous at the low levels at which we encounter it in the environment, and it certainly does not feminize males. In 2007, the EPA wrote a scathing 321-page review of Hayes' claims, and found no data — Hayes' or anyone else's — in support of the thesis that RFK, Jr. swallowed and propagates. Since that EPA report, and refusing to accept the global science consensus, Hayes, in concert with the kooky-left wing of the environmental movement — Environmental Working Group, Center for Food Safety, Natural Resources Defense Council and other anti-chemical activists, including a tiny but vocal minority of scientists and ambulance-chasing lawyers (e.g., RFK, Jr.) — has kept this bogus claim alive, partly because mysterious illnesses can be claimed and blamed on 'chemicals' even without hard data. — Jon Entine, GLP]

The man whose campaign began in Steve Bannon's basement is wrong, as in factually incorrect, but, as with other paid Russian propagandists, being wrong won't stop Kennedy from disseminating this dangerous propaganda to an audience of millions, millions eager for any disinformation that will allow them to further hurt the LGBTQ community.

Kennedy's famous family, meanwhile, fully endorses President Joe Biden for 2024, and that says a whole lot.

Junior’s family with :coughs: President Joe Biden

A version of this article was originally posted at It’s The Russians Stupid and is reposted here with permission. Any reposting should credit both the GLP and original article.

Viewpoint: US regulators are not keeping up with lightning-fast advances in biotechnology. How can that gap be closed?


Improved crop genetics can help protect crops from pests and disease, reduce food waste, increase yields, limit deforestation, and decrease agricultural greenhouse gas emissions. Improvements in crop genetics have contributed roughly half of historical yield gains, and biotechnology is an increasingly important genetic tool used by developers to continue this success.

Biotechnology is accelerating improvement of specialty crops — most fruits, vegetables, and tree nuts — which have historically received less attention compared to commodities like corn and soy. Recent regulatory changes implemented by the USDA's Biotechnology Regulatory Services (BRS) have reduced the regulatory burdens and duplicative regulation faced by agricultural biotechnology companies. By streamlining gene editing regulation for crops, the SECURE Rule allows large companies, as well as small developers like university labs and start-ups, to commercialize innovative products much faster.

But these recent regulatory changes are just a first step. BRS must do more to support the use of crop biotechnology to improve US agriculture by further streamlining application review, clearing a backlog of applications, and keeping up with the increasing pace of submissions. Only with further improvements can BRS enable innovative developers to unlock the kinds of genetic breakthroughs capable of delivering productivity and environmental improvements in a wider array of crop varieties.


Why do biotech crops matter and who regulates them?

Of all tools for crop genetic improvement, only biotechnology is subject to premarket regulation in the US. Three agencies have jurisdiction over different aspects of pre-market biotechnology regulation: USDA for plant health, FDA for food safety, and EPA for pesticides. Biotech crops are also subject to post-market regulations that protect farm workers, livestock, consumers, and the environment under the same three agencies.

The USDA Animal and Plant Health Inspection Service (APHIS) oversees regulations for some organisms developed using genetic engineering to ensure they do not pose a risk to plant health, which BRS then implements. USDA oversight is important to protect the health of both agricultural crops as well as wild plants in uncultivated areas. In addition, global trust in APHIS biotechnology regulation eases international trade of products of biotechnology produced in the US. For consumers, credible regulation by APHIS provides reassurance of the safety of biotech products.

APHIS published a final rule in May 2020 to update its biotechnology regulations under 7 CFR Part 340 for the Movement of Certain Genetically Engineered Organisms. The SECURE rule, as it’s known, streamlined USDA biotech regulation by reducing application requirements and exempting some low-risk products of gene editing from oversight. The rule also set new target review timelines in which the agency should issue decisions for biotech product submissions. These range from 45 days for a permit for interstate movement or importation to 15 months for the second step of regulatory status reviews.

Developers of biotech crops must include various types of information and data for different regulatory submissions to APHIS. These submission types include regulatory status reviews (RSR), confirmation requests, and permit applications. Submissions to request a permit for interstate movement or importation are the simplest, requiring developers to provide information on the amount of material, genus and species, precautions taken to prevent release, and description of the intended genotype and phenotype.

Developer requests for RSRs are more intensive, requiring detailed information on the non-biotech comparator plant, the genetic change in the biotech plant, and the new traits of the biotech plant for an initial review, and whatever additional information APHIS requests for a subsequent full risk analysis.

The more data requirements, the more expensive for developers to prepare a submission. Submissions are required for every new biotech crop variety that does not fall within existing exemption categories. Recently, APHIS proposed five additional exemption categories for genetically modified plants that could be developed through conventional breeding.

Emerging trends under new and improved BRS regulations

All aspects of the new rule went into effect between August 2020 and October 2021. In the years since, BRS' implementation of the updated regulations has increased both the number of applications submitted by small and medium-sized biotech developers and the diversity of plants and traits under development.

At the end of 2023, BRS presented new data showing that the percentage of small and medium-sized biotech product developers — compared to major biotech companies like Bayer — has increased under the new rule. These developers include Ohalo Genetics, Moolec Science, and the University of California, Davis.

BRS also presented data showing that, under the new rule, the agency received submissions for a wider diversity of plants and traits. Recent completed regulatory status reviews include potatoes with disease resistance and altered nutrition, teff that can withstand more wind without falling over, and walnuts with disease resistance. Exemption confirmations include blackberry, pennycress, citrus and sorghum, in addition to cotton, soybean, and corn.

Argentina, well before the US, was the first country in the world to exempt many gene-edited crops from existing GMO regulations, and a study of the first four years after implementation showed outcomes similar to those BRS presented — with increases in submissions made by smaller developers and for a greater number of products.

Remaining barriers

Fast and reliable RSR and permit application reviews are crucial for biotechnology innovation. Since implementing the new rule, BRS has increased the efficiency and pace of its reviews for permit applications and exemption confirmations. With RSRs, however, the agency continues to struggle with delays.

During FY2023, BRS regularly did not meet the review timeframes outlined in the new rule for RSR applications. The agency completed less than 20% of its RSRs within the set timeframes — 180 days and 15 months, respectively, for the first and second steps of RSR — leaving more than 80% of applications delayed in FY2023. The agency completed 21 initial reviews (compared to 3 in FY2022) and indicated there is currently a backlog of "less than 50" RSRs.

BRS has had more success meeting timeframes set in the new rule for processing confirmations — 120 days — and permit applications — 45 and 120 days for interstate movement or importation, and release into the environment, respectively. In FY2023, all confirmation requests were processed on time, and 10% of permit reviews were delayed. BRS should endeavor to maintain these low rates of delay across all of its regulatory reviews, including for RSRs.
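For readers who want to see how such on-time rates are tallied, here is a minimal sketch using entirely hypothetical submission records (not APHIS data; the 15-month window is approximated in days purely for illustration), checked against the target timeframes quoted above:

```python
# A minimal sketch (hypothetical records, not APHIS data) of how on-time rates can be
# computed against the SECURE rule's target timeframes described in the article.
from datetime import date

# Target review windows, in days; 15 months is approximated as 456 days for illustration.
TARGET_DAYS = {
    "permit_interstate": 45,
    "permit_release": 120,
    "confirmation": 120,
    "rsr_initial": 180,
    "rsr_full": 456,
}

# Hypothetical (made-up) submissions: (type, date received, date decided).
submissions = [
    ("confirmation", date(2023, 1, 10), date(2023, 4, 1)),
    ("rsr_initial", date(2022, 11, 1), date(2023, 7, 15)),
    ("permit_interstate", date(2023, 3, 5), date(2023, 4, 10)),
]

def on_time_rate(records, kind):
    """Fraction of decisions of a given kind issued within the target window."""
    relevant = [(r, d) for k, r, d in records if k == kind]
    if not relevant:
        return None
    met = sum((d - r).days <= TARGET_DAYS[kind] for r, d in relevant)
    return met / len(relevant)

for kind in TARGET_DAYS:
    rate = on_time_rate(submissions, kind)
    if rate is not None:
        print(f"{kind}: {rate:.0%} on time")
```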

Longer reviews make regulatory compliance expensive for developers, biasing participation toward large developers and commodity crops. The uncertainty associated with unreliable timelines disproportionately hurts small developers with few or no other products to rely on while awaiting a decision. When regulatory burdens are too much for small developers to shoulder, it limits the diversity of products reaching the market, stifling innovation and ultimately depriving farmers of important tools to fight pests and disease and increase productivity.

Next steps for BRS

Reducing the burden of premarket regulation for biotech crops supports a wider diversity of developers working with more varied traits and crop species. Further reduction in premarket regulation for biotech crop traits that present low plant pest risk should continue this diversification.

In addition to further relieving regulatory burdens, BRS must address the significant percentage of RSR applications experiencing delays, curtail backlogs, and keep pace with the increasing rate at which the agency is receiving applications. Congress will need to ensure consistent and reliable funding for APHIS and BRS to enable these improvements over the next several years. This will ensure the agency is equipped with the resources and specialized staff needed to complete reviews as expeditiously as possible and further minimize delays.

Emma Kovak is a senior Food and Agriculture Analyst at Breakthrough. Find Emma on X @EmmaKovak

Emily Bass is Federal Policy Manager for the Food and Agriculture program at Breakthrough. Find Emily on X @emilyjane_bass

A version of this article was originally posted at the Breakthrough Institute and is reposted here with permission. Any reposting should credit both the GLP and original article. The Breakthrough Institute can be found on X @TheBTI

Viewpoint: Here’s how genetically engineered fruits and vegetables will soon emerge as a grocery store ‘selling point rather than a scare tactic’

Fruits and veggies are nature’s gift to humanity. Chock full of vitamins, delicious and colorful, they deserve a starring role in our diets. But some things tend to get in our way, like seasonality, cost, availability, and inconsistent or off-putting flavor. When we’re also surrounded by cheap, delicious, and ubiquitous processed foods, it’s all too easy to reach for the chips instead of the cherries.

But now, thanks to new genomic techniques, we're starting to see a wave of bioengineered produce that enhances the nutritional value or accessibility of the original varieties. To name a few examples: there's the Norfolk purple tomato in the U.S., which incorporates two genes from snapdragons to increase the tomato's production of anthocyanin, a rich source of antioxidants. There's the high-GABA tomato in Japan, which uses CRISPR to quadruple the level of that amino acid, which can help lower blood pressure. There's the Arctic Apple, which uses RNAi to silence the apple's own genes that cause browning when the fruit is bruised or sliced; the sliced apples have an extended shelf life of 28 days, reducing food waste. And there's the CRISPR'd salad mix that removes the wasabi-like flavor from mustard greens, which have double the nutritional value of romaine lettuce.

“If you look five years into the future as the gene editing market expands, there should be hundreds and hundreds of products by that point,” says Jon Entine, executive director of the non-profit Genetic Literacy Project, which focuses on biotech in medicine and agriculture. “You might even see sections of grocery stores that highlight this in a positive way.”

Genetically engineered foods as a selling point rather than a scare tactic would be a welcome and remarkable shift for a culture that has erroneously demonized the technology for years, going back to Golden Rice.


A decades-long odyssey

One of the original products that set out to improve people’s health through bioengineering was Golden Rice. In the late 1990s, several European scientists discovered how to genetically modify rice to produce beta carotene, which the human body converts to Vitamin A, an essential nutrient that is missing in the diets of many people in lower-income countries.

Golden Rice "has potential, if adopted widely, to reverse vitamin A deficiency, which affects 125 million children worldwide," says Adrian Dubock, executive secretary and member of the Golden Rice Humanitarian Board. According to the WHO, an estimated 250,000 to 500,000 vitamin A-deficient children become blind each year, and half die within a year of going blind. "If you can get something into staple food especially at no cost and with no detriment, this can really make a big difference."

Yet the rollout of Golden Rice was notoriously hampered by anti-GMO activists like Greenpeace, who protested the use of the technology on false grounds, claiming that only natural foods are safe. They even broke through fences to destroy test crops. In fact, the World Health Organization, National Academy of Sciences, and other major science organizations, including the FDA, have found no evidence of harm posed by genetically engineered foods on the market, and have deemed them as safe as conventional foods.

Credit: Greenpeace

Finally, in 2021, more than two decades after its development began, the Philippines became the first country in the world to approve Golden Rice for commercial planting. Farmers there harvested 100 tons of the first Golden Rice planted last year, and more is planned for this year. In addition, they are field testing biofortified high-iron and high-zinc rice in combination with Golden Rice, to be produced at the same price as white rice. The reception in the country has been "extremely good" so far, according to Dubock. "But it's only one country so that's a disappointment to the inventors and myself, having worked assiduously for so long. There's no doubt that the reason for it is absolutely the GMO concerns and suspicions raised by Greenpeace."

Perhaps now the tide is finally turning.

Changing market dynamics

Hostile attitudes in the 1990s in the U.S. and Europe toward GMOs, which are engineered by adding helpful genes to a seed, set the stage for a long fight for consumer acceptance.

The first wave of products mainly benefited farmers, such as Bt corn, a GMO corn that is the nutritional equivalent of regular corn and prevents insect and mold damage. Most of the corn and soy grown in the U.S. is GMO. It takes around 7-10 years and roughly $120 million to get a new trait approved, so large companies like Monsanto (which was acquired by Bayer in 2018) focused on high-volume traits in products like corn, soy and cotton.

“It’s been said that the reason the public has not embraced GMO crops is because most of the original traits were developed to benefit farmers,” Entine says. “But I think that’s a false argument as to why the public didn’t accept it. It was campaigns by so-called environmental groups that tried to make the case that genetically modified crops were somehow aberrational, that people could react in an adverse way. That was never true. They were attacking the fact that large corporations were developing these products, but they created the mess they complained about, and lobbied for very high restrictions so only big companies could afford to develop them.”

Now, the new products can be developed relatively inexpensively as newer methods like gene editing have come into play. For example, Entine points to a non-browning mushroom developed by Penn State for $45,000.

This opens the door for competition by smaller entities like startups and in narrower markets, like nutritional enhancement. Gene editing also speeds up timelines by tweaking precise genes to achieve a desired outcome.

Pairwise, a company that has received an investment from Leaps, developed the mild-tasting leafy greens and is now developing pitless cherries and seedless berries. Making dramatic improvements to tree crops like cherry can be difficult because of the long time it takes to conventionally breed woody tree species. Pairwise estimates that if you tried to make a pitless cherry with conventional breeding it would probably take a century, but with gene editing it could be possible in less than ten years. That’s why companies like Pairwise and Okanagan Specialty Fruits are applying the tools of genome editing to improve fruit tree crops with agronomic and consumer traits alike.

People becoming more open-minded

The reception has been positive for these early products. The reaction to the Pairwise salad mix was "way off the charts," in a good way, Entine says. And the Arctic Apple has likewise been well received by consumers in the half decade since it came out commercially in 2019. There are now three Arctic apple varieties with the non-browning trait – Arctic Granny, Arctic Golden, and Arctic Fuji. Non-browning Arctic Galas are coming in 2026.

Credit: Arctic Apple

“We know from a number of independent studies that today’s consumers, especially GenZ and millennial consumers, are more open to food and technology. Especially if it aligns with their values around personal nutrition or sustainability,” says Sarah Evanega, Vice President for External Relations at Okanagan Specialty Fruits. “The consumer-focused traits coming forward today align with these values.”

Media trends are also consistent with these consumer studies. One study of media attitudes published in 2022 concluded, “Our results suggest that both social and traditional media may be moving toward a more favorable and less polarized conversation on ag biotech overall.”

Not only consumers, but also regulators and governments are starting to recognize that these technologies are essential for growing the food of the future. In Europe, the long-held strict stance against new genomic techniques is loosening. In the U.S., gene-edited crops are held to the same standards as conventional foods since they do not incorporate any “foreign” DNA, as some traditional GMOs do, so no additional scrutiny is required. In other words, the regulatory bar is lower because edited crops could have occurred via conventional breeding. In Africa, Nigeria and Kenya are taking the lead on developing new products, and at least seven African countries have liberalized their regulations toward GMOs in recent years, with gene editing right behind.

“From a regulatory perspective things are looking more positive,” Evanega says. “The innovation landscape is much more diversified now than it used to be, in terms of products and product developers. And now the tools of gene editing should, in a good policy and regulatory environment, allow us to innovate much faster, especially in highly nutritious specialty crops.”

In the near-future, I can imagine going to my local grocery store and finding varieties of healthy and delicious fruits and veggies that I never could have dreamed of growing up. I hope the time is finally “ripe” for us to embrace them.

Juergen Eckhardt is a medical doctor and venture investor in healthcare, biotech, and agriculture with more than 20 years of experience. In 2016, he joined Bayer to help start Leaps by Bayer, the impact investment unit focused on investments in breakthrough technologies in health and agriculture. In September 2023, he became Head of Pharma Business Development, Licensing & Open Innovation and a Member of the Executive Committee of Bayer Pharmaceuticals. 

A version of this article was originally posted at Forbes and is reposted here with permission of the author. Any reposting should credit the GLP and original article. Find Forbes on X @Forbes

GLP podcast: GE crops have lived up to the hype; Growing ‘mini’ organs from stem cells; How do we solve right-wing vaccine hesitancy?

Genetically engineered crops are nearly three decades old at this point. What impacts have they had on agriculture over those nearly 30 years? Hint: they're mostly positive. Scientists may be able to derive stem cells from amniotic fluid during a pregnancy and use them to treat birth defects before a child is born. Right-wing parental rights activists are leading a campaign to restrict school vaccine requirements. How do we convince these hyper-skeptical moms and dads that their kids should be vaccinated against preventable (and often deadly) diseases?


Join hosts Dr. Liza Dunn and GLP contributor Cameron English on episode 259 of Science Facts and Fallacies as they break down these latest news stories:

The first generation of genetically engineered crops was commercialized roughly 30 years ago. Since then, critics of the technology have predicted it would lead to serious public health and environmental harms, but none of those harms ever materialized. Nobody suffered so much as a stomach ache from consuming food derived from GE crops; meanwhile, studies have documented significant yield increases and notable decreases in prices at the grocery store. Let's take a look back at the "frankenfood" controversy and examine why it came to an unceremonious end.

Doctors routinely collect amniotic stem cells during tests administered throughout pregnancy. Researchers have recently discovered that these cells can be used to monitor and maybe prevent potential health conditions that could materialize later in the pregnancy. They could also be grown into mini organs that regulators could use to improve the safety testing of drugs and other chemicals before they are commercialized. Importantly, these procedures are unlikely to provoke opposition from conservatives who oppose the use of embryonic stem cells, which has often been a major hurdle to advances in scientific research.


On the heels of the COVID-19 pandemic, several states have proposed or enacted laws that require birth parents to consent before their children receive any vaccination. Such rules may seem uncontroversial, but they have generated unintended consequences. For instance, an adopted child may not receive routine immunizations because it’s not possible to get consent from his or her biological parents. Some in the science community argue that these vaccine restrictions are pushed primarily by right-wing parental rights groups who have been misled by social media misinformation. It’s therefore critical that scientists make a concerted effort to combat false claims about immunization. But what’s the best way to do that?

Dr. Liza Dunn is a medical toxicologist and the medical affairs lead at Bayer Crop Science. Follow her on X @DrLizaMD

Cameron J. English is the director of bio-sciences at the American Council on Science and Health. Visit his website and follow him on X @camjenglish

Curious what chickens cluck about? AI is decoding the language of poultry

Have you ever wondered what chickens are talking about? Chickens are quite the communicators — their clucks, squawks and purrs are not just random sounds but a complex language system. These sounds are their way of interacting with the world and expressing joy, fear and social cues to one another.

As with humans, the "language" of chickens varies with age, environment and, surprisingly, domestication, giving us insights into their social structures and behaviours. Understanding these vocalizations can transform our approach to poultry farming, enhancing chicken welfare and quality of life.

Our research at Dalhousie University applies artificial intelligence (AI) to decode the language of chickens. It’s a project that’s set to revolutionize our understanding of these feathered creatures and their communication methods, offering a window into their world that was previously closed to us.


Chicken translator

The use of AI and machine learning in this endeavor is like having a universal translator for chicken speech. AI can analyze vast amounts of audio data. As our research, yet to be peer-reviewed, is documenting, our algorithms are learning to recognize patterns and nuances in chicken vocalizations. This isn’t a simple task — chickens have a range of sounds that vary in pitch, tone, and context.

But by using advanced data analysis techniques, we’re beginning to crack their code. This breakthrough in animal communication is not just a scientific achievement; it’s a step towards more humane and empathetic treatment of farm animals.

One of the most exciting aspects of this research is understanding the emotional content behind these sounds. Using Natural Language Processing (NLP), a technology often used to decipher human languages, we’re learning to interpret the emotional states of chickens. Are they stressed? Are they content? By understanding their emotional state, we can make more informed decisions about their care and environment.
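As a rough illustration of the general approach (this is not our actual pipeline, and the file names, labels and model choice below are hypothetical), labelled audio clips can be summarized as spectral features and fed to an off-the-shelf classifier:

```python
# A minimal sketch of the general approach (not the Dalhousie team's actual code or
# data): extract spectral features from labelled chicken calls and train a classifier
# to predict the putative state behind each call. File paths and labels are hypothetical.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def call_features(path, n_mfcc=13):
    """Summarize one audio clip as mean MFCCs (a common compact spectral feature)."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Hypothetical labelled clips: each call annotated by its observed context.
clips = [("clucks_001.wav", "content"), ("alarm_014.wav", "stressed"),
         ("food_call_007.wav", "content"), ("distress_003.wav", "stressed")]

X = np.array([call_features(path) for path, _ in clips])
y = np.array([label for _, label in clips])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

The real work lies in collecting enough well-annotated recordings and validating that the labels reflect the birds' actual states, not in the classifier itself.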

Non-verbal chicken communication

In addition to vocalizations, our research delves into non-verbal cues for gauging emotions in chickens, including eye blinks and facial temperatures. Whether these might be reliable indicators of chickens' emotional states is examined in a preprint (not yet peer-reviewed) paper.

By using non-invasive methods like video and thermal imaging, we’ve observed changes in temperature around the eye and head regions, as well as variations in blinking behaviour, which appear to be responses to stress. These preliminary findings are opening new avenues in understanding how chickens express their feelings, both behaviourally and physiologically, providing us with additional tools to assess their well-being.
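A simplified sketch of this kind of comparison (the numbers below are invented for illustration and are not our study's data; a real analysis would work from detected eye landmarks and calibrated thermal images) looks like this:

```python
# A simplified sketch (made-up numbers, not the study's pipeline): summarize blink
# counts and eye-region temperatures for a baseline period and a period after a
# stressor, then compare the two.
import numpy as np

rng = np.random.default_rng(1)

# Frame-level eye-openness scores (0 = closed, 1 = fully open) from video at 30 fps.
baseline_open = rng.uniform(0.6, 1.0, size=30 * 60)      # one minute of baseline
stressor_open = rng.uniform(0.6, 1.0, size=30 * 60)
stressor_open[rng.choice(stressor_open.size, 40, replace=False)] = 0.1  # extra blinks

def blink_count(openness, threshold=0.3):
    """Count transitions from open to below the blink threshold."""
    closed = openness < threshold
    return int(np.sum(closed[1:] & ~closed[:-1]))

# Eye-region temperatures (°C) from thermal imaging, sampled once per second.
baseline_temp = rng.normal(35.0, 0.2, size=60)
stressor_temp = rng.normal(34.6, 0.2, size=60)  # illustrative shift; real direction/size are empirical questions

print("blinks/min baseline:", blink_count(baseline_open))
print("blinks/min stressor:", blink_count(stressor_open))
print(f"mean eye temp change: {stressor_temp.mean() - baseline_temp.mean():+.2f} °C")
```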

Happier fowl

This project isn’t just about academic curiosity; it has real-world implications. In the agricultural sector, understanding chicken vocalizations can lead to improved farming practices. Farmers can use this knowledge to create better living conditions, leading to healthier and happier chickens. This, in turn, can impact the quality of produce, animal health and overall farm efficiency.

The insights gained from this research can also be applied to other areas of animal husbandry, potentially leading to breakthroughs in the way we interact with and care for a variety of farm animals.

But our research goes beyond just farming practices. It has the potential to influence policies on animal welfare and ethical treatment. As we grow to understand these animals better, we’re compelled to advocate for their well-being. This research is reshaping how we view our relationship with animals, emphasizing empathy and understanding.

file c v e
Understanding animal communication and behaviour can impact animal welfare policies. (Unsplash/Zoe Schaeffer)

Ethical AI

The ethical use of AI in this context sets a precedent for future technological applications in animal science. We’re demonstrating that technology can and should be used for the betterment of all living beings. It’s a responsibility that we take seriously, ensuring that our advancements in AI are aligned with ethical principles and the welfare of the subjects of our study.

The implications of our research extend to education and conservation efforts as well. By understanding the communication methods of chickens, we gain insights into avian communication in general, providing a unique perspective on the complexity of animal communication systems. This knowledge can be vital for conservationists working to protect bird species and their habitats.

As we continue to make strides in this field, we are opening doors to a new era in animal-human interaction. Our journey into decoding chicken language is more than just an academic pursuit: it’s a step towards a more empathetic and responsible world.

By leveraging AI, we’re not only unlocking the secrets of avian communication but also setting new standards for animal welfare and ethical technological use. It’s an exciting time, as we stand on the cusp of a new understanding between humans and the animal world, all starting with the chicken.

Suresh Neethirajan is a University Research Chair in Digital Livestock Farming at Dalhousie University. Find Suresh on X @sureshneethiraj

A version of this article was originally posted at the Conversation and is reposted here with permission. Any reposting should credit both the GLP and original article. Find The Conversation on X @ConversationUS

CRISPR gene editing applications are expanding dramatically in agriculture. Here are the latest advances

Jennifer Doudna
Over the past decade, the release of CRISPR-Cas9 as a genome-editing tool has revolutionized biological research, changing how that research is done. A review by Nobel Laureate and CRISPR pioneer Jennifer A. Doudna of the Innovative Genomics Institute, University of California, Berkeley, and Joy Y. Wang, also from UC Berkeley, explores the origins, applications, and limitations of this technology. They discuss advancements, future directions, and real-world examples of CRISPR's impact on medicine and agriculture, highlighting its potential to shape various aspects of society.

CRISPR-Cas9, short for clustered regularly interspaced short palindromic repeats (CRISPR)-CRISPR-associated protein 9 (Cas9), is the most widely used genome editor. The tool's power comes from its chemical mechanism: it cuts DNA at a site dictated by RNA-guided sequence recognition. CRISPR acts as a precise pair of molecular scissors that can cut a target DNA sequence, directed by a customizable guide. CRISPR allows scientists to rewrite the genetic code in almost any organism. It is simpler, cheaper, and more precise than previous gene editing techniques.

The CRISPR system is made up of two key parts: a CRISPR-associated (Cas) nuclease, which binds and cuts DNA, and a guide RNA sequence (gRNA), which directs the Cas nuclease to its target. It was discovered in bacterial immune systems, where it cuts the DNA of invading viruses and disables them. Once the molecular mechanism for its DNA-cleaving ability was discovered, it was quickly developed as a tool for editing genomes.
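As a toy illustration of that targeting logic (the DNA sequence below is invented, and real guide design also weighs off-target risk, GC content and cut position, none of which is modelled here), a short script can scan for 20-nucleotide protospacers sitting immediately upstream of an 'NGG' PAM, the motif that SpCas9 requires next to its cut site:

```python
# Illustrative only: a toy scan for candidate SpCas9 target sites (20-nt protospacer
# followed by an 'NGG' PAM) in a made-up DNA sequence.
import re

def find_cas9_sites(seq, protospacer_len=20):
    """Return (protospacer, PAM, position) for every NGG PAM with room for a guide upstream."""
    seq = seq.upper()
    sites = []
    for m in re.finditer(r"(?=([ACGT]GG))", seq):   # lookahead finds overlapping PAMs
        pam_start = m.start(1)
        if pam_start >= protospacer_len:
            protospacer = seq[pam_start - protospacer_len:pam_start]
            sites.append((protospacer, seq[pam_start:pam_start + 3], pam_start))
    return sites

# Hypothetical target region (invented sequence, not a real gene).
region = "ATGCTAGCTAGGCTTACGATCGATCGTTAGCGATCGATCGAATGGCTAGCTAACGGT"

for guide, pam, pos in find_cas9_sites(region):
    print(f"guide RNA target: {guide}  PAM: {pam}  at position {pos}")
```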

The first examples of engineered CRISPR-Cas involved transcriptional repression or activation to silence or up-regulate specific genes. Other forms of engineered Cas9 are fused to enzymes that enable individual nucleobase editing, chromatin modification, or sequence insertion. Other Cas proteins, including RNA-targeting proteins, have been explored as genome-modifying tools, enabled by discovery efforts and extensive biochemical and structural characterization. Some of these enzymes have also been harnessed for the development of imaging methods and diagnostic approaches.


CRISPR advances

The past decade has witnessed the discovery, engineering, and deployment of RNA-programmed genome editors across many applications, including agriculture, healthcare, and other industries. CRISPR technology has enabled agricultural advances, including slick-coat cattle, red sea bream that grows larger, tiger puffer with increased appetite, high-oleic soybean, tomato with increased gamma-aminobutyric acid (GABA), high-starch maize, and reduced browning banana.

  • March 2019: High Oleic Soybean Oil called CalynoTM, developed by experts from Calyxt, Inc., became the first-ever gene-edited food product to successfully undergo review by the USDA and FDA and be commercialized in the U.S.
  • September 2021: Japanese startup Regional Fish Co., Ltd. began selling a gene-edited "Madai" red sea bream. The fish was developed using CRISPR gene editing technology to knock out a protein that suppresses muscle growth, and it yields about 1.2 to 1.6 times as much edible meat as a conventional red sea bream.
  • September 2021: Sanatech Seed Co., Ltd. and Pioneer EcoScience Co, Ltd. started the sale of their genome-edited tomatoes with increased GABA called Sicilian Rouge High GABA. The tomato was edited using CRISPR-Cas9 technique to contain four to five times more GABA, an amino acid believed to aid relaxation and help lower blood pressure.
  • October 2021: Regional Fish Co., Ltd. developed a genome-edited tiger pufferfish using CRISPR gene editing technology. The popular pufferfish known as “torafugu” was edited to increase the speed of growth.
  • March 2022: The United States Food and Drug Administration (FDA) cleared short-haired genome-edited cattle known as PRLR-SLICK cattle for meat production and human consumption. The FDA determined the product to be low-risk and found that it does not raise any safety concerns.
  • March 2023: Japan approved a high-starch maize variety, the fourth genome-edited food product that Japan did not subject to regulations for genetically engineered crops. The waxy gene in this maize variety was deleted using CRISPR-Cas9 technology to increase the amylopectin proportion of its starch to almost 100%.
  • April 2023: Tropic, a pioneering agricultural biotechnology company in the United Kingdom, announced that their reduced browning gene-edited banana was determined to be a non-GMO by the Philippines Department of Agriculture-Bureau of Plant Industry. This banana is the first gene-edited product to go through the Philippines’ gene editing regulatory process.

Various applications of CRISPR technology have provided the foundation for clinical trials of therapies to treat sickle cell disease (approved by US FDA in December 2023), beta-thalassemia, the degenerative disease transthyretin (TTR) amyloidosis, and congenital eye disease, as well as planned clinical trials for both rare (progeria, severe combined immunodeficiency, familial hypercholesterolemia) and common (cancer, HIV infection) diseases.


Outlook for CRISPR and its applications

In the decade ahead, genome editing research and applications will continue to expand and will intersect with advances in other technologies, such as machine learning, live cell imaging, and sequencing. A combination of discovery and engineering will diversify and refine the CRISPR toolbox to combat current challenges and enable more wide-ranging applications in both fundamental and applied research. Just as during the advent of CRISPR genome editing, a combination of scientific curiosity and the desire to benefit society will drive the next decade of innovation in CRISPR technology.

Clement Dionglay is a Project Associate at International Service for the Acquisition of Agri-biotech Applications (ISAAA). Find Clement on X @the_archer

A version of this article was originally posted at International Service for the Acquisition of Agri-biotech Applications (ISAAA) and is reposted here with permission. Any reposting should credit both the GLP and original article. Find ISAAA on X @isaaa_org

Global ‘longevity hotspots’: What’s the secret of these blue zones?

Ageing is an inevitable part of life, which may explain our strong fascination with the quest for longevity. The allure of eternal youth drives a multi-billion pound industry of anti-ageing products, supplements and diets aimed at those hoping to extend their lifespan.

If you look back to the turn of the 20th century, average life expectancy in the UK was around 46 years. Today, it’s closer to 82 years. We are in fact living longer than ever before, possibly due to medical advancements and improved living and working conditions.

But living longer has also come at a price. We’re now seeing higher rates of chronic and degenerative diseases – with heart disease consistently topping the list. So while we’re fascinated by what may help us live longer, maybe we should be more interested in being healthier for longer. Improving our “healthy life expectancy” remains a global challenge.

Interestingly, certain locations around the world have been identified as having a high proportion of centenarians who display remarkable physical and mental health. The AKEA study of Sardinia, Italy, for example, identified a "blue zone" (so named because it was marked with blue pen), where a higher number of locals living in the central-eastern mountainous areas had reached their 100th birthday compared with the wider Sardinian community.

This longevity hotspot has since been expanded, and now includes several other areas around the world which also have greater numbers of longer-living, healthy people. Alongside Sardinia, these blue zones are now popularly recognised as: Ikaria, Greece; Okinawa, Japan; Nicoya, Costa Rica; and Loma Linda, California.

Other than their long lifespans, people living in these zones also appear to share certain other commonalities, which centre around being part of a community, having a life purpose, eating nutritious, healthy foods, keeping stress levels low and undertaking purposeful daily exercise or physical tasks.

Their longevity could also relate to their environment, being mostly rural (or less polluted), or because of specific longevity genes.

However, studies indicate genetics may only account for around 20-25% of longevity – meaning a person’s lifespan is a complex interaction between lifestyle and genetic factors, which contribute to a long and healthy life.


Is the secret in our diet?

When it comes to diet, each blue zone has its own approach – so one specific food or nutrient does not explain the remarkable longevity observed. But interestingly, a diet rich in plant foods (such as locally-grown vegetables, fruits and legumes) does appear to be reasonably consistent across these zones.

For instance, the Seventh-day Adventists of Loma Linda are predominately vegetarian. For centenarians in Okinawa, high intakes of flavonoids (a chemical compound typically found in plants) from purple sweet potatoes, soy and vegetables, have been linked with better cardiovascular health – including lower cholesterol levels and lower incidences of stroke and heart disease.

In Nicoya, consumption of locally produced rice and beans has been associated with longer telomere length. Telomeres are the structural part at the end of our chromosomes which protect our genetic material. Our telomeres get shorter each time a cell divides – so get progressively shorter as we age.

Certain lifestyle factors (such as smoking and poor diet) can also shorten telomere length. It’s thought that telomere length acts as a biomarker of ageing – so having longer telomeres could, in part, be linked with longevity.

But a plant-based diet isn’t the only secret. In Sardinia, for example, meat and fish is consumed in moderation in addition to locally grown vegetables and traditional foods such as acorn breads, pane carasau (a sourdough flatbread), honey and soft cheeses.

Also observed in several blue zone areas is the inclusion of olive oil and wine (in moderation – around 1-2 glasses a day), as well as tea. All of these contain powerful antioxidants which may help protect our cells from damage as we age.

Perhaps then, it’s a combination of the protective effects of various nutrients in the diets of these centenarians, which explains their exceptional longevity.

Another striking observation from these longevity hot spots is that meals are typically freshly prepared at home. Traditional blue zone diets also don’t appear to contain ultra-processed foods, fast foods or sugary drinks which may accelerate ageing. So maybe it’s just as important to consider what these longer-living populations are not doing, as much as what they are doing.

There also appears to be a pattern of eating until 80% full (in other words, partial caloric reduction). This could also be important in supporting how our cells deal with damage as we age, which could mean a longer life.

Many of the factors making up these blue zone diets – primarily plant-based and natural whole foods – are associated with lower risk of chronic diseases such as heart disease and cancer. Not only could such diets contribute to a longer, healthier life, but could support a more diverse gut microbiome, which is also associated with healthy ageing.

Perhaps then we can learn something from these remarkable centenarians. While diet is only one part of the bigger picture when it comes to longevity, it’s an area we can do something about. In fact, it might just be at the heart of improving not only the quality of our health, but the quality of how we age.

Justin Roberts is a Professor of Nutritional Physiology at Anglia Ruskin University. Find Justin on X @drjustinroberts

Joseph Lillis is a PhD Candidate in Nutritional Physiology at Anglia Ruskin University. Find Joseph on X @LilJoeLill

Mark Cortnage is a Senior Lecturer in Public Health and Nutrition, Anglia Ruskin University. Find Mark on X @DrDiet14

A version of this article was originally posted at the Conversation and is reposted here with permission. Any reposting should credit both the GLP and original article. The Conversation can be found on X @ConversationUS

GLP podcast: Oops — heirloom seed company markets GM tomato; Anti-biotech movement in retreat; Bill Gates does more harm than good?

An heirloom seed company that prides itself on selling GMO-free products mistakenly put a genetically engineered tomato on the cover of its 2024 catalog. The anti-GMO movement appears to be in permanent retreat. Is the end of the "frankenfood" scare close? Bill Gates and other billionaire philanthropists do a lot of good around the world, but their critics say this generosity has a dark side.


Join hosts Dr. Liza Dunn and GLP contributor Cameron English on episode 257 of Science Facts and Fallacies as they break down these latest news stories:

Heirloom seed company Baker Creek put a purple-fleshed tomato on the cover of its 2024 catalog—only to later discover that it is almost certainly genetically engineered. That’s according to Norfolk Healthy Produce, which recently commercialized its bioengineered, high-antioxidant Purple Tomato.  Norfolk says the “remarkable similarity” between the two plant varieties spurred laboratory testing which “supports the fact that the only reported way to produce a purple-fleshed tomato rich in anthocyanin antioxidants is with Norfolk’s  patented technology.” The discovery provoked an angry Facebook post from Baker Creek and raised some awkward questions about marketing GMO-free seeds.

The anti-GMO movement was a cultural force to be reckoned with a decade ago, launching dozens of GMO labeling campaigns across the US and drawing thousands of people to "march against Monsanto" protests in major cities. Today, though, the once-powerful anti-biotech campaign appears to be in retreat as it loses critical regulatory battles in Europe and public interest in its cause wanes. What led to this development? Let's examine five possible explanations for why anti-GMO activism fell by the wayside.


The Gates Foundation seems to have been a force for good in the world since its founding in 1994. But not everyone is convinced. While the Foundation’s efforts to increase access to vaccines and boost food security around the globe are laudable, critics allege that such billionaire-backed philanthropies do more harm than good by siphoning money away from government-led efforts to address poverty. Is there any truth to this concern?

Dr. Liza Dunn is a medical toxicologist and the medical affairs lead at Bayer Crop Science. Follow her on X @DrLizaMD

Cameron J. English is the director of bio-sciences at the American Council on Science and Health. Visit his website and follow him on X @camjenglish

Viewpoint: ‘Regulatory vigilantes’ — How former government scientists who are now high-paid ‘expert witnesses’ for predatory law firms use mass tort litigation to sidestep science

The regulatory risk management process has allowed policymakers to govern over the last 60 years during a time of great technological and industrial development. The process begins with a risk assessment, where government regulatory scientists gather evidence on an innovative process, substance or product. Measuring exposure to harms, potential benefits and risk reduction measures, they draw up a number of scenarios from which the risk managers make a decision (taking into account other elements like jobs, economic impacts, societal values and potential for innovative developments). If the risk management process cannot protect populations from exposure to serious harm, then precaution should be taken. See the seven steps below.

Credit: The Firebreak

Today, though, a group of regulatory vigilantes is attempting to overturn the traditional risk management process. Many of them, like Bernard Goldstein or Christopher Portier, were government-employed regulatory scientists in the 70s, 80s or 90s, when the scientific evidence in their portfolios was clear (eg, on exposure risks from tobacco, chemicals, pesticides…). As they prepared their dossiers and submitted sound policy advice, they would then have to stand aside and watch industry lobbyists come in at the last minute, disrupt their work and keep products on the market in return for political favors or by playing one agency against another.

Now that these regulatory scientists are retired, they are bitter over what transpired during their careers.


Adversarial regulation

The concept of “adversarial regulation” relies on using the courts via mass tort litigation to change the way people and companies act rather than trying to change laws and public policies. This approach acts to counter what these former regulatory scientists perceive as the failed regulatory risk management process. See my analysis of the different adversarial regulation documents.

Credit: The Firebreak

As the La Jolla Playbook shows, this litigious strategy was what finally changed the game with Big Tobacco. If you tried to protect consumers via the regulatory risk management process, lobbyists would simply overturn your evidence with a political donation or a friendly director in high places. But if you bring a company to its knees via mass tort litigation, you have its attention (or the attention of its shareholders) and its willingness to comply.

But there is just one little problem with this alternative adversarial regulatory strategy: It is not very democratic.

Actually, there are many more problems with this approach. Scientists may disagree over the actual level of risk (as seen with cases like glyphosate and talc), tort law firms may cut a few research integrity corners to get their hands on a pot of gold, and other companies and industries might use this motley crew to handcuff the competition and gain market advantage (eg, the organic food or renewables industries). The noble quest for justice and consumer protection has quickly been blurred by the seedy dash for cash and unseemly special interests.

Adversarial regulator: Bernard Goldstein

Bernard Goldstein, one-time EPA Assistant Administrator for Research and Development during the Reagan administration, has been the leading advocate for a “legal” and “adversarial” approach to science-based regulatory issues. Drawing on his base of Collegium Ramazzini research fellows, many of whom are retired US regulatory scientists now serving as highly-paid litigation consultants to US tort lawyers, he has pushed to move scientific debates into the courtroom (where non-specialists on a jury determine whether a product should be on the market or a company has a right to exist).

The threat of incessant litigation has been identified as a more powerful policymaking tool in the US than the European use of the precautionary principle. Goldstein, in correspondence with a Ramazzini colleague, Kurt Straif (see image below), once referred to adversarial regulation as a “reliance on post-hoc litigation as a means to put industry in a preventative mode”. What does this “preventative mode” mean? That companies will abandon certain products or markets? That they will ensure product safety and clear risk communication? This used to be the role of regulatory risk managers, but it seems to have been replaced by a thug-like process in which greedy tort opportunists bully selected innovators into submission.

Is adversarial regulation really a better approach? Despite his long policy career, Bernard Goldstein does not seem to understand that research and data, at any level of involvement with politics, will get messy. Seeing how Trump weaponized the EPA with his political network can be frightening, as Goldstein recently wrote, but these momentary interruptions do not change the scientific facts or the methodology of self-correction. Goldstein used Trump as a reason to propose moving the risk assessment process further into the legal arena. But as this La Jolla series has shown with the recent US tort cases on glyphosate and talc, moving the process into the courtroom and leaving decisions on product safety up to non-specialist juries is even messier. The only reason this policy shift would be appealing is if you are more triggered by industry scientists than by populist preachers.

[Figure: Using IARC to produce documents so Goldstein’s lawyers can sue industry. Credit: The Firebreak]

Goldstein, of course, has been caught with his pants down, using his connections to get the International Agency for Research on Cancer (IARC) to hold a third (unscheduled) working group and produce a monograph linking benzene exposure to non-Hodgkin lymphoma, because the law firms he was working with needed a research document to justify a series of lawsuits making that claim (see image above). In an interesting exchange following my exposé, Goldstein indicated that he saw nothing wrong with his actions, nor did he accept that his remuneration and conflicts of interest might have led him to act unscientifically.

Adversarial regulation in La Jolla

The La Jolla Playbook has little respect for the regulatory risk management process or, for that matter, the democratic policymaking approach. It achieves political results by exploiting the greed and opportunism of all the interest groups involved. Having self-interested litigation consultant scientists publish papers that try to legitimize this adversarial approach as more effective than the democratic regulatory process adds value to the activist ambitions of the La Jolla group.

The Ramazzini-focused circle of retired regulatory scientists provides fertile ground of high-level expertise for tort law firms attacking big corporations. They fervently believe that a small group of individuals should, behind closed doors, be able to change policies that affect millions of people. They also bring to the conversation a high level of rage and resentment over their personal career failures.

Vengeance is a great motivator, and opportunists in the tort law industry have been quick to capitalize on it.

David Zaruk is the Firebreak editor, and also writes under the pen-name The Risk Monger. David is a retired professor, environmental-health risk analyst, science communicator, promoter of evidence-based policy and philosophical theorist on activists and the media. Find David on X @Zaruk

A version of this article was originally posted at The Firebreak and has been reposted here with permission. Any reposting should credit the original author and provide links to both the GLP and the original article. Find Firebreak on Twitter @the_firebreak
