Podcast: Sierra Club endorses biotech chestnut tree; GM salmon coming this April? Downside of genetic engineering

Historically a vocal opponent of genetically engineered crops, the Sierra Club has endorsed the release of a disease-resistant, genetically modified chestnut tree into America’s forests. US consumers could be eating GM salmon as soon as April 2021. Farmers have been growing biotech crops for over two decades. The technology has provided many benefits, but they have not come without costs.

Join geneticist Kevin Folta and GLP editor Cameron English on this episode of Science Facts and Fallacies as they break down these latest news stories:

After decades of delays caused by skittish regulators, advocacy groups and politicians, biotech firm AquaBounty is poised to finally commercialize its genetically engineered AquAdvantage salmon in the US. The salmon grows quicker than its wild relatives, consuming fewer resources and therefore cutting the environmental impact of the global fishing industry. Are there any remaining obstacles that could further delay the salmon’s release?

Insect- and herbicide-tolerant GM crops fundamentally reshaped the agricultural landscape in the 1990s. They helped many farmers cut their pesticide use, reduce production costs and greenhouse gas emissions. But these benefits came at a price, namely insects and weeds that began rapidly developing resistance to the chemicals designed to kill them. As the pests evolved, so did the crops and weedkillers engineered to control them. The battle is ongoing, with farmers and scientists plotting their next steps in this arms race.

“[I]t has become possible to directly edit genes and regulatory elements at the molecular level … which present new risks. Indeed, the annual worldwide threat assessment report of the U.S. intelligence community (2016) added gene editing to its list of ‘weapons of mass destruction.’” So says the Sierra Club of new breeding techniques, including CRISPR-Cas9. The environmental group also has a long history of opposing transgenic (GMO) crops, mostly because they were initially developed as for-profit products by big companies like Monsanto.

Despite this general stance against crop biotechnology, the Sierra Club recently endorsed the planting of chestnut trees engineered to survive a deadly pathogen that has all but eliminated the tree from America’s forests. Considering the promise it holds, the Club noted in a recent article, the GM chestnut tree is worth the (mostly hypothetical) risks it poses. Could this signal a change in how environmental groups think about genetic engineering?

Subscribe to the Science Facts and Fallacies Podcast on iTunes and Spotify.

Kevin M. Folta is a professor in the Horticultural Sciences Department at the University of Florida. Follow Professor Folta on Twitter @kevinfolta

Cameron J. English is the GLP’s managing editor. Follow him on Twitter @camjenglish

Credit: Shutterstock

Climate change vs agriculture: Can one farming method—conventional, organic or agroecology—help slow global warming?

Climate change and agriculture appear joined at the hip. Agriculture—through methane, carbon dioxide, and land use—has had an impact on the global climate. And climate change—through drought, heat, and pests—will have an impact on agriculture, reducing yields and making farming more difficult.

But is there one type of agriculture—conventional, organic, “agroecological,” or with the use of genetic engineering—that’s more sustainable? Considering that all agriculture involves land use changes and other environmental alterations that lead to climate change, the answer is—yes, but with caveats.

Editor’s note: This is part one of a two-part series on climate change and agriculture.

The Intergovernmental Panel on Climate Change (IPCC) has reported that, between 1850 and today, “Human-induced warming reached approximately 1°C (likely between 0.8°C and 1.2°C) above pre-industrial levels in 2017, increasing at 0.2°C (likely between 0.1°C and 0.3°C) per decade.” The IPCC recommends holding further increases so that overall warming does not exceed 1.5°C above pre-industrial levels. However, climate scientists (including the IPCC) estimate that under current industrial (and agricultural) activity, global temperatures could rise between 1.5°C and 5.5°C over the next 50 to 75 years. The Paris Agreement aims to keep the rise well below 2.0°C, ideally to no more than 1.5°C.

Agriculture is a significant part of the “human-induced” portion of warming. Estimates show agriculture contributes anywhere between nine and 25 percent of the greenhouse gases driving climate change (the largest contributor is transportation, at about 22 percent). Other estimates show that agriculture, including rice cultivation, accounts for 48 percent of methane and 52 percent of nitrous oxide emissions.


In the US, the Department of Agriculture predicts that climate change, in turn, will have an overall deleterious impact on crops and livestock. According to the Fourth National Climate Assessment:

While some regions (such as the Northern Great Plains) may see conditions conducive to expanded or alternative crop productivity over the next few decades, overall, yields from major U.S. crops are expected to decline as a consequence of increases in temperatures and possibly changes in water availability, soil erosion, and disease and pest outbreaks.

For small-scale farmers in developing countries, the effects of rising temperatures look worse. “Extreme weather events such as droughts, heat waves and flooding have far-reaching implications for food security and poverty reduction, especially in rural communities with high populations of small-scale producers who are highly dependent on rain-fed agriculture,” wrote an international team from Cornell University, CIMMYT in Mexico, the USDA and the World Bank. “Climate change is expected to reduce yields of staple crops by up to 30% due to lower productivity and crop failure.”

Moreover, the Paris goal of limiting the temperature rise to 1.5 to 2.0 degrees may be impossible to meet without changes in food supplies, including agriculture. An Oxford University-led study published in Science stated that major changes in the “food system” will be necessary—the main components they identify as part of this system are:

  • Land clearing and deforestation; carbon dioxide (CO2) and nitrous oxide (N2O)
  • Production and use of fertilizers and other agrichemicals; CO2, N2O, and methane (CH4)
  • Enteric fermentation during production of cows, sheep, and goats; CH4
  • Production of rice paddies; CH4
  • Livestock manure; N2O and CH4
  • Fossil fuels in food production and supply chains; CO2.

A technological crossroads

So, what’s a poor farmer to do? Or even a rich one?

Obviously, it’s necessary to modify production and farming management to mitigate the contribution of greenhouse gases. These modifications may include, as the Cornell-led team wrote:

Adjusting planting time, supplementing irrigation (when possible), intercropping, adopting conservation agriculture, accessing short- and long-term crop and seed storage infrastructure, and changing crops or planting more climate resilient crop varieties.

There are three basic ways farmers can achieve these, and politics has started to make them mutually exclusive: organic, conventional (with genetic engineering), and conventional (without GMOs). Another category, “agroecology,” is either a real, science-based approach or a “fightin’ word,” depending on whom you ask.


A recent paper in Nature Communications by a German team claimed that, once its products are priced correctly, organic farming is just as sustainable (in terms of reducing greenhouse gas emissions) as conventional farming, particularly for livestock. Animal products, the authors noted, carried the highest “external” greenhouse gas costs under both organic and conventional methods, and the lowest costs came from organic plant-based products. So, is organic better for GHG emissions overall? The paper implies “yes.”

But other scientists say, “Except for animals, probably not.”

Credit: Science ABC

The Nature paper, along with non-academic initiatives such as the European Union’s “Farm to Fork” strategy and Green Deal, which favor organic over conventional (and GM) farming for reducing greenhouse gas emissions, ignores several key elements of GHG emissions:

  • Yield gaps. Organic farming’s yields lag up to 40 percent behind conventional (with or without GM), meaning that to feed the world solely with organic food, more land would have to be converted to farmland. “Lower yields in organic production cause land-use change effects with significant carbon opportunity costs, which are typically not reflected in such calculations,” said Matin Qaim, Professor of International Food Economics and Rural Development at the University of Göttingen.
  • Sequestration of carbon. Accounting for output, “but not the CO2 that plants sequester…would raise the cost of organic production, as conventional here (GM) is done using zero-till, so the sequestration values would be higher,” said Stuart Smyth, associate professor of agriculture and bioresources at the University of Saskatchewan. (However, organic farming’s use of cover crops could boost carbon sequestration.)
  • The impact of pests. “I do not see anything in this report that mentions organic farming solutions to difficult pests and pathogens. The idea of making these price adjustments also seems a bit surreal–how will poor EU citizens manage?” asked Kathleen Hefferon, microbiology researcher at Cornell University’s College of Agriculture.

In general, Qaim said, organic loses to conventional (especially when factoring in GM innovations) for GHG emissions.

All the data I know suggest that organic farming is not the right strategy to reduce global GHG emissions. When the land-use change effects are factored in, organic farming can even have higher global GHG emissions than conventional alternatives (which is even more true when we consider the development and use of new breeding technologies, which are banned in organic farming).

But some more scientifically tested aspects of agroecology (or, as some call it, “regenerative ag”) could help blunt the production of GHGs—reduced tilling, cover crops, crop rotation and targeted planting of perennials.

Just how much can any new techniques—genetic engineering and gene editing, conventional farming, and even agroecology—improve agriculture’s carbon footprint? We’ll cover that in Part 2.

Andrew Porterfield is a writer, editor and communications consultant for academic institutions, companies and non-profits in the life sciences. He is based in Camarillo, California. Follow Andrew on Twitter @AMPorterfield

Viewpoint: FDA’s burdensome animal gene-editing rules hinder innovation. USDA takeover could spur progress

Over the last two months, the USDA made two revolutionary moves regarding the regulation of genetically engineered (GE) animals: it moved to take over their regulation from the FDA, and it outlined its proposed rules for overseeing them.

Aside from the dramatic inter-agency politics, loosening the regulation of GE organisms is better than the status quo, which excessively inhibits innovation. But the proposed rule doesn’t go far enough.

USDA’s proposed regulations would allow more GE animals to be developed and commercialized, an important step to addressing livestock’s environmental and animal welfare problems. Current regulation of GE animals under FDA is overly intensive — rather than proportionate to the actual risk of negative consequences — wasting government resources and making it excessively burdensome for the developers of GE animals to bring them to market. Compared to GE plants, which are already regulated by USDA with at least 131 approved (specifically, deregulated) since 1992, FDA has only ever approved two GE animals.

USDA’s proposed regulation supports innovation in animal breeding, which is key to reducing animal agriculture’s environmental impacts even as demand for animal products continues rising. The agricultural sector produces 9-10 percent of total US greenhouse gas emissions, with at least 42 percent of that coming from animal agriculture alone. A not-yet-commercialized GE dairy cow resistant to udder infection stays healthier, potentially allowing cows to produce more milk and thereby reducing emissions per gallon of milk.

Using genome editing and other modern breeding methods can also benefit animal welfare. Agricultural animals often suffer greatly under production conditions, such as dairy cows that undergo a brutal de-horning process. A not-yet-commercialized GE cow avoids this problem by never growing horns.

Credit: Hannah Walker Smith/Cornell Alliance for Science

In contrast to FDA’s overly cautious regulatory approach, which includes the same intensive review for every GE animal, USDA’s proposed rule has two levels of safety review for potential risks to human and animal health: an expedited one for GE animals with changes that mimic naturally-occurring ones, and a full safety review for all other GE animals. Using genetic engineering to mimic naturally-occurring animal traits may sound pointless; however, it is a quicker way to combine traits from two different breeds than conventional breeding and usually achieves indistinguishable results.

While the proposed rule is a huge improvement in the regulation of GE animals, it doesn’t go far enough.

First, USDA should regulate GE animal traits based on whether the product — like cows without horns — creates new risks, not on the process used to create it. The proposal, like USDA’s approach to regulating GE crops (SECURE), determines the depth of regulatory inquiry based on the type of genetic change rather than the resulting trait. This is a poor proxy for the risks associated with the resulting animal trait. In fact, the majority of plant biotechnology experts prefer product-based regulation. Under product-based regulation there could be an initial safety review, as SECURE has, to look for any way the trait could potentially harm animal or human health; if there is no plausible potential for harm, then the animal is deregulated. If there is, USDA conducts a further safety review to determine whether harm would actually result in practice.

Second, requiring any safety review — even an expedited one — for GE animals with changes that mimic naturally-occurring ones is unnecessary. If a genetic trait for disease resistance exists in some cows, then it doesn’t matter whether breeders use conventional methods or genetic engineering to combine it with traits for good milk production and make a healthier dairy cow. After all, there is no pre-market regulation for new conventionally-bred animals. Our system accepts some risk that conventional breeding may result in harmful traits because the risk has proven to be very low, and because we have many mechanisms for regulating animals and animal products once they’re on the market. USDA should take the same approach with animals as it takes with plants, exempting those that could have been produced using conventional breeding from pre-market regulation.


Compared to current FDA regulation of GE animals, USDA’s proposed regulation is more proportional to the potential risks involved, creates fewer unnecessary barriers to commercialization, and reduces government waste. It will allow more GE animals to pass through regulation and reach the market, providing producers with additional tools to mitigate the environmental impacts and welfare challenges of animal agriculture. However, it doesn’t go far enough and should be more similar to USDA’s SECURE rule for GE plants.

Emma Kovak is a Food and Agriculture Analyst at the Breakthrough Institute. She has been involved in communication and science policy surrounding genetic engineering since the beginning of graduate school, and more recently with the Genetic Literacy Project. Emma received her PhD in Plant Biology from the University of California, Berkeley. Find Emma on Twitter @EmmaKovak

A version of this article was originally posted at the Breakthrough Institute and has been reposted here with permission. The Breakthrough Institute can be found on Twitter @TheBTI

Credit: Shutterstock

‘Clean’ seafood? The global struggle to make aquaculture a sustainable source of protein

Surprising to many, over half of the fish we eat comes from fish farms. But some farms, particularly in parts of Asia, operate in a hazy, underground seafood supply chain with little to no health or safety standards and zero traceability. With our rising need for fish, are we willing to pay more for high-quality, transparent aquaculture production?

While fish is still a secondary protein choice in the U.S., coming in behind poultry, beef, and pork, the industry is projected to grow. How can we ensure our seafood comes from sustainable fish farms, like those in Europe, Canada, New Zealand, and Australia? And how will domestic production respond to this need for more local, sustainable, and traceable ways to farm fish?

When it comes to seafood, the average American diet is about as limited as it gets.

We eat salmon, crabs, lobsters, shrimp, and scallops, and that’s about it for most people, with some pollock and cod thrown in for good measure. You’re likely to get far more variety in a bowl of seafood stew at a restaurant than you do in the typical American’s seafood diet.

Not only do we lack variety, but volume as well. According to the National Oceanic and Atmospheric Administration, the average American consumed a little over 16 pounds of seafood in 2018. Although that figure has been steadily rising for several years, it still pales in comparison to the 94 pounds of chicken, 58 pounds of beef, and 52 pounds of pork we consume every year.


The Food Marketing Institute’s 2019 Power of Seafood Survey found that 56% of Americans eat seafood twice a month. Freshness, flavor, and information about the product all play a major role in the decision to buy a piece of fish at the grocery store, along with an understanding of how to cook and enjoy it once they get home.

Some recent studies have suggested that a large and growing segment of the U.S. is interested in eating more shellfish and finfish (the industry term for fish like salmon and cod), provided they can find it at a price and quality they expect. This aligns with the EAT-Lancet Commission, which fully supports increasing seafood consumption as part of a healthy diet – as long as it’s sustainable.

Farmed fish – a “cleaner” option?

“The global demand for fish protein in people’s diets is growing and will continue to grow,” says Jacob Bartlett, CEO of Whole Oceans, a company raising sustainable Atlantic salmon in land-based facilities in Maine.

Companies like Whole Oceans hope to benefit from the rise of responsible fish farming by offering cleaner, more sustainable seafood than is commonly available from today’s aquaculture producers.


There has long been a divide between farmed and wild-caught seafood. Many consumers perceive wild-caught products as “cleaner” and more sustainable. Farmed fish, by contrast, has a reputation for dirty pens, sick fish, and an overuse of antibiotics to compensate. In reality, though, wild-caught fish is fraught with sustainability issues, and farmed fish can be a clean alternative – depending on its country of origin.

To be fair, there are environmental pros and cons to both wild and farmed seafood. Though wild-caught fish require fewer resources, they are not a long-term alternative – 90% of wild fish stocks are either fully fished or overfished. Aquaculture – done safely and sustainably – is a great way to support a healthy diet and a healthy environment.

An unsustainable system

As of 2018, wild-caught and aquaculture (farmed) seafood each made up roughly half of the world’s fish consumption. But that balance is expected to tilt increasingly toward farmed fish, which is easier to scale and overall more sustainable. Major grocers like Whole Foods and Trader Joe’s have been actively promoting its benefits to their customers.


But it hasn’t always been this way.

Aquaculture, or fish farming, was a $169 billion global industry as of 2015, and that’s on track to exceed $242 billion by 2022. It produces more than 80 million metric tons of fish annually from some 580 aquatic species and employs roughly 26 million workers around the world.

From 1990 to 2018, we’ve experienced:

  • a 14% rise in global capture fisheries production
  • a 527% rise in global aquaculture production
  • a 122% rise in total food fish consumption

Despite its size, however, the industry is largely concentrated in central and southeast Asia, with China dominating overall production, followed by Vietnam, India, Thailand, and others. According to the Food and Agriculture Organization of the United Nations, the Asia-Pacific region accounts for more than 85% of all aquaculture production, followed by Africa at 10% and Latin America and the Caribbean at 4%.

The U.S. ranks surprisingly low in aquaculture production, at 17th in the world as of 2017, which equates to just 0.2% of total production. This is perhaps no surprise, given that the U.S. imports more than 80% of the seafood we eat. Most of the imports – in order of volume – are shrimp, Atlantic salmon, tilapia, and shellfish.


Ninety percent of the shrimp that Americans eat is imported from facilities in southeast Asia, while the vast majority of the farmed tilapia sold in this country comes from Latin America and Asia. Canada, Norway, Scotland, and Chile supply most of the salmon. Typically, smaller, more resilient species – tilapia, carp, trout, and salmon – are farmed, as their feed-to-growth ratios have been optimized through research and development.

No matter the farmed species, practices vary by country…and that’s part of the problem. Lax government oversight in countries like China, which account for the majority of aquaculture production, has created a two-tiered system: places like the U.S., Europe and Canada, where the industry operates under strict quality and environmental regulations, and places where regulations are under-enforced.

Murky farming practices

This part of the aquaculture industry has a well-earned reputation for environmental contamination, poor working conditions, and poor health conditions for its fish. With fish raised in large, open-water pools, many of these unregulated farms are hotbeds for disease and pollution, and the chemicals and antibiotics often used to control these problems leach into surrounding waters, affecting the local ecosystem and generally making matters worse.

That’s to say nothing of the working conditions at these facilities. For one thing, the parts of Southeast Asia – and now Africa – where much fish farming is conducted are a known hotbed for human trafficking, and a 2018 report by Human Rights Watch found widespread abuse in Thailand’s fishing industry, where migrants from all over the region are effectively sold into modern-day slavery.

The wild-caught seafood industry doesn’t have the safest labor practices, either. Because most fishing takes place in international waters, few regulations exist to keep the industry and its workers safe. It’s easy to exploit a vulnerable crew when out on the open seas for weeks at a time.


It’s a grim picture…no one wants to eat fish that was packed into filthy open-water pens, fed a diet of farm waste, and hopped up on antibiotics before being harvested, processed, frozen, and flown halfway around the world to market.

These practices raised questions as recently as 2012, when it was reported that fish fed a diet of pig waste were being sold on the U.S. market. Contaminants ranging from fish waste to antibiotic-enhanced feed, parasites, chemicals, and more have been known to leak out of open-water facilities, impacting wild populations in the area.

According to the World Wildlife Fund, farmed species can even escape from their pens and interbreed with local wild stocks, throwing off the gene pool and further spreading disease.

Higher quality comes at a price

Proper regulation and safe working conditions are costly, positioning quality fish against the prices that many of these international producers can offer for their seafood. Higher quality operators in the U.S. and elsewhere are finding it difficult to compete.

It’s no wonder that the idea of fresh, healthy seafood is so foreign to most Americans.

“Our seafood supply chain is worse than broken,” says Eric Pedersen, founder of Ideal Fish, which raises branzino in Connecticut in a sealed, land-based facility and hopes to bridge this gap by shortening the supply chain to reduce costs.

“We almost have no domestic seafood supplies. Almost everything we eat in the U.S. has been imported from abroad, flown thousands of miles, which means a tremendous diminution in the quality, freshness, and shelf life of the seafood.”

At the same time, Pedersen says, we often don’t even know where it’s coming from. Neither do the retailers we buy it from or the restaurants that prepare it.

“You walk into most grocery stores and go to the seafood counter and it’s a sad experience,” he says.

Traceability is another concern that domestic aquaculture providers are working to overcome. As it stands today, most people know very little about the seafood they eat. Wild-caught fish often goes straight from the boat to a wholesale fish market, either locally or in cities such as New York and Seattle, where most seafood enters the U.S.

From there, it can go anywhere, from restaurants to grocery suppliers, to meatpacking and more. Fish buyers are a knowledgeable bunch, often tasked by their employers to identify and purchase products that are fresh and healthy, but that information is lost once the fish is loaded onto trucks for the next step in the process.

Farming – a solution

By leveraging fish farming to source some of this fish, proponents hope to introduce new layers of traceability to this traditional system. Fish sourced from a particular facility and bound for a particular customer can be tagged and traced, from pool to plate, using everything from blockchain technology to direct sales, in ways that the fragmented fish supply chain never has before.

That’s why there has been a push in recent years for more aquaculture production in countries where it can be conducted with more regulatory oversight, such as Norway, Canada, and the U.S. (the U.S. Department of Agriculture oversees aquaculture operations in this country).

In May 2020, then-President Trump issued an executive order promoting American seafood competitiveness and economic growth, aiming to create jobs while eliminating illegal, unreported, and unsustainable wild-caught or farmed fish. The order also promoted offshore aquaculture as another path to sustainable fish, with NOAA developing two of the ten designated Aquaculture Opportunity Areas.

Seafood Watch, a program of the non-profit Monterey Bay Aquarium, is dedicated to helping consumers and businesses make choices for a healthy ocean. It even has a smartphone app consumers can use while food shopping.

Ryan Bigelow, Senior Program Manager with Monterey Bay, says, “There’s increasing interest in knowing more about our food, having local sources, and aquaculture could certainly fill that niche.”

He’s quick to admit that U.S.-based producers won’t be able to compete on price due to the costs associated with running sustainable, regulated facilities, but the truth is we as consumers should also be questioning our consumption habits.

“That $15 all-you-can-eat shrimp plate, how is that possible?” Bigelow asks. “What’s happening in those pens, on that production line, that makes it possible to raise an animal on the other side of the world and ship it over for less than it costs to grow here?”

As with many things, the COVID-19 outbreak brought this reality into stark relief. Why is the U.S. relying so much on a hazy, underground seafood supply chain involving thousands of international suppliers when the technology exists to farm fish safely and sustainably here at home, or in countries that take pride in their aquaculture production?

Due to the gross lack of safety standards in some of the countries we import from, the FDA in recent years has discovered chemicals, carcinogens, antibiotics (often expired), and pesticides in imported seafood. Even more alarming, the FDA inspects less than 1% of imported seafood, and of that 1%, it regularly rejects 50 to 60%.

What can you do?

While regulations are being updated to increase imported food inspections to ensure quality and efficacy, there are things we can do at the consumer level:

  • Check out Seafood Watch, the Monterey Bay Aquarium’s Seafood Guide – use their app at the seafood counter and see what you learn while you shop!
  • Look for packaging with Aquaculture Stewardship Council and Global Aquaculture Alliance labels that certify sustainable farms and seek out operations with best aquaculture practices worldwide
  • Beware of misleading statements on packaging, like “Prepared for” or “Packed by”, as this may not be the country of origin. Instead look for labels showing the fish are from the U.S., Canada, the European Union, Australia, or New Zealand as these countries have some of the safest seafood regulations.
  • Know your fish market! Buying from a local, trustworthy fishmonger can help to ensure the highest quality, as they will do the label and country sourcing for you.
  • Consider buying shrimp sourced from the U.S. and the Gulf of Mexico – it’ll be more expensive, but you can feel good about its quality and production.
  • Vary your seafood choices. Lower food-chain fish, like anchovies and sardines, are smaller and have had less time to accumulate contaminants than larger fish. Add farmed bivalve shellfish – oysters, clams, and mussels – to this list; eating lower-trophic farmed seafood is good for the environment and healthy for you.

The bottom line

While domestic aquaculture products will initially be more expensive, leaders in the industry are turning to proven systems to help scale up production. But we need to play a role as well — ask where your fish comes from and learn about seafood practices in various countries. Post-COVID, we’re going to demand local more than ever, and that goes for fish, too. Supporting local fisheries, or countries whose operations provide transparency and sustainability, will help the global seafood industry move forward.

Tim Sprinkle is a writer and editor based in Denver, Colorado, whose work has appeared in Wired, The Atlantic, Entrepreneur and many other national publications. He is also the author of “Screw the Valley: A Coast-to-Coast Tour of America’s New Tech Startup Landscape.” Find Tim on Twitter @timsprinkle

A version of this article was originally published at Dirt To Dinner and has been republished here with permission. Find Dirt To Dinner on Twitter @Dirt_To_Dinner

Human evolutionary timeline: Key moments in the emergence of our species

The long evolutionary journey that created modern humans began with a single step—or more accurately—with the ability to walk on two legs. One of our earliest-known ancestors, Sahelanthropus, began the slow transition from ape-like movement some six million years ago, but Homo sapiens wouldn’t show up for more than five million years. During that long interim, a menagerie of different human species lived, evolved and died out, intermingling and sometimes interbreeding along the way. As time went on, their bodies changed, as did their brains and their ability to think, as seen in their tools and technologies.

To understand how Homo sapiens eventually evolved from these older lineages of hominins, the group including modern humans and our closest extinct relatives and ancestors, scientists are unearthing ancient bones and stone tools, digging into our genes and recreating the changing environments that helped shape our ancestors’ world and guide their evolution.

These lines of evidence increasingly indicate that H. sapiens originated in Africa, although not necessarily at a single time and place. Instead it seems diverse groups of human ancestors lived in habitable regions around Africa, evolving physically and culturally in relative isolation, until climate-driven changes to African landscapes spurred them to intermittently mix and swap everything from genes to tool techniques. Eventually, this process gave rise to the unique genetic makeup of modern humans.

“East Africa was a setting in foment—one conducive to migrations across Africa during the period when Homo sapiens arose,” says Rick Potts, director of the Smithsonian’s Human Origins Program. “It seems to have been an ideal setting for the mixing of genes from migrating populations widely spread across the continent. The implication is that the human genome arose in Africa. Everyone is African, and yet not from any one part of Africa.”

New discoveries are always adding key waypoints to the chart of our human journey. This timeline of Homo sapiens features some of the best evidence documenting how we evolved.

550,000 to 750,000 years ago: The beginning of the Homo sapiens lineage

A facial reconstruction of Homo heidelbergensis, a popular candidate as a common ancestor for modern humans, Neanderthals and Denisovans. Credit: John Gurche

Genes, rather than fossils, can help us chart the migrations, movements and evolution of our own species—and those we descended from or interbred with over the ages.

The oldest-recovered DNA of an early human relative comes from Sima de los Huesos, the “Pit of Bones.” At the bottom of a cave in Spain’s Atapuerca Mountains scientists found thousands of teeth and bones from 28 different individuals who somehow ended up collected en masse. In 2016, scientists painstakingly teased out the partial genome from these 430,000-year-old remains to reveal that the humans in the pit are the oldest known Neanderthals, our very successful and most familiar close relatives. Scientists used the molecular clock to estimate how long it took to accumulate the differences between this oldest Neanderthal genome and that of modern humans, and the researchers suggest that a common ancestor lived sometime between 550,000 and 750,000 years ago.

Pinpoint dating isn’t the strength of genetic analyses, as the 200,000-year margin of error shows. “In general, estimating ages with genetics is imprecise,” says Joshua Akey, who studies evolution of the human genome at Princeton University. “Genetics is really good at telling us qualitative things about the order of events, and relative time frames.” Before genetics, these divergence dates were estimated by the oldest fossils of various lineages scientists found. In the case of H. sapiens, known remains only date back some 300,000 years, so gene studies have located the divergence far more accurately on our evolutionary timeline than bones alone ever could.
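For readers curious about the arithmetic behind such estimates, here is a minimal sketch of the molecular-clock logic in Python; the divergence fraction and mutation rate below are hypothetical placeholders chosen only for illustration, not the values used in the 2016 study.

```python
# Minimal molecular-clock sketch (illustrative only).
# Assumes a constant per-site, per-year substitution rate; real analyses
# model uncertainty in both the rate and the observed divergence, which is
# why published dates carry margins of error of hundreds of thousands of years.

per_site_divergence = 0.0008              # hypothetical fraction of genome sites that differ
substitutions_per_site_per_year = 0.5e-9  # hypothetical germline substitution rate

# Differences accumulate along both lineages since the split, hence the factor of 2.
years_since_common_ancestor = per_site_divergence / (2 * substitutions_per_site_per_year)

print(f"Estimated time to common ancestor: ~{years_since_common_ancestor:,.0f} years")
# -> Estimated time to common ancestor: ~800,000 years
```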

Though our genes clearly show that modern humans, Neanderthals and Denisovans—a mysterious hominin species that left behind substantial traces in our DNA but, so far, only a handful of tooth and bone remains—do share a common ancestor, it’s not apparent who it was. Homo heidelbergensis, a species that existed from 200,000 to 700,000 years ago, is a popular candidate. It appears that the African family tree of this species leads to Homo sapiens while a European branch leads to Homo neanderthalensis and the Denisovans.

More ancient DNA could help provide a clearer picture, but finding it is no sure bet. Unfortunately, the cold, dry and stable conditions best for long-term preservation aren’t common in Africa, and few ancient African human genomes have been sequenced that are older than 10,000 years.

“We currently have no ancient DNA from Africa that even comes near the timeframes of our evolution—a process that is likely to have largely taken place between 800,000 and 300,000 years ago,” says Eleanor Scerri, an archaeological scientist at the Max Planck Institute for the Science of Human History in Germany.

300,000 years ago: Fossils found of oldest Homo sapiens

Two views of a composite reconstruction of the earliest known Homo sapiens fossils from Jebel Irhoud. Credit: Philipp Gunz/MPI EVA Leipzig

As the physical remains of actual ancient people, fossils tell us most about what they were like in life. But bones or teeth are still subject to a significant amount of interpretation. While human remains can survive after hundreds of thousands of years, scientists can’t always make sense of the wide range of morphological features they see to definitively classify the remains as Homo sapiens, or as different species of human relatives.

Fossils often boast a mixture of modern and primitive features, and those don’t evolve uniformly toward our modern anatomy. Instead, certain features seem to change in different places and times, suggesting separate clusters of anatomical evolution would have produced quite different looking people.

No scientists suggest that Homo sapiens first lived in what’s now Morocco, because so much early evidence for our species has been found in both South Africa and East Africa. But fragments of 300,000-year-old skulls, jaws, teeth and other fossils found at Jebel Irhoud, a rich site also home to advanced stone tools, are the oldest Homo sapiens remains yet found.

The remains of five individuals at Jebel Irhoud exhibit traits of a face that looks compellingly modern, mixed with other traits like an elongated brain case reminiscent of more archaic humans. The remains’ presence in the northwestern corner of Africa isn’t evidence of our origin point, but rather of how widely spread humans were across Africa even at this early date.

Other very old fossils often classified as early Homo sapiens come from Florisbad, South Africa (around 260,000 years old), and the Kibish Formation along Ethiopia’s Omo River (around 195,000 years old).

The 160,000-year-old skulls of two adults and a child at Herto, Ethiopia, were classified as the subspecies Homo sapiens idaltu because of slight morphological differences including larger size. But they are otherwise so similar to modern humans that some argue they aren’t a subspecies at all. A skull discovered at Ngaloba, Tanzania, also considered Homo sapiens, represents a 120,000-year-old individual with a mix of archaic traits and more modern aspects like smaller facial features and a further reduced brow.

Debate over the definition of which fossil remains represent modern humans, given these disparities, is common among experts. So much so that some seek to simplify the characterization by considering them part of a single, diverse group.

“The fact of the matter is that all fossils before about 40,000 to 100,000 years ago contain different combinations of so called archaic and modern features. It’s therefore impossible to pick and choose which of the older fossils are members of our lineage or evolutionary dead ends,” Scerri suggests. “The best model is currently one in which they are all early Homo sapiens, as their material culture also indicates.”

As Scerri references, African material culture shows a widespread shift some 300,000 years ago from clunky, handheld stone tools to the more refined blades and projectile points known as Middle Stone Age toolkits.

So when did fossils finally first show fully modern humans with all representative features? It’s not an easy answer. One skull (but only one of several) from Omo Kibish looks much like a modern human at 195,000 years old, while another, found in Nigeria’s Iwo Eleru cave, appears very archaic but is only 13,000 years old. These discrepancies illustrate that the process wasn’t linear, reaching some single point after which all people were modern humans.

300,000 years ago: Artifacts show a revolution in tools

The two objects on the right are pigments used between 320,000 and 500,000 years ago in East Africa. All other objects are stone tools used during the same time period in the same area. Credit: Human Origins Program/NMNH/Smithsonian Institution

Our ancestors used stone tools as long as 3.3 million years ago and by 1.75 million years ago they’d adopted the Acheulean culture, a suite of chunky handaxes and other cutting implements that remained in vogue for nearly 1.5 million years. As recently as 400,000 years ago, thrusting spears used during the hunt of large prey in what is now Germany were state of the art. But they could only be used up close, an obvious and sometimes dangerous limitation.

Even as they acquired the more modern anatomy seen in living humans, the ways our ancestors lived, and the tools they created, changed as well.

Humans took a leap in tool tech with the Middle Stone Age some 300,000 years ago by making those finely crafted tools with flaked points and attaching them to handles and spear shafts to greatly improve hunting prowess. Projectile points like those Potts and colleagues dated to 298,000 to 320,000 years old in southern Kenya were an innovation that suddenly made it possible to kill all manner of elusive or dangerous prey. “It ultimately changed how these earliest sapiens interacted with their ecosystems, and with other people,” says Potts.

Scrapers and awls, which could be used to work animal hides for clothing and to shave wood and other materials, appeared around this time. By at least 90,000 years ago, barbed points made of bone—like those discovered at Katanda, Democratic Republic of the Congo—were used to spearfish.

As with fossils, tool advancements appear in different places and times, suggesting that distinct groups of people evolved, and possibly later shared, these tool technologies. Those groups may include other humans who are not part of our own lineage.

Last year a collection including sophisticated stone blades was discovered near Chennai, India, and dated to at least 250,000 years ago. The presence of this toolkit in India so soon after modern humans appeared in Africa suggests that other species may have also invented them independently—or that some modern humans spread the technology by leaving Africa earlier than most current thinking suggests.

100,000 to 210,000 years ago: Fossils show Homo sapiens lived outside of Africa

A skull found in Qafzeh, from the collection at the American Museum of Natural History. Credit: Wapondaponda/Wikipedia

Many genetic analyses tracing our roots back to Africa make it clear that Homo sapiens originated on that continent. But it appears that we had a tendency to wander from a much earlier era than scientists had previously suspected.

A jawbone found inside a collapsed cave on the slopes of Mount Carmel, Israel, reveals that modern humans dwelt there, alongside the Mediterranean, some 177,000 to 194,000 years ago. Not only are the jaw and teeth from Misliya Cave unambiguously similar to those seen in modern humans, they were found with sophisticated handaxes and flint tools.

Other finds in the region, including multiple individuals at Qafzeh, Israel, are dated later. They range from 100,000 to 130,000 years ago, suggesting a long presence for humans in the region. At Qafzeh, human remains were found with pieces of red ocher and ocher-stained tools in a site that has been interpreted as the oldest intentional human burial.

Among the limestone cave systems of southern China, more evidence has turned up from between 80,000 and 120,000 years ago. A 100,000-year-old jawbone, complete with a pair of teeth, from Zhirendong retains some archaic traits like a less prominent chin, but otherwise appears so modern that it may represent Homo sapiens. A cave at Daoxian yielded a surprising array of ancient teeth, barely distinguishable from our own, which suggest that Homo sapiens groups were already living very far from Africa from 80,000 to 120,000 years ago.

Even earlier migrations are possible; some believe evidence exists of humans reaching Europe as long as 210,000 years ago. While most early human finds spark some scholarly debate, few reach the level of the Apidima skull fragment, in southern Greece, which may be more than 200,000 years old and might possibly represent the earliest modern human fossil discovered outside of Africa. The site is steeped in controversy, however, with some scholars believing that the badly preserved remains look less like those of our own species and more like Neanderthals, whose remains are found just a few feet away in the same cave. Others question the accuracy of the dating analysis undertaken at the site, which is tricky because the fossils have long since fallen out of the geological layers in which they were deposited.

While various groups of humans lived outside of Africa during this era, ultimately, they aren’t part of our own evolutionary story. Genetics can reveal which groups of people were our distant ancestors and which had descendants who eventually died out.

“Of course, there could be multiple out of Africa dispersals,” says Akey. “The question is whether they contributed ancestry to present day individuals and we can say pretty definitely now that they did not.”

50,000 to 60,000 years ago: Genes and climate reconstructions show a migration out of Africa

A digital rendering of a satellite view of the Arabian Peninsula, where humans are believed to have migrated from Africa roughly 55,000 years ago. Credit: Przemek Pietrak/Wikipedia

All living non-Africans, from Europeans to Australia’s aboriginal people, can trace most of their ancestry to humans who were part of a landmark migration out of Africa beginning some 50,000 to 60,000 years ago, according to numerous genetic studies published in recent years. Reconstructions of climate suggest that lower sea levels created several advantageous periods for humans to leave Africa for the Arabian Peninsula and the Middle East, including one about 55,000 years ago.

“Just by looking at DNA from present day individuals we’ve been able to infer a pretty good outline of human history,” Akey says. “A group dispersed out of Africa maybe 50 to 60 thousand years ago, and then that group traveled around the world and eventually made it to all habitable places of the world.”

While earlier African emigres to the Middle East or China may have interbred with some of the more archaic hominids still living at that time, their lineage appears to have faded out or been overwhelmed by the later migration.

15,000 to 40,000 years ago: Genetics and fossils show Homo sapiens became the only surviving human species

A facial reconstruction of Homo floresiensis, a diminutive early human that may have lived until 50,000 years ago. Credit: John Gurche

For most of our history on this planet, Homo sapiens have not been the only humans. We coexisted and, as our genes make clear, frequently interbred with various hominin species, including some we haven’t yet identified. But they dropped off, one by one, leaving our own species to represent all humanity. On an evolutionary timescale, some of these species vanished only recently.

On the Indonesian island of Flores, fossils provide evidence of a curious and diminutive early human species nicknamed “hobbit.” Homo floresiensis appears to have lived until perhaps 50,000 years ago, but what happened to them is a mystery. They don’t appear to have any close relation to modern humans, including the Rampasasa pygmy group, which lives in the same region today.


Neanderthals once stretched across Eurasia from Portugal and the British Isles to Siberia. As Homo sapiens became more prevalent across these areas the Neanderthals faded in their turn, being generally consigned to history by some 40,000 years ago. Some evidence suggests that a few die-hards might have held on in enclaves, like Gibraltar, until perhaps 29,000 years ago. Even today traces of them remain because modern humans carry Neanderthal DNA in their genome.

Our more mysterious cousins, the Denisovans, left behind so few identifiable fossils that scientists aren’t exactly sure what they looked like, or if they might have been more than one species. A recent study of human genomes in Papua New Guinea suggests that humans may have lived with and interbred with Denisovans there as recently as 15,000 years ago, though the claims are controversial. Their genetic legacy is more certain. Many living Asian people inherited perhaps 3 to 5 percent of their DNA from the Denisovans.

Despite the bits of genetic ancestry they contributed to living people, all of our close relatives eventually died out, leaving Homo sapiens as the only human species. Their extinctions add one more intriguing, perhaps unanswerable question to the story of our evolution—why were we the only humans to survive?

Brian Handwerk is a freelance writer based in Amherst, New Hampshire. Find Brian on Twitter @HandwerkBrian

A version of this article was originally posted at Smithsonian and has been reposted here with permission. Smithsonian can be found on Twitter @Smithsonian

Evoking Jeff Goldblum’s ‘The Fly’: Does growing human ‘brains-in-a-dish’ and creating chimeras cross a bioethical line?

Bits of human brain growing in a lab dish can reveal a great deal about how a disease begins and unfolds. But because the brain is also the seat of our consciousness and individuality, does disembodying it for science’s sake pose bioethical challenges?

That’s what Stanford Law School researcher Henry (Hank) T. Greely addressed in “Human Brain Surrogates Research: The Onrushing Ethical Dilemma” in the January issue of The American Journal of Bioethics. Will the technology reach a stage at which the brain model is perhaps too close a mimic for comfort?

“If it looks like a human brain and acts like a human brain, at what point do we have to treat it like a human brain — or a human being?” he asks.

Hank Greely. Credit: Stanford

The issue is front and center now that researchers from Stanford University and the University of California-Los Angeles have conducted extensive genetic analyses of so-called ‘organoids,’ which were grown in experimental dishes for up to 20 months. They found that the artificial brains appear to grow in phases in accordance with an internal clock — one that matches the development of real infant brains.

The findings suggest that organoids are able to develop beyond a ‘fetal’ stage, contrary to what had previously been assumed. Brain organoids might well be matured to the point that scientists can use them to investigate dementia and other adult-onset diseases.

A slice of human cerebral cortex, its cells communicating with each other, somehow seems different than growing part of a spleen or bone. Both the National Institutes of Health’s BRAIN Initiative and the National Academies of Sciences, Engineering, and Medicine’s report on the Ethical, Legal and Regulatory Issues Associated with Neural Chimeras and Organoids tackle the issues that human brain surrogates raise.

Diverse applications of human brain organoids

“Human brain surrogates” take several forms. Most realistic is an organoid.

An organoid grows and develops from cells that specialize into tissues that unfurl, aggregate and fold into organs, following the same instructions in their genomes as they would in their natural space, a body. Especially for an organ as complex as a brain, an organoid better approximates a body part than older technologies, such as cell culture, animal models like mice and chimps, or synthetic stand-ins.

Organoids are coaxed to grow from stem cells bathed in cocktails of protein factors that promote growth and specialization. Growing a brain part is particularly challenging because nerve cells normally don’t divide — that’s where the origin as stem cells comes in.

Cross-section of an entire organoid from an unrelated study. Credit: Vienna BioCenter

Genetic Literacy Project covered several types of organoids in 2018: mini-kidneys, bladder balls, tiny tubes of esophagus, the velvety interior lining of a small intestine, eyes and even a Neanderthal’s “mini-brain” that was described at a meeting in 2018 and just recently made headlines with a report in Science.

Human brain organoids are eclectic, shedding light on single-gene diseases such as Rett syndrome, developmental problems, mental illness, infectious disease, and even evolution. They helped reveal that SARS-CoV-2, the virus that causes COVID-19, enters brain cells.

Four ways to mimic or model a human brain

Greely describes four avenues of research that recapitulate a human brain part: humanized animals, brain chimeras, organoids, and actual brain matter living outside bodies. All begin with cells.

Humanized animals, such as rodents and pigs with a human gene stitched into their genomes, have been around for decades. The Jackson Lab in Bar Harbor, Maine, develops and provides much of the world’s humanized mice to researchers — their catalog includes mice that harbor human genes for Alzheimer’s disease, ALS, Down syndrome, Huntington disease, Parkinson’s disease, and many more.

What’s new is the use of CRISPR to more easily and precisely humanize mice and other creatures. While conventional transgenic mice have human genes added, CRISPR can replace a gene. CRISPR used on some of our close primate relatives — monkeys — provides a powerful model of humanity.

Like humanized mice, the second category of brain mimics, chimeras, also mix components from different species. But a chimera develops from a mixture of cells, not genes.

Chimeras evoke fiction, from the original chimera of Greek mythology, a lion with a goat’s head growing from its back and a tail ending in a snake’s head, to Jeff Goldblum’s body accidentally mixing with that of an insect in the film version of the short story “The Fly.”

Jeff Goldblum as the Fly. Credit: David Cronenberg


It evokes memories of the opening line of Franz Kafka’s classic novella The Metamorphosis: “As Gregor Samsa awoke one morning from uneasy dreams, he found himself transformed in his bed into a gigantic insect.” In Sleeper, comedian Woody Allen described a chimera that has “the body of a crab and the head of a social worker.”

Greely recounts society’s reaction to the “ick” factor of mixing species.

The ancient Hebrews would have been concerned. The Torah reads, “You shall not sow your field with two kinds of seed” (Leviticus 19:19), and also frowns upon breeding mules. The issue arose more recently in the context of a Jewish patient being allowed to receive a heart valve from a pig. Currently, such ‘mixing’ of parts from different species is not legally prohibited. Laws forbidding the practice, like the Human-Animal Chimera Prohibition Act of 2016, haven’t passed. But in general, Greely writes, mixing of parts is ethically questionable, and even worse if nerve cells are involved.

Gummy organoids

But there is a third category, according to Greely. Unlike chimeras, human brain organoids are fully human. They aren’t exact replicas like those tiny toy spongy organs that you put in a filled sink and watch expand. Instead, the initial stem cells, after taking a few developmental steps to specialize as neural stem cells, circularize into shapes resembling soccer balls.

These neurospheres grow to about the size of tiny peas and consist of a few million neurons each, new cells constantly coming from the stem cell source. That’s compared to the 86 billion or so neurons of a human brain in its normal environment, the cranium.

Then the cells of the neurospheres specialize and interact, building tiny structures that resemble distinctive brain parts — cerebral cortex or hippocampus, for example — and even demonstrate neural activity, like responding to light. Human brain organoids can also be grown inside mouse heads, where neurons from the two species connect.

Greely’s fourth category of human brain surrogates is living “ex vivo” human brain tissues — meaning “outside the body.”

Ex vivo brain matter comes from corpses or surgical waste, not from stimulating stem cells to bloom into organs. But it’s technically challenging to keep the delicate cells of an actual organ functioning outside a body. In 2019 Yale researchers kept a pig brain alive for 4 hours after slaughter, soaking it in a bloody, sugary soup. The cells metabolized and neurons fired off electrical impulses. So, the pig brain had some signs of life, but not of thinking or consciousness.

Alas, the technology can’t yet help Ted Williams. The baseball hall-of-famer died at age 83 in 2002 and had his head frozen at a Scottsdale cryonics facility, thawing planned for when the technology of awakening consciousness had been invented. That’s still a long way off.

Are bioethical concerns appropriate?

Do disembodied collections of nerve cells deserve the same rights and considerations as a brain that is alive in a person’s body? Does a particular pattern of neurons firing in a smear of tissue in a lab dish indicate pain or anxiety?

Greely thinks not. “In a vat, no one can hear you scream — especially if you don’t have the lungs, vocal cords, and mouth to form a scream,” he writes, borrowing the tagline of the film Alien.

Do the donors of cells that take on new life outside the body have rights that need protecting? The people who provide the skin cells that beget the stem cells that seed a brain organoid? Whose Alzheimer’s mutation ends up in a monkey’s brain? Whose neurons from a removed brain tumor show signs of life for a few extra weeks, zapping each other in a dish? Are these people “human subjects” in the same way that a college student participating in a psychology experiment or a person in a vaccine clinical trial is?

The bigger concern is what we would do if a non-human animal given bits of human brain starts to show higher-level brain function. Perhaps an unusually self-aware rodent like Mickey or Fievel? Was it a warning that genetic manipulation endowed Caesar, the intelligent and emotional chimpanzee in Rise of the Planet of the Apes, with his humanity? All fiction, but still. There's something inherently disturbing about tinkering with the brain.

So far, the state of the science appears to present no serious cause for alarm, but bioethicists warn of the slippery slope that we can’t always envision. Intense human emotions or recapitulating a Neanderthal are unlikely to happen in a lab dish from CRISPRing a single gene or even a few genes, nor from a bunch of cells shocking each other in a piece of glassware.

Those real examples are far from Caesar the humanized ape or Jeff Goldblum's merging with a fly, but they highlight our discomfort, if not outright revulsion, with the thought of crossing species barriers in mixing brain material. A line from another Jeff Goldblum film rings true. Said Jurassic Park mathematician Ian Malcolm,

Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.

Ricki Lewis has a PhD in genetics and is a science writer and author of several human genetics books. She is an adjunct professor for the Alden March Bioethics Institute at Albany Medical College. Follow her at her website or Twitter @rickilewis

Coffee reduces risk of heart failure? What are we to make of a new study based on artificial intelligence (AI)?

When I was starting out in epidemiology in the early 1980s, I attended a lecture by Thomas Pearson, a cardiologist, on coffee and heart disease. His opening slide immediately caught the audience's attention. It showed white-collar commuters, dressed for work, attaché cases in hand, jumping over hurdles in their morning race to the office. The clear message was that, in male Type A personalities, caffeine intake was leading to heart attacks.

This distant memory came back to me after reading a recent story in the New York Times about a new study using artificial intelligence (AI) that indicated that coffee consumption may actually protect against heart failure (HF), one of a number of conditions that fall under the rubric of cardiovascular disease.

The article has received an unusual amount of attention both from scientists and the media.

Rather than starting from a specific hypothesis, the researchers used machine learning to find meaningful patterns in three large prospective studies, including the Framingham Heart Study, a cohort study that began in 1948, and two others. Participants in the studies were followed for decades and information on clinical factors and behavioral factors was obtained at multiple time points.

This approach eliminated the subjective element of researchers selecting which variables to include in their analysis. From among hundreds of clinical and behavioral factors, the machine learning program selected factors in the top 20 percent (41 factors) showing the highest correlation with coronary heart disease. These included smoking, marital status, red meat consumption, whole milk consumption, and coffee consumption.
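
For readers curious about the mechanics, the kind of correlation-based screening described above can be sketched in a few lines of Python. This is a hypothetical illustration only; the file name, column names and outcome variable below are assumptions, not the study's actual data or code.

    import pandas as pd

    # Hypothetical sketch of correlation-based feature screening.
    # "cohort.csv", its columns and the outcome name are illustrative assumptions.
    df = pd.read_csv("cohort.csv")
    outcome = "cardiac_outcome"  # assumed 0/1 outcome column

    numeric = df.select_dtypes("number")  # assumes factors are numerically coded
    candidates = [c for c in numeric.columns if c != outcome]

    # Rank each factor by the absolute strength of its correlation with the
    # outcome, then keep the top 20 percent of the list.
    correlations = numeric[candidates].corrwith(numeric[outcome]).abs()
    top_fifth = correlations.sort_values(ascending=False).head(int(len(correlations) * 0.2))
    print(top_fifth)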

The analysis showed that people who reported higher coffee intake had reduced long-term risk of heart failure (HF). This pattern was visible in all three cohorts. Compared to non-drinkers of coffee, drinkers of 1 cup per day showed no difference in risk, whereas drinkers of 2 and 3 or more cups per day had a 31 percent and a 29 percent reduction in risk, respectively.

Surprisingly, in this analysis, coffee drinking was comparable in predictive importance to established risk factors for HF, including age, blood pressure, heart rate, and weight.

There was no association with intake of decaffeinated coffee, and this led the authors to speculate that the coffee result might reflect the action of caffeine.

What are we to make of these results?

First, the data on coffee intake, which are sparse, showed no indication of a dose-response relationship between increasing intake of coffee and reduced risk of HF, which would have suggested a possible causal relationship.

Furthermore, as noted by the authors, the non-dietary and clinical factors in the top 20% were correlated with known risk factors. This means that the apparent reduction in risk attributed to coffee could be the result of other factors that co-vary with coffee drinking.

For example, coffee intake was positively associated with current smoking and inversely associated with hypertension and diabetes. This points up the difficulty of isolating a single factor from among the many components of diet and behavior generally and understanding its association with HF, or heart disease generally, which is the classic multi-factorial disease.

The report in the Times quoted a scientist not involved with the work, Dr. Harlan Krumholz, a cardiologist at the Yale School of Medicine, who "called the approach innovative but noted one limitation was that many other behaviors likely track with coffee consumption, and it is difficult to disentangle the specific effect of coffee from other things that may go along with it."

It appears that what has drawn so much media attention to this study was its use of AI, rather than any compelling results. The results are, as the authors and an accompanying editorial make clear, difficult to interpret. Still, AI holds future potential for integrating information from sensors monitoring physical activity and physiology, as well as from photographs of meals consumed, to improve the quality of nutritional research, which has long depended on self-reports (food-frequency questionnaires and dietary recalls) that are known to be biased.

Looking back at over forty years of research on coffee and health, one notes a striking shift from viewing coffee as something harmful to viewing it as providing a potential health benefit.

In the study published by Thomas Pearson and colleagues in 1986, after controlling for such factors as age, smoking, hypertension, and serum cholesterol, young men drinking five or more cups of coffee per day, compared to those drinking no coffee, had a two-and-a-half-fold increased risk of coronary heart disease.

However, a 1992 meta-analysis of 11 prospective studies found no association with coronary heart disease across intakes ranging from one to more than six cups of coffee per day, relative to drinking little or no coffee.

In the 1980s, other epidemiologists also reported that coffee drinking increased the risk of cancer of the bladder and pancreas. Based on these early studies, in 1991 the International Agency for Research on Cancer (IARC) classified coffee drinking as a "possible carcinogen" linked to bladder cancer. (In 2016 IARC revised its conclusion, stating that "there was not conclusive evidence for a carcinogenic effect.")

In recent years the results of large, prospective studies from many different countries have shown a striking pattern of coffee drinking being associated with a reduced risk of cardiovascular disease, a number of different types of cancer, and a variety of other conditions.

Regarding cancer, meta-analyses have shown an "inverse association" — suggesting that coffee drinking is protective — with a number of cancers, including liver, endometrial, oral and pharyngeal cancers, and melanoma. Postmenopausal breast, colorectal, and prostate cancers also showed inverse associations in some meta-analyses; however, the results were less consistent for these cancers.

No association — that is, neither an increase nor a decrease in risk with coffee consumption — was seen with cancers of the ovary, pancreas, stomach, bladder, kidney, or thyroid in recent studies.

Coffee drinking has also been found to be inversely associated with all-cause mortality.

Thus, coffee appears to be either something that possesses beneficial activity (possibly due to phenolic compounds with anti-oxidant activity) or to be correlated with other beneficial exposures or behaviors.

What accounts for the “change of sign” in studies of coffee drinking and health from the 80s and 90s to the present? How has coffee gone from being a potential threat to being seen as a component of a healthy diet?

One possible explanation is that we have many more large prospective datasets from many different countries today than were available thirty years ago. The size of these studies has made it possible to do a better job of adjusting for confounding variables and correcting for biases. The number of meta-analyses examining coffee and different diseases has surged in the past ten years.

And yet, forty years of studies of the health effects of coffee drinking testifies to the difficulty of isolating a single component of a complex and varied diet, against the background of other behaviors, and linking it conclusively to health or disease.

Considering the possibilities of using AI to integrate a variety of data, including geolocation, input from sensors, and participant-driven food and beverage photographs, the author of the editorial commented that these innovations could be “transformational.”

Geoffrey Kabat is a cancer epidemiologist and the author of Getting Risk Right: Understanding the Science of Elusive Health Risks. Find Geoffrey on Twitter @GeoKabat

Catching COVID from food: A year’s worth of research dispels panic

When the COVID-19 pandemic began, not much was known about SARS-CoV-2 (the coronavirus) and its survival in food, on various materials and on surfaces. Since then, several food safety agencies have assessed the risk of potentially acquiring the virus from contaminated food or food packaging. The consensus is that currently, there’s no evidence it’s a food safety risk.

The main route of infection is person-to-person: through close contact with one another and through respiratory droplets and aerosols from coughing, sneezing and talking. Therefore, it's not considered a foodborne virus.

We surveyed the scientific literature to see what it said about the safety of food and SARS-CoV-2. This included the survival of the virus, how it’s transmitted and how it can be inactivated in food and on surfaces.

Overall, the evidence suggests that the virus is not a risk to food safety. But it has caused disruptions to the global food supply chain.

One research question was whether the virus is transmitted via the faecal-oral route. The question arose because a study had found viral genetic material in anal swabs and blood taken from patients. This was an important point because one of the symptoms of COVID-19 is diarrhoea. However, there are no reports to date showing faecal-oral transmission of the virus.

Furthermore, several studies have concluded that diarrhoea in COVID-19 patients isn’t likely to occur from ingesting contaminated food. Rather, it’s from the pathway of the virus, from the respiratory system to the digestive tract.

Where the coronavirus survives

Viruses tend to survive well at low temperatures. Freezing can actually preserve them. So it’s likely that SARS-CoV-2 would survive freezing of food. But several studies have indicated that this virus and similar ones are inactivated by cooking food at frequently-used temperatures.

The coronavirus appears to be stable at different pH values (3–10) at room temperature. More alkaline and more acidic conditions beyond this range appear to inactivate the virus. This means it’s unlikely to survive the acidic environment of the stomach.

It's also likely that any virus in food will be at low concentrations. Importantly, the coronavirus, like other viruses, cannot multiply outside its host. Therefore, it cannot multiply in food.

It’s well-established that viruses causing respiratory infections can be transmitted by indirect contact through the environment. This happens when a person touches contaminated surfaces and then touches their mouth, nose or eyes, without first washing their hands.

Various experimental studies on the survival of the coronavirus on different types of surfaces under different conditions have been conducted. The virus was found to survive on different surfaces for different periods of time, depending on environmental conditions and initial viral load.

Nevertheless, one must be aware that survival in more realistic settings outside the laboratory may differ from what these studies show. The US Centers for Disease Control and Prevention and other similar agencies and organisations don't consider contaminated surfaces a main route of transmission of SARS-CoV-2.

Current consensus is therefore that SARS-CoV-2 is not transmitted by food and is highly unlikely to be transmitted by food packaging material, but it could be spread by touching contaminated surfaces and then touching your mouth, nose or eyes. It’s therefore very important to properly clean and disinfect food contact surfaces and especially high-touch surfaces and utensils in a food environment.

Disinfection and prevention

SARS-CoV-2 belongs to the coronavirus family of enveloped viruses, which makes these viruses susceptible to detergents and a variety of other microbicides, even more so than fungi, vegetative bacteria and yeasts.

Studies have shown that 0.1% sodium hypochlorite (diluted household bleach), 0.5% hydrogen peroxide and 62%–71% ethanol disrupt the fatty layer surrounding the virus, inactivating it. These solutions all significantly reduce SARS-CoV-2 on surfaces after one minute of exposure.
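
As a rough worked example of what the first of those concentrations means in practice, the standard dilution relationship C1 × V1 = C2 × V2 gives the mix; the 5% starting strength assumed below is typical of household bleach but varies by brand, so check the label.

    # Rough worked example: diluting household bleach to ~0.1% sodium hypochlorite.
    # The 5% stock concentration is an assumption; actual products vary.
    stock_percent = 5.0
    target_percent = 0.1
    final_volume_ml = 1000.0  # one litre of working solution

    # C1 * V1 = C2 * V2  =>  V1 = C2 * V2 / C1
    bleach_ml = target_percent * final_volume_ml / stock_percent
    water_ml = final_volume_ml - bleach_ml
    print(f"{bleach_ml:.0f} ml bleach + {water_ml:.0f} ml water (about a 1:50 dilution)")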

Several agencies have published a list of approved disinfectants for use against SARS-CoV-2 in industrial settings, namely the United States Environmental Protection Agency, Health Canada and the European Union.

In conclusion, the greatest risk related to COVID-19 remains person-to-person transmission and aerosolised transfer in the food environment, including manufacturing, retail and food service. In fact, there have been several person-to-person COVID-19 outbreaks among farm workers and in food processing establishments.

The COVID-19 pandemic has caused major disruptions to the global food supply chain. Credit: Dusan Petkovic/Shutterstock

This is why it’s important to adhere to proper hygienic measures by wearing appropriate personal protective equipment (such as masks) and practising proper hand hygiene and physical distancing. Food companies – like any others – need to ensure that their employees are vigilant about mask-wearing, hand-washing, maintaining a physical distance and regular cleaning and disinfection of high-touch surfaces and utensils.

In summary, the discovery of SARS-CoV-2 on food or food packaging may raise concerns about food safety, but it doesn’t indicate a risk for public health. Therefore it should not be a basis for restricting food trade or initiating a food recall. Thinking about the food supply chain in a connected way – integrating health, food security and sustainability – will be an important part of controlling any future pandemics.

Lucia Anelich is first author of this paper, and an Adjunct Professor at the Central University of Technology. Lucia is the owner of her own food safety consulting and training business, Anelich Consulting. Find Lucia on Twitter @AnelichConsult1

Jeffrey M. Farber is the head of the Master’s Program in Food Safety and Quality Assurance at the University of Guelph and Director of the Canadian Research Institute for Food Safety. Find Jeffrey on Twitter @drjmfarber

Ryk Lues holds the positions of Professor of Food Safety and Director of the Centre for Applied Food Sustainability and Biotechnology (CAFSaB) at the Central University of Technology, Free State, South Africa. 

Valeria R. Parreira is a Research Manager in the Canadian Research Institute for Food Safety (CRIFS) and an Adjunct Faculty in Pathobiology at University of Guelph. Find Valeria on Twitter @vpa6

A version of this article was originally posted at the Conversation and has been reposted here with permission. The Conversation can be found on Twitter @ConversationUS

Mosquito massacre: Can we safely tackle malaria with a CRISPR gene drive?

CRISPR-Cas9 gene editing quickly decimated two caged populations of malaria-bearing mosquitoes (Anopheles gambiae) in a recent study, introducing a new way to solve an age-old problem. But the paper describing the feat in Nature Biotechnology had a broader meaning regarding the value of basic research. It also prompts us to consider the risks and rewards of releasing such a powerful gene drive into the wild.

Instead of altering a gene affecting production of a reproductive hormone, the editing has a more fundamental target: a gene that determines sex. The work was done by Andrea Crisanti and colleagues at Imperial College London. Their clever use of the ancient insect mutation doublesex rang a bell for me — I’d used a fruit fly version in grad school.

Blast from the past

In the days before genome sequencing, geneticists made mutants in model organisms like fruit flies to discover gene functions. I worked on mutations that mix up body parts.

To make mutants, I'd poison larvae or schlep them, squiggling through the goop in their old-fashioned milk bottles, from the lab at Indiana University in Bloomington to the children's cancer center in Indianapolis and zap them with X-rays. Crossing the flies that grew from those larvae to flies carrying already-known mutations would reveal whether we'd induced anything of interest in their offspring. One of the mutations we used in these genetic screens was doublesex.

A suite of genes determines sex in insects, not just the inheritance of an X or Y chromosome. Doublesex acts at a developmental crossroads to select the pathway towards femaleness or maleness. When the gene is missing or mutant, flies display a mishmash of sexual parts and altered behavior. Males with doublesex mutations "are impaired in their willingness to court females," according to one study, and when they do seek sex, they can't hum appropriately and "court other males at abnormally high levels."

Back then, we used doublesex as a tool to identify new mutations. We never imagined it being used to prevent an infectious disease that causes nearly half a million deaths a year, mostly among young children.

A gene drive skews inheritance, destroying fertility

In grad school, we bred flies for many generations to select a trait, because a mutation in a carrier passes to only half the offspring. A gene drive speeds things by messing with Mendel’s first law, which says that at each generation, each member of a pair of gene variants (alleles) gets sent into a sperm or egg with equal frequency.

Austin Burt, a co-author of the new paper, introduced the idea of a gene drive in 2003, pre-CRISPR. The intervention uses a version of natural DNA repair that snips out one copy of a gene and replaces it with a copy of whatever corresponding allele is on the paired chromosome. Imagine dance partners, removing one, and inserting an identical twin of the other.

In the language of genetics, a gene drive can turn a heterozygote (2 different copies of a gene) into a homozygote (2 identical copies).
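
To make that inheritance-skewing concrete, here is a minimal sketch of how a homing drive pushes its own allele frequency upward generation by generation. It assumes random mating, no fitness cost and a hypothetical homing efficiency; it is an illustration of the principle, not the model used in the study.

    # Minimal sketch of homing-drive inheritance (illustrative parameters only).
    # In a heterozygote, homing converts the wild-type allele to the drive allele
    # in the germline with efficiency `homing`, so the drive is transmitted with
    # probability (1 + homing) / 2 instead of the Mendelian 1/2.
    def drive_allele_frequency(generations=12, p0=0.05, homing=0.95):
        p = p0  # starting frequency of the drive allele
        history = [p]
        for _ in range(generations):
            q = 1 - p
            hom_drive = p * p        # drive homozygotes (Hardy-Weinberg)
            het = 2 * p * q          # heterozygotes
            # Allele frequency among the next generation's gametes
            p = hom_drive + het * (1 + homing) / 2
            history.append(p)
        return history

    for gen, freq in enumerate(drive_allele_frequency()):
        print(f"generation {gen:2d}: drive allele frequency = {freq:.3f}")

With homing set to zero the frequency stays flat, which is just Mendel's first law; with high homing efficiency the drive allele sweeps toward fixation within a handful of generations.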

In 2014, Kevin Esvelt, George Church, and their colleagues at Harvard suggested how to use CRISPR-Cas9 gene editing to speed a gene drive. It made so much sense that in 2016, the National Academies of Sciences, Engineering, and Medicine issued a report urging caution while endorsing continued laboratory experimentation and limited field trials of gene drives.

The idea to genetically cripple mosquito reproduction isn’t new. But a CRISPRed gene drive to do so would be fast, leading to mass sterility within a few generations, with the population plummeting towards extinction. And doublesex is an inspired target. It’s so vital that only one variant in Anopheles gambiae is known in the wild — any other mutations so impair the animals that they and their genes don’t persist. That’s why the gene can’t mutate itself back into working, like bacteria developing antibiotic resistance. For doublesex, resistance is futile.

Harnessing doublesex

The doublesex gene consists of 7 protein-encoding exons and the introns that separate them. The gene is alternatively spliced: mosquitoes keeping exon 5 become females and those that jettison the exon develop as males.

The researchers injected mosquito embryos with CRISPR-Cas9 engineered to harpoon the boundary between intron 4 and exon 5 of the doublesex gene. They added genetic instructions for red fluorescent protein on the Y chromosome to mark male gonads, so the researchers could distinguish the sexes.

The modified female mosquitoes were weird. They sported male clasper organs rotated the wrong way and lacked parts of the female sex organ repertoire. They had feathery male-like “plumose antennae,” neither ovaries nor female sperm holders, yet male accessory glands and in some individuals “rudimentary pear-shaped organs resembling unstructured testes.” Most importantly, the doctored females couldn’t bite or suck up blood meals.

Malaria parasites infecting two blood cells. Credit: Lennart Nilsson / Scanpix

The researchers set up two cages, each housing 300 females with normal doublesex genes, 150 normal males, and 150 males that had one copy of normal doublesex and one modified copy, called CRISPRh. Then the insects mated. (For a scintillating description of fly sex see A Fruit Fly Love Story: The Making of a Mutant.)

Within 7 generations in one cage and 11 in the other, all the female mosquitoes had CRISPRh and couldn't mate. Because males with one copy of CRISPRh are fertile, the populations chugged along until the gene drive rendered all the females homozygous. With two copies of the modified doublesex gene, they couldn't eat or mate.

Next steps

Gene editing of doublesex presents a question of balance. The investigators dub it “an Achilles heel” common to many insect species, yet at the same time, the DNA sequences are species-specific enough to not spread to other types of insects. A gene drive that kills off bees or aphids, for example, would be disastrous.

Next will come experiments in “large confined spaces” more like nature. Cooped up, mosquitoes don’t have much to do besides breed. In a more natural setting, they’d have to compete for resources and mates, confront changing conditions, and avoid being eaten. But computer simulations suggest that adding these stresses would only slightly slow spread of the gene drive.

Field tests are 5 to 10 years in the future, the researchers say. Dr. Burt estimates that releasing a few hundred doctored mosquitoes at a time, into selected African villages, might knock down populations sufficiently to wipe them out, even over a wider range. Local eradication of malaria would take about 15 years once a gene drive begins, he projects.

Will nature find a way around gene drives?

What about “unforeseen consequences” of unleashing a gene drive to vanquish malaria-bearing mosquitoes? To quote fictional mathematician Ian Malcolm in discussing the cloned dinosaurs of Jurassic Park, “Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.”

We’re past the “could” stage with a doublesex-mediated gene drive against the mosquitoes. But perhaps we shouldn’t ignore the history of biotechnology. Even though no superbugs or triple-headed purple monsters have escaped from recombinant DNA labs since self-policing began at the Asilomar meeting in 1975, pollen from genetically modified crops has wafted well beyond treated fields. Sometimes, as Dr. Malcolm said, “life, uh, finds a way.”

Yet the severity and persistence of malaria may justify the risk of unforeseen consequences in developing a gene drive.

About 216 million malaria cases occurred globally in 2016, with an estimated 445,000 deaths, according to the WHO’s World Malaria Report 2017, which states that “after an unprecedented period of success in global malaria control, progress has stalled.” Said Dr. Crisanti, “2016 marked the first time in over two decades that malaria cases did not fall despite huge efforts and resources, suggesting we need more tools in the fight. This breakthrough shows that a gene drive can work, providing hope in the fight against a disease that has plagued mankind for centuries.”

Just as recombinant DNA entered the clinic in 1982 with FDA approval of human insulin produced in bacteria, the first gene drive, whatever it may deliver, could open the door for many others, just as dozens of drugs are now based on combining genes of different species. Doublesex, the gene whose mutations I used in graduate school to screen for new mutations, is one of thousands in just that one species. If and when gene drives are validated, the possibilities to limit or eradicate infectious diseases are almost limitless, thanks to the genetic toolboxes provided by decades of basic research.

Ricki Lewis has a PhD in genetics and is a science writer and author of several human genetics books. She is an adjunct professor for the Alden March Bioethics Institute at Albany Medical College. Follow her at her website or Twitter @rickilewis

This story was originally published at the GLP on October 2, 2018.

Farm fraud: Consumers spend billions on food that might not be organic

Organic food sales are growing rapidly, with an estimated worth of $272.18 billion by 2027. The premium prices paid for 'organics' provide plenty of incentive to pass off conventionally grown products as organic.

The Organic Farmers Association, in its report The Tragedy of Fraud, states:

  • “Organic sales are booming, but unfortunately it seems, so is fraud.”
  • “The scale and elaborate nature of the fraud over the past decade spans hundreds of truckloads, numerous large ocean-going vessels, and hundreds of millions of dollars.”

In September 2017, the USDA published its audit report of the National Organic Program, involving seven U.S. ports of entry; the auditors concluded the following:

  • “The Agricultural Marketing Service was unable to provide reasonable assurance that the National Organic Program (NOP) required documents were reviewed at U.S. ports of entry to verify that imported agricultural products labeled as organic were from certified organic foreign farms and businesses that produce and sell organic products.”
  • “… we found that if the shipment’s owner elects to treat the organic agricultural products, they are treated using the same methods and substances used for conventional products. There are no special treatment methods for organic products. This practice results in the exposure of organic agricultural products to NOP prohibited substances.”

Lack of effective oversight is a significant problem for organics

There are just 77 accredited certifying agents authorized to police and certify operations world-wide. Of these, 60 are based in the U.S. and the remaining 17 in foreign countries. [1] The USDA reports these agents were responsible for overseeing 44,896 certified organic operations world-wide in 2019, of which 28,257 are located in the U.S. This equates to roughly 583 facilities for each certifying agent to monitor, but the certifying agent is only responsible for sampling and conducting residue testing on five percent of them annually. As a result, it is possible that only 29 of the 583 organic facilities would be evaluated in any given year, leaving, theoretically, 554 organic operations unaccountable for years.
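
The arithmetic behind those estimates can be reproduced directly from the figures quoted above; a quick sketch:

    # Reproducing the oversight arithmetic from the figures quoted above.
    certifying_agents = 77
    certified_operations = 44896
    sampling_rate = 0.05  # residue testing required on 5% of operations annually

    per_agent = certified_operations / certifying_agents   # ~583 operations per agent
    sampled = per_agent * sampling_rate                     # ~29 sampled per year
    unsampled = per_agent - sampled                         # ~554 left unsampled
    print(round(per_agent), round(sampled), round(unsampled))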

A California-based organic certifying agency stated it is responsible for 50 operations but only "take[s] samples from 3 operations each year." [2] Considering that California has the most organic growers (3,108) and produces 50% of the nation's organic fruits and vegetables, totaling more than $3 billion in sales, this limited oversight is problematic and allows for plenty of bad behavior by unscrupulous marketers. Another weak link in the chain of monitoring is that of California's 28 certifying agents registered with the state, only nine are located in California; many rely on local contract inspectors to conduct the organic inspection for them.

The results of limited surveillance

In a 2019 USDA summary of organic oversight and enforcement, the agency states that "722 operations in 50 countries around the world lost their certification through suspension or revocation." As of 12/31/2019, 465 complaint cases were still in progress, related to a range of issues; the majority, 55%, involved uncertified organic claims. Other issues like prohibited practices, pesticide issues, and fraud may expand that percentage to almost 75%. Of the 722 organic operations suspended, the U.S. and Mexico received the most suspensions, with 383 and 101, respectively.

Regardless, the financial incentive leads some growers to falsely label their conventional crops as organic. According to the Fraudulent Organic Certificates list from the USDA, since 2011 there have been 147 total cases of fraudulent certificates created and used, 55 of which were detected in the last two years. These data illustrate that even with limited industry oversight at various points in the supply chain, there is a significant problem separating authentic organic products from the imposters.

Insufficient reporting of pesticide residues

As I have written, organic does not mean that fruits and vegetables are not treated with pesticides. A study reported in Food Chemistry: X evaluated 136 commercial samples of organically certified produce originating from 16 different countries. They found that "21% of the samples analyzed presented pesticide residue." Harmless, of course, but present. This is consistent with a 2014 Irish study, in which researchers found that "of the 27 organic samples tested, 15, or 55%, contained one or more detected pesticide residues." They concluded "that it cannot be said that organic fruits and vegetables are void of pesticides based on the results of this study."

If the number of 'organic' infractions is this significant, with an acknowledged limited degree of surveillance and testing, one can only assume the scope of the fraudulent organic problem is far larger than the NOP would like to admit.

Over this six-part series, I have demonstrated that purported organics are not safer, healthier, tastier or better for the environment, and may not even be organic in the first place. That raises the obvious consumer question: "What am I paying the premium for?"

It seems appropriate as the final comment for this series to reiterate a central point made in the series’s first article.

“Our report finds consumers have spent hundreds of billion dollars purchasing premium-priced organic food products based on false or misleading perceptions about comparative product food safety, nutrition, and health attributes. The research found extensive evidence that widespread, collaborative, and pervasive industry marketing activities are a primary cause for these misperceptions. This suggests a widespread organic and natural products industry pattern of research-informed and intentionally-deceptive marketing and paid advocacy.”

Footnotes

[1] Personal email correspondence, 1/11/21, Joan Avila, Secretary of the National Organic Program

[2] Personal email correspondence, 1/19/2021, an inspector from the County of Marin Department of Agriculture, Weights, and Measures, Novato, CA

David Lightsey, M.S. is a Food and Nutrition Science Advisor with Quackwatch.org. He is the author of “Muscles Speed and Lies, What the Sport Supplement Industry Does Not Want Athletes or Consumers to Know.”

A version of this article was originally posted at the American Council on Science and Health website and has been reposted here with permission. The American Council on Science and Health can be found on Twitter @ACSHorg

Reversing aging: We can turn back cognitive decline in mice. Will the same techniques work on humans?

The ageing global population is the greatest challenge faced by 21st-century healthcare systems. Even COVID-19 is, in a sense, a disease of ageing. The risk of death from the virus roughly doubles for every nine years of life, a pattern that is almost identical to a host of other illnesses. But why are old people vulnerable to so many different things?

It turns out that a major hallmark of the ageing process in many mammals is inflammation. By that, I don't mean the intense local response we typically associate with an infected wound, but a low-grade, grinding, inflammatory background noise that grows louder the longer we live. This "inflammaging" has been shown to contribute to the development of atherosclerosis (the buildup of fat in arteries), diabetes, high blood pressure, frailty, cancer and cognitive decline.

Now a new study published in Nature reveals that microglia – a type of white blood cell found in the brain – are extremely vulnerable to changes in the levels of a major inflammatory molecule called prostaglandin E2 (PGE2). The team found that exposure to this molecule badly affected the ability of microglia and related cells to generate energy and carry out normal cellular processes.

Fortunately, the researchers found that these effects occurred only because of PGE2's interaction with one specific receptor on the microglia. By disrupting it, they were able to normalise cellular energy production and reduce brain inflammation. The result was improved cognition in aged mice. This offers hope that the cognitive impairment associated with growing older is a transient state we can potentially fix, rather than the inevitable consequence of ageing of the brain.

Reversing cognitive decline

Levels of PGE2 increase as mammals age for a variety of reasons – one of which is probably the increasing number of cells in different tissues entering a state termed cellular senescence. This means they become dysfunctional and can cause damage to tissue by releasing PGE2 and other inflammatory molecules.

Macrophage cell. Credit: Kateryna Kon/Shutterstock

But the researchers also found that macrophages – another type of white blood cell related to microglia – from people over the age of 65 made significantly more PGE2 than those from young people. Intriguingly, exposing these white blood cells to PGE2 suppressed the ability of their mitochondria – the nearest thing a cell has to batteries – to function. This meant that the entire pattern of energy generation and cellular behaviour was disrupted.

Although PGE2 exerts its effects on cells through a range of receptors, the team were able to narrow down the effect to interaction with just one type (the “EP2 receptor” on the macrophages). They showed this by treating white blood cells, grown in the lab, with drugs that either turned this receptor on or off. When the receptor was turned on, cells acted as if they had been exposed to PGE2. But when they were treated with the drugs that turned it off, they recovered. That’s all fine, but it was done in a petri dish. What would happen in an intact body?

The researchers then carried out one of the cleanest experiments it is possible to perform in biology and one of the best reasons for working on mice. They took genetically modified animals in which the EP2 receptor had been removed and allowed them to grow old. They then tested their learning and memory by looking at their ability to navigate mazes (something of a cliche for researchers) and their behaviour in an “object location test”. This test is a bit like someone secretly entering your house, swapping your ornaments around on the mantelpiece and then sneaking out again. The better the memory, the longer the subject will spend looking suspiciously at the new arrangement, wondering why it has changed.

It turned out that the old genetically modified mice learned and remembered just as well as their young counterparts. These effects could be duplicated in normal old mice by giving them one of the drugs that could turn the EP2 receptor off for one month. So it seems possible that inhibiting the interaction of PGE2 with this particular receptor may represent a new approach to treating late-life cognitive disorders.

There is a long way to go before we are in a position to start using these compounds in humans – even though the prostaglandin systems are very similar. But this study has shed light on a fascinating set of observations linking diet and cognition.

It has been known for some years that eating blueberries and other fruit and vegetables, such as strawberries and spinach, improves cognition in rodents and older people. These foods are rich in molecules such as resveratrol, fisetin and quercetin, which have been shown either to kill or rescue senescent cells.

There is also evidence that they block PGE2 at the cellular level, providing another route by which these compounds may exert their beneficial effects. Until something better comes along, this is one more piece of evidence that a bowl of fruit won’t do you any harm. Though it’s probably wise to go easy on the cream.

Richard Faragher is a Professor of Biogerontology at the University of Brighton and is past Chair of both the British Society for Research on Ageing and the International Association of Biomedical Gerontology. His primary research interest is the relationship between cellular senescence and organismal ageing. 

A version of this article was originally posted at the Conversation and has been reposted here with permission. The Conversation can be found on Twitter @ConversationUS

Viewpoint: Promoting science with ideology — Pro-GMO vegans use animal rights advocacy to boost vaccine, biotech acceptance

The COVID-19 pandemic has reminded us that we are part of a living, evolving ecosystem and often at its mercy. Despite all our accomplishments as a species, a virus accidentally unleashed on the world has wrought enormous destruction around the globe, the effects of which we probably will not be able to fully assess for many years. Although we cannot always anticipate the damage an infectious disease will do, our best bet at surviving the fallout is a commitment to science-based policies that fuel the development of better preventative strategies, most importantly vaccines. The same lesson extends to most environmental and public health challenges we face.

To many people, though, a vaccine isn't a biological roadblock to the spread of infectious disease, but a scheme hatched by "Big Pharma" and their stooges in government to control humanity. It's appropriate to maintain some skepticism of corporations and the governments that regulate them; indeed, such critical thinking should be encouraged among consumers. Nevertheless, healthy skepticism and cynicism are not the same, and people must learn to distinguish the two if we are going to make progress in our never-ending battle against infectious disease and other maladies that threaten humanity.

While this sometimes seems like an impossible task to science advocates, the pro-GMO vegan community has illustrated how people with deep ideological commitments can embrace science, specifically crop biotechnology and vaccines, without compromising their personal beliefs.

If you want to convince someone to change their mind on a controversial issue, don’t attack their worldview, which all but guarantees they will dismiss your arguments as a threat to their identity. This is a lesson Vegan GMO, a small community founded by friends with a passion for animal welfare, has taken to heart. Rather than attack the ideology of their target audience, the group uses their shared beliefs to encourage acceptance of crop biotechnology and vaccines in the broader vegan community.

The vegan case against GMOs

Vegans sometimes oppose biotechnology because a particular application of the technology may be tested on animals or developed using animal products. Such testing and development categorize animals as property to be used for human benefit rather than sentient, living beings—an outlook many vegans find abhorrent.

But vegans do not just make animal-welfare arguments, they often rely on anti-GMO misinformation, like the long-debunked link between consuming GM crops and developing liver and kidney problems. Popular veganism proponents such as retired activist Gary Yourofsky have also latched onto “playing God” arguments based on the assumption that natural food is better food. “God made a tomato perfectly when he created it. Leave it at that,” he argued during a 2015 interview. “Stop altering tomatoes, stop altering everything on this planet. It’s fine the way it was created.”

Jayson Merkley, a pro-GMO vegan and fellow at Cornell University’s Alliance for Science, says the answer to this sort of rhetoric is simple: stop testing GM crops on animals, which is sometimes required before a new product can enter the food supply. This simple change in the GM crop approval process would discourage vegans from repeating pseudo-scientific anti-GMO arguments to defend their position on animal welfare.

This may raise concerns about untested products getting into our food supply, but there is little need to worry. After more than two decades of research from around the world, we know that genetically engineered crops—even insect-resistant plants (Bt corn, cotton etc.) that contain natural pesticides—are just as safe as conventional varieties, and more animal feeding studies are therefore unnecessary. The US Food and Drug Administration (FDA) and other agencies that regulate GM crops evaluate the safety of new products using a rigorous series of tests based on a concept called “substantial equivalence,” which is designed to demonstrate that the novel food item contains the same levels of macro- and micro-nutrients, anti-nutrients and potentially toxic molecules as its already approved non-GM counterpart. If there is no significant difference, the GM food is considered safe for consumption.

Moreover, crops developed through new breeding techniques like CRISPR generally do not contain "foreign" DNA from another species, as their GMO predecessors did. Regulators in the US, Canada and many Latin American countries therefore do not mandate the same burdensome review process before approving these plants. Instead, the new crops follow the same regulatory path as conventionally bred crops, which typically does not include animal testing. The one exception may be when a brand new trait is introduced into a food crop. As with vaccines (which we'll examine below), animal testing may be necessary in this case. But this is the exception that proves the rule we've laid out above.

Ironically enough, conventional and organic crops may pose a potentially greater health risk since they are the result of far less precise plant breeding techniques, though of course no approved food is considered harmful and no one is demanding animal feeding studies for organic or conventional crops.

But we can take the argument one step further. Unnatural Vegan, a popular YouTube-based science communicator (and former GMO skeptic), says her vegan allies could better promote animal welfare by parting ways with the anti-biotech movement, which in many cases lobbies for animal testing of GM plants no matter how extensive the evidence supporting their safety.

GMOs promote animal welfare

Eliminating animal feeding studies for GM crops is not the only way vegans can be encouraged to accept science. Simply educating them about how genetic engineering has already alleviated animal suffering is another useful strategy. Merkley likes to tell his GMO-skeptical vegan friends that insulin, previously harvested from slaughtered cows and pigs, is now produced via fermentation. Scientists transfer the DNA that controls insulin production from human pancreas cells into bacteria, which then multiply and produce vast quantities of the hormone to treat diabetics.

As a matter of social justice, it is important to mention that creating transgenic insulin does not guarantee all diabetics equal access to the drug, which is still costly for reasons beyond the scope of this article. But this animal-free production method calmed fears of a possible insulin shortage years ago, making its introduction an important step forward for health care and, for the vegans reading this, animal welfare.

COVID-19 vaccines and animal testing

Harmonizing the vegan community's stance on vaccines and animal rights has been one of the more challenging tasks for science advocates in the vegan world. The answer you get on vaccines often depends on whom you ask, but in general terms, the use of animals in vaccine development and testing is still a point of contention for many vegans. In a debate over the ethics of vaccines published in Vegan Life Magazine, the skeptical side was dominated by people who said they avoid using animals and animal products, including medicines tested on animals. But as with GM crops, the rhetoric here was bolstered by traditional anti-immunization arguments, namely that vaccines are the product of a huge pharmaceutical industry that favors profit over health.

Animal testing may no longer be needed as vaccine science evolves. Right now, though, animal testing is necessary to guarantee the safety and efficacy of vaccines for humans. That said, many vegans have taken a nuanced stance on this difficult issue by prioritizing their values. Yes, animal testing is frowned upon, they say, but the current pandemic has reminded us why vaccination is so important. Several vegan associations have made public statements to that end, encouraging their supporters to prioritize their well-being (after all, humans are animals, too!) as a means of advancing their ideological goals.

The Vegan Society, for instance, released a statement in December 2020 encouraging its members “to look after their health and that of others, in order to continue to be effective advocates for veganism and other animals.” By encouraging individuals to take responsibility for their actions and inviting them to make informed decisions about vaccines, the society’s position matches Merkley’s take on vaccines:

We have a strong moral obligation to our communities to be vaccinated to prevent illness. [Taking] medication doesn’t make you less vegan, and I don’t think we need to tolerate ableist views that say otherwise …. Because veganism is viewed as a movement toward the liberation of both human and non-human animals …. this is not a contradiction.

Even animal rights group PETA, which hasn’t been shy about its opposition to testing agricultural and medical products on animals, has endorsed COVID-19 vaccines, recognizing that

The goal of being vegan and advocating for animal rights should always be to bring about positive change for animals. As long as tests on animals are a legal requirement, refusing to take a medicine on ethical grounds will not help animals who have already been used in tests or spare any the same fate in the future.

The greater risk: myths and conspiracy theories

Science denialism has created an alternative reality where climate change does not exist, GMOs are a colonialist plot to control developing countries, and vaccines are poison pushed by “Big Pharma” to make a profit. These are not just fringe internet conspiracies; they are grave threats to the advances in medicine and agriculture that allow more people to live longer, healthier lives. We have to discredit these conspiracies before they do more damage to public trust in science.

As our pro-GMO vegan friends have demonstrated, the best way to do that is to change the minds of people who find fringe ideas compelling, using their own values to do it. The alternative is to let dangerous misinformation spread unchallenged. Having lived through a pandemic made worse by rampant junk science, we know where that leads.

Luis Ventura is a biologist with expertise in biotechnology, biosafety and science communication, born and raised in a small town near Mexico City. He is a Plant Genetic Resources International Platform Fellow at the Swedish University of Agricultural Sciences. Follow him on Twitter @luisventura

Viewpoint: Female, younger, better-educated and affluent – How ‘alternative medicine’ has taken America by storm and endangered lives

I am a skeptic and a curmudgeon, so I was surprised when a friend of 30 years asked if she could add me to her “Reiki Grid.” A Reiki Grid, I soon discovered, is a pattern made with crystals, allowing a Reiki practitioner to send “healing energy” to individuals whose names or pictures are placed on the grid.

My friend is not a contemporary shaman. She is a hard-nosed, highly competent 37-year-old female attorney. She also has Stage IV breast cancer. Since her diagnosis, besides conventional cancer treatment, she has turned to alternative medicine and treatment modalities, including Reiki, which she credits for playing a large part in her health remaining stable at the moment. And what is Reiki? If I’m being diplomatic, I would describe it as “energy healing,” a sub-type of alternative medicine involving the practitioner placing their hands lightly on or just above the patient’s body in order to “transfer” energy. If I’m being truthful, I’d describe it as utter woo—a glaringly obvious pseudoscience.

And yet it is a pseudoscience many women of my demographic embrace. My lawyer friend was not the only chum from my childhood diagnosed with cancer this year. Another friend in her 30s, a former nurse, was diagnosed with early-stage neuroendocrine cancer of the cervix. Anxious to learn more about what they were going through, and dissatisfied with the dry statistics available online, I began reading cancer blogs (and other forms of social media), focusing on ones written by women around our age, in their 30s and 40s. This led to breast cancer blogs, which in turn led to metastatic breast cancer (MBC) blogs. What started as a layperson’s dabbling became an obsession. I probably read 30 MBC blogs, going through their entire archives. In my unscientific way, I began picking out patterns. One was that the majority of these youngish female bloggers tried using alternative medicine at some point, as their diseases progressed.

People dying of cancer are often desperate. Presumably, some are going to make unorthodox choices they might not have made when they were in good health. After all, what do they have to lose? Still, I was surprised by the high percentage of young women who bothered exploring alternative options that, to me at least, were a waste of time and money—things like Reiki, reflexology, coffee enemas (!), daily affirmations, homeopathy, alkaline-“purified” water, and so on. The list was endless. Many of these bloggers were well-educated professionals: office administrators, nurses, psychiatrists, professors, and teachers. A number of them were lawyers, like my friend. Shouldn’t these women, of all people, I thought, be less susceptible to the dubious allure of non-science-based medicine? And they weren’t just using it casually, hoping to reduce stress after a devastating diagnosis. Many actually believed that if they just hit on the right combination of alternative medicine options—say, intermittent-fasting, yoga, and acupuncture—that they could beat back, or even cure their cancer. It was magical thinking from a group of people I expected to be rational.

But they were not. I began researching further, and soon learned that metastatic breast cancer bloggers were not outliers when it came to embracing alternative medicine. They fit nicely into a larger demographic trend. Younger women who are middle-class or above are more likely than any other group to use alternative medicine. But why?

Female, younger, better-educated, and wealthier

Before going further, a note on terminology: in this article, I will mostly be using the term "alternative medicine" to refer to any medicine or treatments that have not been scientifically proven and are generally considered outside the realm of conventional medicine. Other terms are used more or less synonymously, such as the most recently coined term, so-called "integrative medicine." Holistic medicine ("whole-body") and CAM (complementary and alternative medicine) are also often used. CAM is likely the most commonly used term in the alternative medicine community, although as Dr. Steven Novella, an American neurologist and alternative medicine skeptic, states, it is little more than a marketing brand. I will not be using this euphemism because my research indicates there is usually nothing "complementary" about adding alternative medicine to conventional medicine.

The alternative medicine industry is enormous and growing. In 2019, it generated an estimated global revenue of approximately 69 billion USD. According to the last broad survey conducted by the US National Center for Complementary and Integrative Health (NCCIH), from 2012, "natural products" are the most popular form of alternative medicine in America, a survey category consisting largely of supplements such as herbs, vitamins, and probiotics; 17.7 percent of survey respondents used natural products. Mind-body practices were next in popularity, including deep breathing (10.9 percent), yoga/tai chi/qi gong (10.1 percent), chiropractic or osteopathic manipulation (8.4 percent), meditation (8.0 percent), and so forth.

And who primarily is taking all this ginseng and doing all this meditation? Overwhelmingly it is women. A secondary analysis of the data collected in the 2012 NCCIH survey indicated that women were about three times more likely than men to use alternative medicine. Compared to people who do not use alternative medicine, users were more likely to be “female, reside in the Midwestern or Western USA, be non-Hispanic White, have a bachelor degree or higher, have higher personal earnings, be married or living with a partner, and have greater family spending on medical care.” This general profile was consistent across the literature. For example, a 2018 study researching links among alternative medicine-use, conventional cancer treatment refusal, and overall survival, noted that users were more likely to be female, of higher socioeconomic status, and better educated. In addition, they were more likely to be younger and to have private health insurance.

In other words, these were women with options: largely white, middle-class women. Unlike many women in developing countries who do not have access to science-based medicine, and are forced to rely on low-tech natural options (and suffer the health consequences), the women who use alternative medicine in the West do not need to; they choose to. Is this necessarily a bad thing? The answer is: sometimes.

Contrary to popular opinion, alternative medicine is not always harmless, and when patients use it instead of conventional medical treatment, it can even be deadly. In the 2018 study cited above, researchers found that cancer patients who used some form of alternative medicine alongside conventional medical treatment did not fare worse than those who chose conventional medicine alone. However, when people using alternative medicine chose it as their sole form of health treatment—while refusing recommended conventional cancer treatment like surgery, chemotherapy, and radiation—they died sooner. A 2012 University of Alberta retrospective study of breast cancer patients also showed significantly worse survival rates for alternative medicine users who declined standard treatment. Furthermore, the 2018 study also found that alternative medicine-users were much more likely to forego some form of standard cancer treatment compared to non-users. (Interestingly, a 2016 study showed that dietary supplement use, but not mind-body practices, was associated with skipping chemotherapy.) Taken as a whole, the research indicated that for patients with serious illnesses, just dabbling with alternative medicine could ultimately lead to dangerous health decisions like delaying or refusing conventional treatment. And as some doctors have warned, wasting precious time on alternative remedies can allow cancers to spread to the point of becoming untreatable.

Despite these dangers, the alternative medicine industry is thriving among women. Scholarly literature over the past few decades has begun exploring the reasons why well-educated, middle-class women are increasingly opting for it. One study described the alternative medicine subculture as being identifiable by “commitment to environmentalism, commitment to feminism, and interest in spirituality and personal growth psychology.” These words conjured up images of many of the women I went to graduate school with: politically liberal, intelligent, and faintly granola, with a love of Pilates and all things organic.

Similarly, a 2007 literature review on the beliefs of alternative medicine users showed that they often had the following characteristics: postmodern, rather than conventional belief systems; an appreciation of health approaches perceived as “non-toxic” and holistic; a belief in psychological factors as a cause of illness; and a view of themselves as both unconventional and spiritual. Finally—and most significantly, I believe—there was a great deal of evidence suggesting that alternative medicine users “want to participate in treatment decisions, are likely to have active coping styles and might believe that they can control their health.”

These last findings resonated. If one theme jumped out while I was reading the metastatic breast cancer blogs, it was that these women were on a quest for control, both over their illnesses and their lives.

The quest for control

Statistics and scholarly articles are useful, but they only tell part of the story. Reading the blogs and other social media posts written by metastatic breast cancer patients gave a human face to the numbers. One tale played out all too often, I noticed. The personal details varied, but the general outline was remarkably consistent: A-type woman gets cancer, is shocked; A-type woman battles back, using mostly conventional medicine; cancer progresses; A-type woman gets sicker, blames conventional medicine for being ineffective and disempowering, and begins/continues/ramps up alternative medicine-use to take control of the situation; cancer progresses; A-type woman dies.

Mixed among those timeline elements there were sometimes remission or non-progression periods of months or even years. Too frequently, the MBC bloggers and social media users attributed these periods of stable health to the magical healing properties of Vitamin C drips/mindfulness/magnet therapy/cannabis/insert quackery here. Sometimes they mistook these periods for being cured, and distressingly, began preaching the gospel of whatever particular alternative option they credited for curing them.

One such example was Stefanie LaRue of Venice Beach, California, a beautiful and charismatic advocate for Rick Simpson Oil (RSO), a cannabis oil with a comparatively high concentration of THC. LaRue was diagnosed with Stage IV breast cancer in 2005, aged only 30.

Stefanie LaRue

Remarkably, LaRue battled this highly lethal cancer for eight years, undergoing six rounds of chemotherapy and a mastectomy, in addition to other conventional treatment, before her third recurrence in 2013. At this point she decided to forego further chemotherapy, opting for RSO instead, which she researched on her own. In an interview with Medical Jane (a perhaps less than unbiased source), LaRue claimed, “Cannabis oil killed all of the tumors in my body. My monthly lab and quarterly scan results are proof that the cannabis oil treatment worked.” LaRue’s last clear scan was in December 2014. Based on her social media posts, there is limited evidence that she availed herself of at least some of the conventional cancer treatment options available, along with the RSO. LaRue’s cancer “cure” lasted two and a half years. She died on May 31st, 2017.

In all likelihood, as oncologist Dr. David Gorski states repeatedly in his invaluable anti-quackery blog, Respectful Insolence, it is conventional treatment—not cannabis or what have you—that can help slow down disease progression for patients like LaRue and others. Gorski explains that surgery (mastectomy or lumpectomy) alone is often fairly effective for locoregional control of early-stage breast cancer, leaving a local recurrence rate of 30–40 percent. Radiation on top of the surgery brings the possibility of recurrence down to 10 percent. Chemotherapy and hormonal therapy help achieve greater systemic control and thus further improve patient survival, but this effect is most impressive with advanced cancers. RSO was almost certainly not what was keeping LaRue alive. Surgery and chemo were. Furthermore, there is no credible scientific research showing that any of the alternative medicine products or modalities can cure cancer. Repeat: Reiki and weed cannot cure cancer.

Another woman whose story struck me as particularly poignant was Rachel Petz Dowd, wife and mother of three, and a former Miss New Hampshire. Dowd died in 2016, age 47, after a two-year battle with breast cancer. During that time, she wrote a blog entitled Rachel’s Healing: My Journey to Heal Breast Cancer. As the title suggests, Dowd appeared to believe she would fully recover—right up to her final posts written from a pricey Mexican alternative cancer treatment hospital. Dowd’s blog is replete with references to the supposed healing powers of alternative medicine, which she favored for many of the reasons outlined in the scholarly literature cited above. After reading so much unscientific drivel from alternative medicine practitioners, I recognized the language right away. Dowd habitually wrote about “detoxifying” her body and strengthening her immune system, something she believed could be achieved through practices like “clean” eating, juicing, and even taking coffee enemas.

Rachel Petz Dowd

Near the beginning of the blog, Dowd laid out her extensive and punishingly restrictive anti-cancer health routine. Her daily diet included the following: essiac and chaga teas; various supplements, including fish oil and iodine; lemon water with fulvic acid mineral complex; Budwig Diet elements (an unproven anticancer diet consisting largely of flaxseed mixed with cottage cheese); Laetrile; free-range meat; salad; and Chinese herbs. She restricted herself from eating “sugar, fruit, artificial sugars… alcohol of any kind… things that can turn into sugar such as grains, breads, beans, corn, rice… root veggies that contain sugar either [sic] such as beets, carrots, and no vinegar or yeast of any kind. No dairy except for high quality eggs [?] and organic low-fat cottage cheese, etc…” For treatment, Dowd favored things like magnet therapy, meditation, vitamin IV infusions, infrared sauna sessions, acupuncture, hyperbaric oxygen chamber sessions, and much, much more, apparently making multiple trips weekly to a nearby naturopath clinic.

It was obvious to me how Dowd had become a Miss New Hampshire winner: besides her beauty and energy, this was a woman possessed of rigid self-discipline. If the blog was any guide, she regarded healing from cancer as her full-time job, and scheduled every minute of her day accordingly. Her mania for self-control extended beyond therapy and diet. Dowd even went so far as to feel guilty about not being able to control her thoughts, confessing, “I need to work more on finding ways to get my mind thinking about fun, joy, and laughter more often, so I am pleased we are heading into summer with the kids.”

As Gorski observes with sensitivity in his anti-quackery blog, a breast cancer diagnosis, and the challenges of navigating an often impersonal medical system, can cause once-confident women to feel like they have lost control over their lives. Alternative medicine practitioners appear to offer their patients a “human touch.” Gorski states that when a woman decides on alternative options, “she often sees herself as ‘taking control’ of her treatment from uncaring doctors whose treatments, she is told, do not treat the root cause of her disease. Understandably, she may feel liberated and back in control.”

This was precisely my impression after reading about LaRue and Dowd’s cancer experiences, and their decision to rely more heavily on alternative medicine. In a letter to her oncologist, Dowd claimed she took an “integrative” approach to medicine (this is a common refrain among alternative medicine users), but her distrust of chemotherapy and radiation is evident throughout the blog. At one point, she decided against the oncologist’s recommended radiation treatment, despite being made aware of the higher rate of recurrence without it, instead opting to use her “arsenal of nurturing therapies and supplements.” Later on, Dowd blamed chemotherapy for “tearing down my immune system,” instead of crediting it for keeping her cancer at bay as long as it did. Her general attitude, expressed early on in the blog, is this: “When I look at nourishing solutions I feel empowered. When I consider chemo and radiation I feel helpless.”

When empowerment becomes toxic

There is nothing obviously empowering about being diagnosed with Stage IV breast cancer, though there is a determined media campaign to make it so. Unlike earlier-stage breast cancer, it’s a terminal disease. At this stage, the cancer has spread—metastasized—through the bloodstream to one or more secondary locations in the body, most commonly the bones, liver, lungs, or brain. Median survival after diagnosis is a scant three years.

That doesn’t mean a patient should lose all hope and assume death is imminent. Eleven percent of women under the age of 64 diagnosed with metastatic breast cancer will live 10 years or more, and there are rare instances of patients living 18 years or longer and enjoying relatively long periods of stable health. But I found it bizarre—and willfully perverse—that it had become common for patients suffering from this debilitating and deadly disease to refer to themselves as “thrivers.” Some were compelled to explain how their diagnosis was an “opportunity for growth.” Some poured their battered bodies into lingerie and fishnets, and walked the catwalk. Others maintained glamorous Instagram accounts whilst dying in a hospital bed. And the pink that has become a highly recognizable marketing brand for breast cancer research—my God, all that cheery, lobotomized pink drove me mad.

These assertions of selfhood struck me as desperate and sad. It was also notable that these behaviors were common among women with cancer, not men. Men with prostate cancer do not, apparently, feel the pressing need to show the world that they are “still empowered, strong and sexy!” in the words of one designer who used breast cancer patients as runway models for her New York Fashion Week show.

I remembered my own mother’s brief, violent struggle with renal cancer. By no stretch of the imagination would I call what I saw then “thriving,” or being “empowered.” Yet none of the suffering she went through ever for a split-second took away her humanity. Why isn’t it okay, I wondered, for so many younger women dying of cancer to admit that this is a near unequivocally bad thing, that they felt fear and pain, and that they had little to no control over their disease? In a rare and revealing passage on her blog, Dowd finally admitted, in passing, to vulnerability: “I’ve always been great at appearing like everything is fabulous on the outside, and it was one of my strengths in the business world… never let anyone rattle me or see me sweat, but there are emotional scars from the ordeal of going through this process.” Evidently, there is a price to be paid for years of repressing negative emotions in order to always appear strong.

The story of Kate Callaghan, a New Zealand-based nutritionist, also illustrates the limits of “empowerment” as a helpful concept when it comes to terminal illness. Callaghan was yet another attractive, powerful, intelligent A-type woman (what I was beginning to recognize as a recurring breast cancer type) who had the misfortune of being diagnosed with Stage IV breast cancer. She died in June of this year, at 36 years old, just seven months after her diagnosis.

Kate Callaghan

Callaghan seemed to be the last person one would expect to get cancer. She branded herself “the Holistic Nutritionist,” and was also a personal trainer and lifestyle coach, “specialising in hormone healing.” She cohosted a podcast and had written a book on the topic, becoming interested in it after successfully overcoming her own hormone-based amenorrhea and infertility. Her online presence, including her Instagram account, is liberally sprinkled with the jargon of the 21st century alternative medicine industry, using words like “wellness,” “empowerment,” “holistic,” and “detoxification.” After going through her social media, and listening to some of her podcasts, I was struck by the fact that here was yet another instance of a younger woman with terminal breast cancer who seemingly believed she could cure herself through a masochistic “health and wellness” routine.

In one podcast, Callaghan described said routine, mentioning, among many other things, doing hot yoga; taking numerous supplements and vitamins (including Echinacea, curcumin, broccoli sprout powder, B17, Vitamin D, Omega 3s, etc.); injecting her breast with mistletoe extract; “juicing;” and doing intermittent fasting (she only allowed herself to eat food eight out of every 24 hours). She sheepishly admitted to drinking coffee once a day, careful to point out that the coffee was organic and malt-free, and that she took it with but a single drop of Stevia, some coconut oil, and 5 or 6 cardamom pods (“cardamom has been shown to kill cancer cells,” she claimed).

I found her Instagram account to be similarly rigorous and similarly distressing. With a terminal diagnosis, I would’ve headed straight into the loving arms of Brother Bourbon and Sister Chocolate, but Callaghan displayed heroic self-control. Besides maintaining a virtuous diet, she did things like take a home sauna “every second day for 45 mins at 50 degrees C.” She admitted to discomfort: “Yes, it’s f-ing hot, but it needs to be for my current health condition.” The wellness laundry list in another of her Instagram posts is exhausting just to read. Instead of “fighting cancer,” a metaphor she disliked, Callaghan wrote, “I focus on how can I nourish myself more? How can I flood my body with the nutrients it needs to function optimally, while avoiding things that throw it off kilter? How can I calm my mind and decrease my stress levels, which will strengthen my immune system? How and where can I find more joy and laughter in my life? How can I reduce my exposure to chemicals? How can I show myself more kindness each day, and every second and minute of each day?” [Italics added for emphasis.]


Like Dowd, Callaghan attempted, with military precision, to schedule “joy” into her life, adding it to the many tasks she felt she must complete that day. Reading this, I felt depressed that a woman so intelligent and strong placed such a heavy burden of perfection on herself, even whilst dying. In her final months, most of her Instagram posts are upbeat. One stands out, featuring a close-up photo of Callaghan’s face, red and swollen with tears. In the caption she wrote about feeling overwhelmed: “Some days I feel like I’m nailing this whole cancer thing. And then other days I just want to wake up and realise it was all a bad dream.” She then immediately apologized to her readership for the “downer of a post.”

In a podcast interview recorded just a few months before she died, Callaghan advised other cancer patients to remain hopeful and not to “buy into” a negative prognosis. Throughout the interview (with a few tell-tale faltering moments) she insisted she was going to recover from her terminal cancer, and said she refused to accept her prognosis as reality, lest it become her reality. She also made the unwarranted assertion that “there are people curing themselves all the time of Stage IV cancer.”

When the host asked her what belief people should embrace in order to live a fulfilled life, Callaghan paused a moment before replying, “That they get to choose their path.” She added, “In any given circumstance.” In the context of the interview, it was clear she was not just saying, stoic-like, that she had the power to choose how to face her own death. No, she was implying that she could overcome premature death through some combination of magical thinking, willpower, and alternative medicine. It was absurd, of course. Callaghan had no choice. As I listened to her voice, I could sense a steely will cocooned inside a fragile body. I knew by then that Kate Callaghan was married and had two small children. I wept.

Stefanie LaRue, Rachel Petz Dowd, and Kate Callaghan were women of courage and determination. Writing this essay, I was always aware that they were not just evidence of a thesis, but someone’s mother, sister, daughter, and friend. Who among us can say with certainty that we would handle a diagnosis as dire as terminal cancer with as much grace as they did? I cannot. Our time will also come, but there are lessons we can learn from their stories while we are yet living.

The obvious one is not to place undue trust in alternative medicine: Practising yoga may well reduce stress, but it cannot cure Stage IV cancer. The spiritual lesson is more profound. There comes a point, and middle-class Western women have reached it, when the feminist creed of empowerment becomes toxic. We can complete a medical degree, raise a family, run a marathon, but we cannot outrun death. The same masterly control, the same genius for organization that A-type women apply successfully to excel in their careers, is applied in vain to tame terminal illness. When we accept that there are limits to our control, we free ourselves from a crushing psychological burden—a burden heavier than terminal cancer, a burden heavier, perhaps, than even death.

S. Stiles is a writer based in Los Angeles. Her 2020 campus novel, The Adamant I, is a criticism of the contemporary university. She holds a PhD in English and worked as a college professor for five years. She can be contacted at [email protected]

A version of this article was originally posted at Quillette and has been reposted here with permission. Quillette can be found on Twitter @Quillette


GMO FAQ: Do GMO Bt (insect-resistant) crops pose a threat to human health or the environment?

Bt, short for Bacillus thuringiensis, is a bacterium found naturally in the soil. It is extremely effective in repelling or killing target insects but is harmless to beneficial insects, reptiles, birds and mammals, including humans, which cannot digest the active Bt proteins.

Bt was first used by farmers in France in 1920 and gradually adopted by organic and some conventional growers. In the US, Bt was first used commercially in 1958, and by 1961 it was registered as a pesticide (registration is now handled by the Environmental Protection Agency, or EPA). Following World War II, scientists began developing powerful synthetic pesticides that were widely used into the early 1960s. Some insects evolved resistance to these chemicals, sparking research by industry and government that led to the engineering of plants that could express Bt proteins themselves. Scientists were able to transfer the genes that encode the crystal, or Cry, proteins that are toxic to insects into the genomes of certain crops. When hungry insects try to eat the plants, the pests consume the toxin and die in a matter of days.

Bt corn was the first of these crop varieties, introduced in 1996. Scientists have since isolated thousands of Bt strains, many of them incorporated into different varieties of GMO crops, including corn, brinjal (eggplant), potato and cotton. Today, Bt varieties comprise more than 80 percent of the cotton and corn (used mostly for animal feed, although also some sweet corn) grown in the US. Bt potatoes are no longer grown due to a lack of demand from farmers. At the end of 2017, an estimated 23.3 million hectares of land were planted with crops containing Bt genes. The following table shows the countries that have commercialized Bt crops (with single and stacked genes) and their products, from 1996 to 2017.

Image credit: ISAAA GM Approval Database

Overall insecticide use in the US has plummeted since the mid-1990s, largely because of Bt commodity crops. A meta-analysis published in the journal PLoS One looked at the environmental impact of Bt cotton in the US between 1995 and 2015. The authors found that the insect-resistant crops cut pesticide use and crop losses associated with common pests like the bollworm, corn earworm and tobacco budworm by 47 to 81 percent, depending on which region of the country they examined.

More than 17 percent of the brinjal grown in Bangladesh since 2014 is Bt brinjal, engineered to kill the brinjal fruit and shoot borer (BFSB). Historically, virtually all brinjal farmers in Bangladesh relied solely on insecticide sprays to control BFSB, applying as many as 84 sprays during the growing season; Bt growers now need only 1 or 2 applications, leading to a sharp drop in pesticide-related illnesses. A 2018 study found farmers saved 61 percent of pesticide costs compared to non-Bt brinjal farmers, experienced no losses due to insect attacks (versus non-Bt farmers who experienced 36-45 percent infestation) and earned higher net returns.

Bt cotton was commercially introduced in India in 2002 (though it had circulated on the black market years earlier), and now makes up 95 percent of the market. The genetically engineered crop increased yields by 30-60 percent and household income by 18 percent. The Bt boom transformed the country from a net importer of cotton to the world’s third largest exporter behind only China and the United States.

Some scholars critical of genetic engineering have argued that India’s experience with Bt cotton has been oversold, and suggest that yield increases have had more to do with changes in fertilizer use and other factors unrelated to insect-resistant crops. Responding to this criticism in October 2020, agricultural economist Matin Qaim agreed that many other factors can affect yields. Yet after reviewing the relevant data, Qaim concluded:

“Results showed that—after controlling for all other factors—Bt adoption had increased cotton yields by 24%, farmers’ profits by 50% and farm household living standards by 18% …. The same data also revealed that chemical insecticide quantities declined by more than 40% through Bt adoption, with the largest reductions in the most toxic active ingredients previously sprayed to control the American bollworm.”


The incredibly quick adoption of Bt cotton seeds by Indian farmers accelerated a reduction in insecticide use already underway in Indian cotton fields. A study pegged the drop at an estimated 60 percent, avoiding at least 2.4 million cases of pesticide poisonings each year. As a corollary benefit, Bt seeds also reduced pesticide applications by non-Bt farmers because of the halo effect of an overall smaller pest population.

An organic industry trade group called the Organic Consumers Association argues that Bt crops are “lethal” to beneficial insects including ladybugs, butterflies and honeybees. There is no evidence to support that claim. A 2003 article authored by EPA researchers and published in Nature Biotechnology found that Bt corn, cotton and potato varieties do not pose a threat to beneficial insects, including honeybees, ladybugs and butterflies.

A 2016 review article examined 76 studies published over the preceding 20 years that investigated the impact of Bt crops on beneficial insects, natural pest controllers, bacteria, growth-promoting microbes, pollinators, soil dwellers, aquatic and terrestrial vertebrates and mammals. The authors wrote that no “….significant harmful impact has been reported in any case study related to approved [Bt crops].”

Anti-GMO osteopath and natural products salesman Joseph Mercola has maintained that Bt crops are fueling the development of “super-pests” resistant to the toxic Cry proteins produced by the plants, which in turn is driving up insecticide use.

“It’s clear that Bt plants have led to decreases in spraying,” Marcia Ishii-Eiteman, senior scientist at the Pesticide Action Network, told Grist’s Nathanael Johnson. “But,” she added, “as was predicted 10 years ago, we are starting to see the insect resistance to Bt.”

There is some truth to this claim, as both organic farmers using Bt sprays and conventional farmers using Bt seeds have experienced the evolution of some insects resistant to the natural insecticide. There has been at least one notable problem: in Puerto Rico, insects became nearly impervious to Bt corn in just three years.

Spodoptera frugiperda gained resistance to genetically engineered corn in Puerto Rico in just three years.

However, the insect-resistant crops have been a blessing to the environment, claimed University of Arizona entomologist Bruce Tabashnik. The European corn borer remains completely susceptible to Bt, a boon for corn farmers in the US Midwest. Insecticide use has continued its sharp downward trend in the US. And the use of organophosphates (a far more toxic and hazardous class of insecticide) fell 55 percent between 1997 and 2007, in large part because of the use of insect-resistant transgenics. In January 2021, Tabashnik and colleagues reported that Bt cotton (along with sterile pink bollworm moths) helped rid the cotton-growing areas of the continental United States and northern Mexico of the pest, which cotton growers in both countries had battled for a century:

“The removal of this pest saved farmers in the United States $192 million from 2014 to 2019. It also eliminated the environmental and safety hazards associated with insecticide sprays that had previously targeted the pink bollworm and facilitated an 82% reduction in insecticides used against all cotton pests in Arizona.”

Anti-GMO activists have also argued for many years that Bt crops pose a serious threat to human health. Jeffrey Smith, an anti-GMO activist who heads the one-man Institute for Responsible Technology, has alleged without evidence that Bt crops may cause sterility and cancer. The activist website GM Watch has argued they can potentially damage vital organs such as the liver and kidneys.

Citing a small Canadian study from 2011, Dr. Mercola said in 2013:

These shocking results also raise the frightening possibility that eating Bt corn might actually turn your intestinal flora into a sort of “living pesticide factory” essentially manufacturing Bt toxin from within your digestive system on a continuing basis through the transference of the Bt-producing gene to your gut bacteria.

Mercola said consumption of Bt crops may lead to “gastrointestinal problems autoimmune diseases [and] childhood learning disorders” and that rats fed a variety of Monsanto’s Bt corn experienced an immune response indicative of “….various disease states including cancer. There were also signs of liver and kidney toxicity.”

Many activists have argued that Bt crops can produce dangerous allergens. In 2018, GM Watch said that a “study performed in mice found that the GM Bt toxin Cry1Ac is immunogenic, allergenic, and able to induce anaphylaxis (a severe allergic response that can result in suffocation).”

Following the publication of that study, in November 2018, the European Food Safety Authority (EFSA) re-evaluated the allergenicity of the Cry1Ac protein at the request of the European Commission. The EFSA wrote that it stood by its seven previous evaluations published between 2010 and 2018, noting that “….other risk assessment bodies conclude that there are currently no indications of safety concern regarding Cry proteins in the context of the GM plants assessed.” In a follow-up study of Bt corn published in October 2020, the EFSA again concluded that the evidence “does not indicate any adverse effects on human and animal health or the environment arising from the cultivation of [Bt] maize …. Consequently, previous evaluations on the safety of maize MON 810 …. remain valid.”

The US Environmental Protection Agency has addressed these health concerns numerous times, beginning in 2001 when it issued a 50-page report evaluating the potential health risks, including allergenicity, posed by four Cry proteins commonly inserted into Bt corn and potato. It concluded: “None of the products registered at this time, all of which have tolerance exemptions for food use, show any characteristics of toxins or food allergens.”

The EPA noted additionally that consumers may be exposed to Bt toxins at very low levels through processed foods and drinking water, but “….a lack of mammalian toxicity and the digestibility of the plant-incorporated protectants has been demonstrated.”

Research has continued to affirm the relative safety of Bt crops. A January 2021 animal feeding study published by Nature, for example, found that Bt rice was unlikely to pose a greater health risk than its conventional counterpart. The study authors concluded that “GM Bt rice caused no obvious adverse effects on rats as evaluated by several biological parameters, including organ weight, serum chemistry, hematology, thyroidal and sexual hormones level, urinalysis, and histopathology.”

Have any of Earth’s creatures stopped evolving?

The goblin shark, duck-billed platypus, lungfish, tadpole shrimp, cockroach, coelacanth and the horseshoe crab — these creatures are famous in the world of biology, because they look as though they stopped evolving long ago. To use a term introduced by Charles Darwin in 1859, they are “living fossils”. And to their ranks, some have added humans, based on the idea that technology and modern medicine have, for all intents and purposes, eliminated natural selection by allowing most infants to live to reproductive age and pass on their genes.

It may be tempting to conclude from the sharks, horseshoe crabs and other creatures that evolution does often stop, and that Darwin’s “living fossil” term makes sense. But such a conclusion would be wrong.

Some argue that Darwin never intended the phrase to be used seriously. “The term is over-simplifying and leads to people believing that some things haven’t evolved, which is so wrong,” noted Africa Gómez, a biologist at Hull University in the United Kingdom who led a genetic analysis of the tadpole shrimp that in 2013 demonstrated that this “living fossil” is no fossil at all. “They have been evolving non-stop and speciating and radiating, so why on earth are they called living fossils?”

The same is true if one looks closely at the other “living fossils.” They are all evolving, humans included. This is partially because there is more to biology than meets the eye, with things changing constantly at the cellular and biochemical level. But it’s also because Darwin’s watershed discovery, natural selection, known in popular language as ‘survival of the fittest,’ is not the only evolutionary force.

Goblin shark

Death prior to reaching reproductive age and lack of reproductive capability are not the only factors controlling biological change. Mathematics also plays a role here. Keep the environmental conditions the same around a biological group, remove selective pressures, but mess with the numbers of individuals in a population, and evolution still happens. Living things evolve, whether you see it on the surface or not.

There is more to evolution than meets the eye

Natural selection is the best-known evolutionary force and there are numerous examples of it operating quickly in recent times. To illustrate to biology students how natural selection works, teachers and textbook writers typically use examples of creatures with physically obvious changes. Among the more popular is the peppered moth of England. Early in the Industrial Revolution, peppered moths were light-colored. This camouflaged them against the white bark of birch trees around industrializing cities. Over a half-century, as the soot from burning coal in factories darkened the bark of the trees, the moth population darkened too. It happened through natural selection: a dark moth on a white tree is likely to get eaten by a bird, but once the trees darkened, the dark moths had the advantage.
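To make the mechanism concrete, here is a minimal simulation sketch, not taken from the article itself; the function name, survival probabilities and population size are illustrative assumptions. The only thing that changes over the generations is each color's chance of surviving bird predation:

import random

def moth_generation(dark_freq, dark_survival, light_survival, pop_size=10000):
    # One round of bird predation: each moth survives with a probability set by
    # its color, and the survivors found the next generation in proportion.
    dark = int(pop_size * dark_freq)
    light = pop_size - dark
    dark_survivors = sum(random.random() < dark_survival for _ in range(dark))
    light_survivors = sum(random.random() < light_survival for _ in range(light))
    total = dark_survivors + light_survivors
    return dark_survivors / total if total else dark_freq

random.seed(1)
freq = 0.01  # dark moths start out rare, as on pale pre-industrial trees
for generation in range(40):
    # Soot-darkened bark: birds now spot the light moths more easily.
    freq = moth_generation(freq, dark_survival=0.6, light_survival=0.4)
print(f"Dark-moth frequency after 40 generations: {freq:.2f}")
# Even a modest survival edge carries the dark form from rare to common.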

Another textbook example of natural selection is the high incidence of sickle cell disease in humans in places where malaria is endemic. Malaria and sickle cell disease are both deadly without modern medicine, but to have sickle cell disease one must carry two copies of a gene for defective hemoglobin, one from each parent. Having one defective copy and one normal copy of the relevant gene, however, protects an individual from the parasite that causes malaria, and it causes no sickle cell crises unless the individual engages in extreme exercise or travels to high altitudes.


The lesson from these examples is that natural selection, due to environmental factors, leads to overt changes, but that a biological status quo should persist as long as everything in the environment remains copacetic.

Doing the math

In the mid to late 19th century, Darwin enjoyed a fair amount of publicity in England, and throughout the world. That publicity was well deserved, but Darwin’s contemporary, Gregor Mendel, was effectively invisible. Working from a monastery in a region that is now part of the Czech Republic, Mendel made fundamental discoveries about inheritance that eventually would put him in chapter one of every genetics textbook. Darwin thought that children were a blend of their parents. But using plant models, Mendel figured out that this was not the case. Rather, he found that traits could disappear between one generation and the next, only to reappear in later generations.

Lungfish

Early in the 20th century, a British researcher named Reginald Punnett was asking himself, and his colleagues, a lot of questions about inheritance. Together with biologist William Bateson, Punnett would end up co-founding the Journal of Genetics in 1910. But Punnett also had a friend in the mathematician Godfrey Hardy. The two played a lot of cricket together, and one day on the cricket field Punnett mentioned that the problem of inheritance might best be understood in light of mathematics.

This led Hardy to publish what was called Hardy’s Law in June of 1908. It was an algebraic expression showing how the numbers and ratios of genes and traits should stay the same, or shift over time. It would have made Hardy the sole founder of population biology, if not for the fact that somebody else had already discovered it six months earlier. That man was Wilhelm Weinberg, a German physician who had derived a similar equation of population genetics in January, 1908. When this was realized, geneticists began referring to what we now call the Hardy-Weinberg Principle, and an equation by the same name.

The Hardy-Weinberg Principle shows that, if selective pressure is removed, a population reaches an equilibrium in which the ratios of its alleles (the alternate forms of each gene) stay the same from generation to generation, so there is no evolution. But there are a couple of other requirements. The population must be reproductively isolated (separated from other organisms that can produce offspring with it), and the population must be infinitely large.
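For readers who want the algebra, the textbook statement of this equilibrium (a standard formulation, not spelled out in the article itself) is brief. For a gene with two alleles, A and a, at frequencies p and q, with p + q = 1, random mating gives genotype frequencies:

\[
p^2 + 2pq + q^2 = 1
\]

Here p^2, 2pq and q^2 are the expected proportions of AA, Aa and aa individuals, and under the conditions listed above those proportions, and the allele frequencies behind them, hold steady from one generation to the next.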

If the number of individuals in an isolated, successful population is not infinite, then a force called genetic drift comes into play. Just as a short run of coin tosses is more likely to stray from a 50-50 split of heads and tails than a long one, genetic drift grows stronger as the size of a population shrinks. Other things that happen in nature are founder effects and bottlenecks, both of which you can equate to drawing a handful of gumballs from a jar and ending up holding gumballs with a ratio of colors that differs from the color ratio in the jar, due to the randomness of sampling.
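To see the coin-toss analogy in action, here is another minimal simulation sketch, again not from the article; the population sizes, generation count and random seed are illustrative assumptions. Each generation's gene pool is simply a random sample of the previous one, with no selection at all:

import random

def drift(pop_size, generations, start_freq=0.5):
    # Track the frequency of one allele in a diploid population
    # (2 * pop_size gene copies) under pure genetic drift.
    freq = start_freq
    for _ in range(generations):
        copies = 2 * pop_size
        # Each gene copy in the new generation is drawn at random
        # from the current gene pool.
        count = sum(random.random() < freq for _ in range(copies))
        freq = count / copies
    return freq

random.seed(42)
for n in (20, 2000):
    final = drift(pop_size=n, generations=100)
    print(f"Population of {n}: allele frequency drifts from 0.50 to {final:.2f}")
# A typical run: the small population wanders far from 0.50 and often loses or
# fixes the allele entirely, while the large population barely moves.

The same random-sampling logic, applied to a handful of founders or to the survivors of a population crash, produces the gumball-jar effects described next.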

Horseshoe crab

With a founder effect, a small group of individuals gets isolated, and due to chance the ratio of alleles for various genes is different from that of the mother population. The same is true in a bottleneck effect, which happens when the bulk of a population gets killed off and, due to chance, the ratio of different alleles in the surviving gene pool is different from what it was in the mother population.

Whereas founder effects and bottlenecks remove diversity from the gene pool of a population, the opposite can happen when migration and other factors bring populations together. Thus, while in the case of humanity modern medicine and other technologies may indeed be reducing the impact of natural selection, migration and founder effects have been playing major roles as transportation and other technologies have developed. And these phenomena may play a still more influential role if human colonization of the Moon, Mars, or free space becomes reality. In nature, all phenomena that change the gene pool operate in concert, affecting the course of evolution, whether in tadpole shrimp, lungfish, or humans.

David Warmflash is an astrobiologist, physician and science writer. BIO. Follow him on Twitter @CosmicEvolution

This article was originally published at the GLP on November 27, 2017.

We might be able to protect ourselves against future pandemics by gene editing embryos

Hollywood blockbusters such as X-Men, Gattaca and Jurassic World have explored the intriguing concept of “germline genome editing” – a biomolecular technique that can alter the DNA of sperm, eggs or embryos. If you remove a gene that causes a certain disease in an embryo, not only will the baby be free of the disease when born – so will its descendants.

The technique is, however, controversial – we can’t be sure how a child with an altered genome will develop over a lifetime. But with the COVID-19 pandemic showing just how vulnerable human beings are to disease, is it time to consider moving ahead with it more quickly?

There’s now good evidence that the technique works, with research normally carried out on unviable embryos that will never result in a living baby. But in 2018, Chinese scientist He Jiankui claimed that the first gene-edited babies had indeed been born – to the universal shock, criticism and intrigue of the scientific community.

He Jiankui. Credit: Bloomberg

This human germline genome editing (hGGe) was performed using the Nobel Prize-winning CRISPR system, a type of molecular scissors that can cut and alter the genome at a precise location. Researchers and policy makers in the fertility and embryology space agree that it is a matter of “when” and not “if” hGGe technologies will become available to the general public.

In 2016, the UK became the first country in the world to formally permit “three-parent babies” using a genetic technique called mitochondrial replacement therapy – replacing unhealthy mitochondria (a part of the cell that provides energy) with healthy ones from a donor.

COVID-19 protection

Scientists are now discussing genome editing in the light of the COVID-19 pandemic. For example, one could use CRISPR to disable coronaviruses by scrambling their genetic code. But we could also edit people’s genes to make them more resistant to infection – for example by targeting “T cells”, which are central in the body’s immune response. There are already CRISPR clinical trials underway that look to genome edit T cells in cancer patients to improve anti-tumour immunity (T cells attacking the tumour).

This type of gene editing differs from germline editing as it occurs in non-reproductive cells, meaning genetic changes are not heritable. In the long term, however, it may be more effective to improve T-cell responses using germline editing.

It’s easy to see the allure. The pandemic has uncovered the brutal reality that the majority of countries across the world are completely ill-equipped to deal with sudden shocks to their often already overstretched healthcare systems. Significantly, the healthcare impacts are not felt only by COVID patients. Many cancer patients, for instance, have struggled to access treatments or diagnosis appointments in a timely manner during the pandemic.

This also raises the possibility of using hGGe techniques to tackle serious diseases such as cancer to protect healthcare systems against future pandemics. We already have a wealth of information that suggests certain gene mutations, such as those in the BRCA2 gene in women, increase the probability of cancer development. These disease genetic hotspots provide potential targets for hGGe therapy.

Chemistry Nobel Prize recipients Jennifer A. Doudna and Emmanuelle Charpentier. Credit: J.L. Cereijido/EPA

Furthermore, healthcare costs for diseases such as cancer will continue to rise as drug therapies continue to become more personalised and targeted. At this point, wouldn’t gene editing be simpler and cheaper?

Climate change and malaria

As we approach the midpoint of the 21st century, it is fair to say that COVID-19 could prove to be just the start of a string of international health crises that we encounter. A recent report by the UN Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) emphasised the clear connection between global pandemics and the loss of biodiversity and climate change. Importantly, the report delivers the grim future prediction of more frequent pandemics, which may well be deadlier and more devastating than COVID-19.


It isn’t just more viral pandemics that we might have to face in the future. As our global climate changes, so will the transmission rates of other diseases such as malaria. If malaria begins presenting itself in locations with unprepared healthcare systems, the impacts on healthcare provision could be overwhelming.

Interestingly, there is a way to protect people from malaria – introducing a single faulty copy of the gene responsible for sickle cell anaemia. One copy of this faulty gene gives you a level of protection against malaria. But if two people who each carry a single faulty copy have a baby, the child could develop sickle cell anaemia. This shows just how complicated gene editing can be – you can edit genes to protect a population against one disease, but potentially cause trouble in other ways.

3d render of sickle cell anemia blood cells. Credit: Meletios Verras/iStock

Despite the first hGGe humans already having been born, the reality is that the technique won’t be entering our mainstream lives any time soon. The UK Royal Society recently stated that heritable genome editing is not ready to be tried in humans safely, although it has urged that if countries do approve hGGe treatment practices, they should focus on specific diseases caused by single genes, such as sickle cell anaemia and cystic fibrosis. But, as we have seen, it may not make sense to edit out the former in countries with high rates of malaria.

Another major challenge for researchers is unintended genetic modification elsewhere in the genome (so-called off-target edits), which could lead to a host of further complications across the genome network. Equitable access to treatment provides another sticking point. How would hGGe be regulated and paid for?

The world is not currently ready for hGGe technologies and any progress in this field is likely to occur at a very incremental pace. That being said, this technology will eventually come to be used in humans for disease prevention. The big question is simply “when?”. Perhaps the answer depends on the severity and frequency of future health crises.

Yusef Paolo Rabiah is a PhD Candidate at UCL Science, Technology, Engineering and Public Policy. Yusef’s PhD is focused on developing public policy frameworks for the introduction of germline genome editing technologies into the UK. Find Yusef on Twitter @PaoloYusef

A version of this article was originally posted at the Conversation and has been reposted here with permission. The Conversation can be found on Twitter @ConversationUS


‘New story unfolding’: Ancient finger bones found in Asia force a rethinking of human migration

The Nefud Desert is a desolate area of orange and yellow sand dunes. It covers approximately 25,000 square miles of the Arabian Peninsula. But tens of thousands of years ago, this area was a lush land of lakes, with a climate that may have been kinder to human life.

On a January afternoon in 2016, an international team of archaeologists and paleontologists was studying the surface of one ancient lakebed at a site called Al Wusta in the Nefud’s landscape of sand and gravel. Their eyes were peeled for fossils, bits of stone tools, and any other signs that might remain from the region’s once-verdant past.

Suddenly, Iyad Zalmout, a paleontologist working for the Saudi Geological Survey, spotted what looked like a bone. With small picks and brushes, he and his colleagues removed the find from the ground.

“We knew it [was] important,” Zalmout recalled in an email. It was the first direct evidence of any large primate or hominid life in the area. In 2018, lab tests revealed that this specimen was a finger bone from an anatomically modern human who would have lived at least 86,000 years ago.

Prior to this Al Wusta discovery, evidence in the form of stone tools had suggested some human presence in the Nefud between 55,000 and 125,000 years ago. To anthropologists, “human” and “hominin” can mean any of a number of species closely related to our own. The finger bone was the oldest Homo sapiens find in the region.

Archaeologists found this Homo sapiens finger bone, dating back some 86,000 years, at a site called Al Wusta in Saudi Arabia. Credit: Ian Cartwright/Max Planck Institute for the Science of Human History

The bone’s dating contradicts a well-established narrative in the scientific community. Findings, particularly from the area of modern-day Israel, Jordan, and Lebanon, known as the Levant region, have led to the understanding that H. sapiens first made their way out of Africa no earlier than 120,000 years ago, likely migrating north along the Mediterranean coast. These people settled in the Levant and their descendants—or those from a subsequent early human migration out of Africa—traveled into Europe tens of thousands of years later.

Only later, that story goes, did they journey into parts of Asia, such as Saudi Arabia. By some estimates, then, anatomically modern humans would not have been in what is now Al Wusta until about 50,000 years ago.

The fingerbone, then, adds a twist to the tale of how and when our species left the African continent and, with many starts and stops, populated much of the rest of the earth. A new crop of discoveries, particularly from Asia, suggest that modern humans first left Africa some 200,000 years ago, taking multiple different routes.

No longer is the Levant necessarily central—and points east could have had unforeseen importance to early human migrations. As anthropologist Michael Petraglia, of the Max Planck Institute for the Science of Human History, puts it, “A new story is unfolding.”

These findings could shed light on big unanswered questions, such as why humans made these migrations, what past environmental conditions were like, and how H. sapiens interacted with other hominins. But the changing narrative also underscores how much of our knowledge comes from—and is limited by—where archaeologists and other researchers have worked. The geographic emphasis has long been influenced not by science but by access, funding, and tradition.

The first hint that the long-held story of human journeys out of Africa had missed something critical came from within the well-studied Levant region, in the Misliya Cave in Israel. In 2018, archaeologists revealed that they had found a human jawbone in this cave.

The bone—dated with three different methods in the course of a decadelong investigation—is between 177,000 and 194,000 years old, pushing back the timeline of when humans first lived here by at least 50,000 years. And older stone tools found in layers beneath the jaw suggest that humans could have been present in this area even longer.

It’s possible, then, that humans left Africa and journeyed into the Levant—and elsewhere—even earlier than the date of this jawbone. This line of thinking gained still more traction in July 2019, when a group of scholars published novel findings on a skull discovered in Greece in the 1970s. That fossil, the new work suggests, is human and more than 210,000 years old.

But in addition to this changing timeline, researchers are rethinking where humans traveled when they left Africa. The Al Wusta find is just one example.

Researchers have discovered that these H. sapiens teeth, found in China, are at least 85,000 years old. Credit: S. Xing and X-J. Wu

In 2015, researchers in China published their finding of 47 human teeth, dating between 85,000 and 120,000 years old, in a cave in Hunan province. Until this discovery, the oldest modern human fossils found in southern Asia were only about 45,000 years old.

These new findings “oblige [us] to rethink when and the way we dispersed,” says forensic anthropologist María Martinón-Torres, director of the National Research Center on Human Evolution in Burgos, Spain, and a member of the team that discovered and studied the teeth. She adds: “There may be more than one ‘out of Africa’ dispersal … humans, like any other animal, may have expanded as far as there was not any barrier, ecological or geographic, that prevented them from doing so.”

In 2018, researchers in India published on the discovery of a collection of advanced stone tools. They say this find indicates a hominin presence stretching back at least 170,000 years—millennia earlier than previous research suggested. And some evidence suggests early humans may have headed directly toward Asia by crossing from Africa over the Arabian Peninsula, altogether bypassing the Levant, where so much of the earliest evidence of humans outside Africa has come from.

A combination of new discoveries, then, has shifted understandings of the timing, routes, and geographic range associated with H. sapiens’ dispersal out of Africa. But for archaeologists, the finds also flag a blind spot of sorts. As Martinón-Torres says, “These findings are also a big warning note regarding Asia.”

Indeed, there is growing awareness of the need to expand the geographic scope of paleontology and archaeology related to early human migrations and evolution. “For a long time,” Martinón-Torres adds, “Asia was considered like a dead end with a secondary role in the mainstream of human evolution.”

“There is a huge bias in archaeological fieldwork and where it’s occurring, and our theories on human evolution are built on these geographic biases,” says Petraglia, who with Zalmout and colleagues at the Saudi Commission for Tourism and National Heritage found the Al Wusta finger bone.

Several factors have contributed to this bias, explains archaeologist and writer Nadia Durrani, who co-authored Archaeology: A Brief Introduction with anthropologist Brian Fagan. Archaeology began more than a century ago “as a Western scientific discipline,” she says.

The first archaeologists, who were European and American, focused mainly on Mediterranean Europe and lands mentioned in the Bible, including modern-day Iran, Iraq, Egypt, Israel, and the West Bank. “People were interested in the Bible and classical issues,” including ancient Greece and Rome, Durrani says. As archaeologists made discoveries in those areas, the interest in those regions grew, and institutions sprouted up in those same places, which in turn fueled further research there.

“Countries where paleoanthropological research has been conducted for many decades are more likely to have important finds that are also well-known and valued by the people themselves,” says Katerina Harvati, director of paleoanthropology at the University of Tübingen. “And therefore, [they] are likely to have more funding opportunities.”

The opposite is also true. It can be difficult to convince colleagues or prospective funders of a place’s potential when it has been little explored and lacks certain forms of infrastructure. Environmental and natural barriers can come into play. Petraglia points out that working in areas that haven’t been well-explored can require starting from the beginning with tasks like surveys and mapping, and there is often no previous work to draw on.

For that matter, political issues may help or hinder archaeologists. Durrani participated in fieldwork in Yemen in the 1990s, for example, and later led tours at archaeological sites there. This work came to a halt in 2008 due to political instability in the area. Violence and conflicts pose serious barriers for access, she says.

Archaeologists survey the Al Wusta dig site. Credit: Klint Janulis

The new findings indicate that attitudes toward Asia are changing, with more and more attention turning to this region. The shift coincides with economic and political changes. In the past two decades, China has been inviting scholarship into previously unstudied regions. More recently, Saudi Arabia has been opening up certain sites for archaeology and tourism.

Over time, access and conditions will, scientists hope, further improve. In the interim, this research reveals that anatomically modern humans left Africa earlier than expected and traveled south, along the Arabian Peninsula, in addition to north.

However, some of these finds have drawn skepticism. Jeffrey Schwartz, professor emeritus at the University of Pittsburgh, cautions against drawing dramatic conclusions from the findings. “I think we are calling too many things H. sapiens,” he says.


By contrast, Mina Weinstein-Evron, an archaeologist at Haifa University who co-discovered the Misliya Cave jawbone, suspects that the recent findings are H. sapiens but agrees that the story of anatomically modern human dispersal is still far from clear. “We know nothing. We have a dot of evidence here and a dot of evidence there,” she says. “And then we use these big words like ‘migration’ and ‘dispersal.’ We talk as if they bought a ticket. But they didn’t know where they were going. For them it was probably not even a movement, maybe it was 10 kilometers per generation.”

What’s more, some genetic findings hint that even if humans traveled out of Africa and into Asia earlier than previously thought, it’s possible these early human migrations were ultimately unsuccessful from an evolutionary perspective. According to conclusions from three different groups of scientists who published in Nature in 2016, the DNA of Eurasians diverged from that of Africans 60,000 to 80,000 years ago. In other words, all humans alive today are descendants of H. sapiens who migrated out of Africa within that window, along with a small genetic contribution from other hominins, such as Neanderthals.

Scholars are recognizing that H. sapiens may have taken many different routes out of Africa, shown here in red. Credit: Catherine Gilman

Nonetheless, the earlier migrations are intriguing, says Luca Pagani, a biological anthropologist who authored one of the Nature articles. “Although it’s not going to change our idea of which migrations were a success, it does show a richer variety of attempts at dispersal,” he says, and that is an essential part of the story of early modern humans.

Indeed, the reasons certain early human migrations failed could illuminate major questions in archaeology. Martinón-Torres and her colleagues working in China, for example, have posited that early modern humans may have been in competition with Neanderthals or other hominins, which could have influenced their movements.

Petraglia, meanwhile, suspects early modern humans may have thrived in the Arabian site until water disappeared as the desert expanded. “If you want to know how climate change may affect us one day, well, we’ve got a whole story here about the effects of climate change on human populations,” he says. In short, the descendants of these intrepid humans may not have survived, but their stories could still guide us into the future.

Sara Toth Stub is a journalist living in Jerusalem and writing mainly about religion, business, travel, and archaeology. She has contributed to The New York Times, The Washington Post, US News & World Report, Archaeology magazine, National Public Radio, and other media outlets. From 2006 to 2015, she wrote about Israel’s economy for Dow Jones Newswires and The Wall Street Journal. Follow her on Twitter @saratothstub

A version of this article was originally published at Sapiens and has been republished here with permission. Sapiens can be found on Twitter @SAPIENS_org

The glyphosate debacle: How a misleading study about the alleged risks of the weedkiller Roundup and gullible reporters helped fuel a cancer scare

As biotech giant Bayer prepares to spend $10 billion settling thousands of lawsuits alleging its weedkiller Roundup (and its active ingredient glyphosate) causes cancer, we’re forced to address a crucial question: how does an herbicide deemed safe by regulators and scientists the world over become the whipping boy of tort lawyers and environmental groups with an ideological ax to grind? 

The answer to that question is complex and difficult to address in a single article, but there are two key factors that helped turn an innocuous chemical into a corporate scandal: the publication of low-quality studies asserting, counter to the expert consensus, that glyphosate poses a serious cancer risk, and gullible media outlets that uncritically reported this research to their audiences. This combination gave lawyers and activists the academic ammunition they needed to pursue litigation and build public support for the false narrative that Monsanto/Bayer ignored evidence of glyphosate’s cancer risk to boost its bottom line. 

Using a highly-publicized study from the glyphosate debacle as an example, let’s examine how questionable science slips under the radar of peer review. Although the issues involved will appear to be technical and forbidding, in actuality they can be explained and made accessible. Furthermore, these issues are important for two reasons. First, what is at stake is the availability of a useful agricultural product that farmers value and scientists widely agree is safe and relatively environmentally benign. Second, if there are egregious errors and biases in a paper that has received widespread coverage and has been held up as strong evidence on a high-stakes question, it is important for the public to understand where the errors lie, how the paper could have been published, and how it could have had such an enormous impact. 

The Zhang paper

In February 2019, scientists at several universities published a paper claiming that people with heavy exposure to glyphosate had a 41 percent increased risk of non-Hodgkin’s lymphoma (NHL). (Three of the authors had served on the U.S. EPA’s 2016 scientific advisory panel on glyphosate, but dissented from the EPA’s conclusion that glyphosate is not carcinogenic). 

Arriving at a time when high-profile tort cases against Monsanto/Bayer were being litigated in the San Francisco Bay area, what I’ll refer to as the “Zhang paper,” after its first author, appeared to deliver a strong scientific analysis supporting an association of exposure to glyphosate and risk of NHL. After all, it was a “meta-analysis” that had the appearance of a high-powered and rigorous assessment of the available human evidence on the question.

Aside from my critiques, published by the GLP and Forbes just days after the Zhang paper went online, and a critique by another scientist writing at Forbes, the results of the meta-analysis were widely reported at face value. The Guardian, CNN, Reuters Health, Mother Jones, Yahoo! News, the PBS NewsHour, and many other news outlets echoed the paper’s conclusion that there was a “compelling link” between glyphosate exposure and risk of NHL.

Headline from the Guardian’s report on the Zhang paper. Credit: Guardian

Several academics seemed to approve of the paper; one commented that the study was “well-conducted,” and another provided testimony for the plaintiffs in one of the Bayer cases.

In August 2019, NPR’s Pasadena affiliate, KPCC, arranged a joint interview with the paper’s senior author, Dr. Lianne Sheppard of the University of Washington, and me. I was looking forward to a spirited debate over her study’s merits, but, unfortunately, two hours before air time Dr. Sheppard pulled out, citing “other obligations.”

In February 2020, a full year after my critique, Dr. Sheppard wrote an article for Forbes defending her study. However, rather than addressing any of my criticisms, she implied that I have my own biases and conflicts of interest:

“Among scientists who leveled this criticism of our work are Dr. Geoffrey Kabat and Dr. Steven Salzberg in pieces originally published at Forbes. Dr. Kabat’s piece was removed for failure to meet Forbes’ editorial standards, and Dr. Salzberg’s piece referred to some of Dr. Kabat’s analysis without initially acknowledging said retraction (although the piece has since been updated to reflect this). Their arguments echo Bayer’s February 13, 2019 media statement that claimed our paper cherry-picked data.”

But as I explained in a second long piece for the Genetic Literacy Project, this was a rather lame and contorted defense, which failed to explain why my Forbes analysis was removed:

“Dr. Sheppard’s contention that my critique of her study was taken down because I ‘failed to adhere to Forbes’ editorial standards’ is laughable …. What happened, in fact, is that anti-pesticide, anti-GMO, anti-modern agriculture activist Carey Gillam raised a fuss with the editors, and they spinelessly took down the article and severed my connection to Forbes, without any discussion.”

A brass-tacks analysis

This is where things stood until mid-January 2021, when two colleagues and I published an article in the journal Cancer Causes and Control, carefully detailing the deficiencies in the Zhang paper.

In what follows, I’ll give a brass-tacks account of the meta-analysis; the choices and claims made by Zhang et al.; how they justified their choices and claims; and, finally, what the evidence actually says about the link between glyphosate and cancer.  In part 2 of this series, I will comment on other aspects of this largely unchallenged paper and discuss what its publication and reception say about the science pertaining to putative environmental risks and how the science is disseminated to the public. 

Defining key terms

Case-control study: a study in which cases of a disease are enrolled following diagnosis, along with a suitable comparison group that is free of the disease (the “control group”). Cases and controls are interviewed about their past exposures, sociodemographic factors, etc.

Odds ratio: measure of risk derived from a case-control study – the ratio of the odds of having the exposure among cases to the odds of having the exposure among controls.

Prospective study (or cohort study): a cohort of healthy individuals is enrolled and followed for a period of years. Information about behaviors and exposures is obtained at enrollment. Health events occurring in the cohort over follow-up are monitored.

Recall bias refers to the fact that, because they are aware of their diagnosis, cases may ruminate more about their past exposures and, as a result, may report past exposures differently from controls, independent of their actual exposure.

Relative risk: measure of risk derived from a prospective study – the ratio of the risk of disease in the exposed to the risk among the unexposed.

Selection bias refers to how well the cases and controls enrolled into your study are reflective of all cases and all controls in the general population. For example, in the past, if researchers selected controls by calling people with telephones, this method of selecting controls could bias the results by excluding lower income people from the control group.

Statistical significance: determination as to whether a given study result is sufficiently robust as to be unlikely to be due to chance.
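
To make the odds ratio and relative risk definitions above concrete, here is a minimal Python sketch using invented numbers (they are purely illustrative and come from no study discussed in this article):

    # Toy 2x2 tables with made-up counts, purely to illustrate the definitions above.

    # Case-control design: compare the odds of exposure among cases vs. controls.
    cases_exposed, cases_unexposed = 30, 270
    controls_exposed, controls_unexposed = 20, 280
    odds_ratio = (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)

    # Prospective (cohort) design: compare the risk of disease in exposed vs. unexposed.
    exposed_cases, exposed_total = 15, 10_000
    unexposed_cases, unexposed_total = 12, 10_000
    relative_risk = (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

    print(round(odds_ratio, 2), round(relative_risk, 2))  # 1.56 1.25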

At the outset, we should note that the term “non-Hodgkin’s lymphoma” is a basket term designating a variety of lymphomas (those that are not Hodgkin’s lymphoma), rather than a single entity. NHL is also rare, with an incidence of about 20 cases per 100,000 people per year. And the incidence of NHL has been essentially flat in the U.S. for the past thirty years, a period that has seen a 15-fold increase in the spraying of glyphosate.

Zhang et al. looked for studies that examined the association of exposure to glyphosate and risk of NHL. They found six studies. These were the large, prospective cohort Agricultural Health Study (AHS) and five case-control studies.  The researchers carried out a meta-analysis of the relative risk estimates from these studies.


Meta-analysis is a technique for combining a number of individual studies in order to obtain a more precise and stable estimate of the association you are interested in. It can be thought of as taking a weighted average of the results of the individual studies. The overall risk estimate from a meta-analysis is referred to as the “summary relative risk.”

Meta-analysis was first used to combine the results of small clinical trials in order to obtain a firmer judgment about the effect of a treatment, such as prescribing low-dose aspirin to prevent heart attacks. It has since become popular for summarizing the results of observational (epidemiological) studies, but these are often highly heterogeneous and lack the protection against bias afforded by randomization.

It is well-recognized that combining the results of studies that are vastly different in quality can lead to spurious results. 
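
To show what “taking a weighted average” means in practice, here is a minimal sketch of a fixed-effect, inverse-variance-weighted meta-analysis on the log scale. The three studies and their confidence intervals are placeholders, and the actual methods used by Zhang et al. were more elaborate; the sketch simply illustrates how a large, precise study dominates the summary estimate:

    import math

    # Each entry: (point estimate, lower 95% CI bound, upper 95% CI bound).
    # Placeholder numbers only -- not estimates from the studies discussed here.
    studies = [
        (1.10, 0.90, 1.30),  # a large, precise study
        (2.00, 1.00, 4.00),  # a small case-control study
        (1.80, 0.80, 4.10),  # another small case-control study
    ]

    weights, weighted_log_estimates = [], []
    for rr, lower, upper in studies:
        se = (math.log(upper) - math.log(lower)) / (2 * 1.96)  # standard error on the log scale
        weight = 1 / se ** 2                                    # inverse-variance weight
        weights.append(weight)
        weighted_log_estimates.append(weight * math.log(rr))

    summary_rr = math.exp(sum(weighted_log_estimates) / sum(weights))
    print(round(summary_rr, 2))  # about 1.17, pulled strongly toward the large study

Because each study’s weight is the inverse of its squared standard error, a single large study can contribute more to the summary than several small ones combined.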

Individual studies

Let’s look at the six studies that Zhang et al. combined. The AHS is a prospective study of roughly 54,000 pesticide applicators who were asked about their use of specific pesticides by questionnaire during enrollment in 1993-1997 and again in 1999-2005. Importantly, more than 80% of the cohort had used glyphosate. Over 20 years of follow-up, 575 cases of NHL were diagnosed in the cohort.

The substantial number of cases and detailed information on glyphosate exposure allowed the researchers to divide the cohort into five exposure levels: no exposure and four increasing levels of exposure to glyphosate. Furthermore, because exposure information was obtained before the development of disease, it was not subject to the recall bias that is a problem in case-control studies.


In contrast, the five case-control studies were smaller and had more limited information about exposure. The fact that the case-control studies were conducted among the general population means that exposure to any particular occupational or environmental agent is almost always going to be rare. Of the total 2,836 NHL cases in the five case-control studies included in the Zhang et al. analysis, only 136 (or 5%) were exposed to glyphosate. Consequently, the risk estimates were imprecise and uncertain. In addition, there is published evidence from the EPA and other researchers indicating that several of these studies are subject to recall bias and, in some cases, selection bias. 

What did the Agricultural Health Study show?

The AHS reported no association between glyphosate exposure and risk of NHL (or with any of over twenty types of cancer). The researchers reported five different analyses, each showing the risk at four exposure levels compared to people with no exposure.  The five analyses were for different “latency periods” (latency refers to the time interval between first exposure and diagnosis of NHL), denoted as lag periods of 0, 5, 10, 15, and 20 years. 

None of the results of any of the five analyses showed any hint of an increased risk associated with glyphosate exposure. The risk estimates for the highest exposure quartile (Q4) in the five analyses were 0.87, 0.87, 0.83, 0.94, and 1.12. Essentially, the risk in the highest exposure group in each of the five analyses was indistinguishable from 1.0, that is, no excess risk. Furthermore, there was no evidence of a trend toward increasing risk with increasing exposure level.

The table shows the results of the no-lag and the 20-year lag analyses. All risk estimates in the upper panel are slightly below 1.0 but not statistically significant. And all but one of the risk estimates in the lower panel are slightly greater than 1.0, but, again, not statistically significant. As can be seen, there is no trend toward increasing risk with increasing exposure. In fact, in the lower panel, the lowest exposure group (Q1) has the largest relative risk (1.22), contradicting Zhang’s hypothesis that the highest quartile should show the greatest risk. 

Risk of NHL by level of glyphosate exposure for no latency and 20-year latency. Source: Andreotti et al. Glyphosate Use and Cancer Incidence in the Agricultural Health Study. Journal of the National Cancer Institute 2018.

Zhang selected the relative risk of 1.12 for the highest exposure group in the 20-year lagged analysis to include in the meta-analysis. Because of the large size of the AHS, this number, even though small in absolute magnitude, made a big difference compared to selecting the 0-lag relative risk of 0.87, which the AHS researchers reported as their main result, or any of the other three risk estimates. 

On the whole, with one exception, the risk estimates from the five case-control studies were around 2.0 (the exception showed an odds ratio of 1.0, or no increased risk). When combined with the 1.12 from the AHS—remember that a meta-analysis involves essentially taking a weighted average—the resulting summary relative risk was 1.41 and just barely statistically significant. This was the result that Zhang et al. highlighted in their abstract, declaring that it suggested a “compelling link” between glyphosate exposure and risk of NHL. 

Ignoring unhelpful evidence

The first thing to point out is that Zhang et al. simply ignored the four other risk estimates reported in the AHS paper. This is particularly striking because the four other estimates were below 1 and would have resulted in a lower and, in all but one case, a non-statistically significant summary relative risk (as demonstrated in our paper). 
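
A toy calculation makes the point. All of the numbers below, including the confidence intervals, are placeholders rather than actual study results; the sketch only illustrates that, because the largest and most precise study carries most of the weight, swapping which of its risk estimates enters the pool moves the summary substantially:

    import math

    def pooled_rr(studies):
        # Fixed-effect, inverse-variance-weighted summary of (estimate, lower, upper) tuples.
        weights, weighted_logs = [], []
        for rr, lower, upper in studies:
            se = (math.log(upper) - math.log(lower)) / (2 * 1.96)
            weights.append(1 / se ** 2)
            weighted_logs.append(math.log(rr) / se ** 2)
        return math.exp(sum(weighted_logs) / sum(weights))

    # Hypothetical, imprecise case-control estimates centered near 2.0.
    case_controls = [(2.0, 1.1, 3.6), (1.9, 0.9, 4.0), (1.0, 0.5, 2.0)]

    # Swap in two hypothetical estimates for the large cohort study and compare.
    print(round(pooled_rr([(1.12, 0.83, 1.51)] + case_controls), 2))  # about 1.28
    print(round(pooled_rr([(0.87, 0.71, 1.07)] + case_controls), 2))  # about 0.99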

How did Zhang et al. justify their selection of the 20-year lagged relative risk of 1.12? The authors stated that, “Our a priori hypothesis is that the highest biologically relevant exposure to GBHs [glyphosate-based herbicides], i.e., higher levels, longer durations, and/or with sufficient lag and latency, will lead to increased risk of NHL in humans.”

But no matter how invested Zhang et al. were in their hypothesis, they were not justified in passing over in silence the four other estimates. In other words, their a priori hypothesis could only be sustained by ignoring estimates which were unhelpful. Furthermore, their total neglect of the other estimates is bizarre in light of this statement from their paper: 

“We conducted several sensitivity analyses to evaluate the impact of excluding or including different studies as well as using different RRs/ORs [relative risks/odds ratios] from original studies (Tables 5 and 6).” 

This applies to the case-control studies but not to the AHS. An even-handed analysis would have looked at all five risk estimates in order to accurately represent the findings of the AHS. 

Much as the authors proclaim their a priori hypothesis, there is a fatal problem with it. As pointed out by the U.S. EPA and by my colleagues and me, Zhang’s hypothesis is contradicted by what is by far the largest and highest quality study, the AHS. As noted above, in none of the five analyses was there any hint that the highest exposure group had an increased risk of NHL. And there was no evidence of an increasing trend with increasing intensity of exposure.

Because of its size and detailed exposure information, the AHS is the only one of the six studies that can address Zhang’s a priori hypothesis. If the best study on the topic provided no evidence in support of the hypothesis, this ought to have been acknowledged.

It’s not good enough to ignore evidence that contradicts your hypothesis and then go on to combine numbers in such a way as to generate support for the hypothesis. This is simply indulging in circular reasoning. Rather than accepting the 41 percent increase in risk, the researchers should have vetted it to see if it stood up to scrutiny.  

What about the focus on the longest latency period of 20 years? The authors justified this choice with reference to a paper by Weisenberg, which claimed that the median latency period for NHL could be 15-20 years. However, as we pointed out in our paper, that estimate was not based on NHL data but, rather, was hypothesized from an early estimate of the latency period for acute leukemia following exposure to benzene.

A review from the CDC concluded that estimates of latency periods for lymphoma “range from 2 to 10 years.” Not only was Zhang’s choice of a 20-year latency not justified by the literature, but the authors themselves called it into question later in their paper: “the latency for NHL is uncertain and could be anywhere from 2 years to greater than 15 years.” All the more reason to have examined all five analyses in the AHS!

Thus, neither the claim that an effect of glyphosate exposure should be seen in the highest exposure group, nor the long latency period for NHL posited by Zhang et al. is supported by the peer-reviewed literature.


Rather than focusing on the highest exposure group, the EPA and our group conducted meta-analyses of the risk associated with ever exposure to glyphosate versus no exposure. The two analyses differed somewhat in their selection of estimates from the available studies, but both found no association between ever exposure to glyphosate and risk of NHL. While these analyses examined a somewhat different question from the one posed by Zhang, unlike Zhang they made use of all of the data and justified their selection of estimates. If heavy glyphosate exposure were associated with increased risk of NHL, one could reasonably expect to see some indication of an increased risk among those ever exposed.

In sum

To recapitulate, Zhang et al. carried out a meta-analysis that combined studies of very different quality, something that is cautioned against in the Cochrane Handbook, a canonical reference for the conduct of systematic reviews and meta-analyses. Their analysis resulted in a small increased risk of NHL, and they concluded that this demonstrated a “compelling link” between glyphosate exposure and risk of NHL.

But in their analysis, they excluded from consideration the bulk of the results from the Agricultural Health Study in order to select the highest risk estimate, which, together with the likely biased risk estimates from the case-control studies, produced a just-barely statistically significant 41 percent increase in risk. Rather than providing compelling evidence, the Zhang et al. result more likely represents the compounding of a number of biases.

I have only focused here on the most glaring errors in the Zhang paper. In part two of this series, I will examine some of the important questions raised by publication of their paper:

  • Are there other errors and indications of bias in the paper?
  • How could the paper have passed rigorous peer review?
  • How could the glaring bias in the paper have escaped notice?
  • How is one to explain that epidemiologists at top universities could publish this paper?
  • How do the authors respond to criticism of their work?
  • What does the publication of this paper tell us about scholarly standards in the area of environmental epidemiology?

Geoffrey Kabat is a cancer epidemiologist and the author of Getting Risk Right: Understanding the Science of Elusive Health Risks. Find Geoffrey on Twitter @GeoKabat

What the heck are ‘anti-nutrients’? Despite the scary-sounding name, they may have important health benefits

Maybe you’re trying to eat healthier these days, aiming to get enough of the good stuff and limit the less-good stuff. You’re paying attention to things like fiber and fat and vitamins… and anti-nutrients?

What the heck are anti-nutrients and are they something you need to be concerned about in your diet?

Let me, as a public health nutrition researcher, reassure you that anti-nutrients aren’t the evil nemesis of all the nutritious foods you eat. As long as you’re consuming a balanced and varied diet, anti-nutrients are not a concern. In fact, scientists are realizing they actually have many health benefits.

Nutrients get absorbed into your bloodstream – or not – as digestion occurs in your small intestine. Credit: Sebastian Kaulitzki/Science Photo Library/Getty Images

What are anti-nutrients?

Anti-nutrients are substances that naturally occur in plant and animal foods.

The name comes from how they function in your body once you eat them. They block or interfere with how your body absorbs other nutrients out of your gut and into your bloodstream so you can then use them. Thus, anti-nutrients may decrease the amount of nutrients you actually get from your food. They most commonly interfere with the absorption of calcium, iron, potassium, magnesium and zinc.

Plants evolved these compounds as a defensive mechanism against insects, parasites, bacteria and fungi. For example, some anti-nutrients can cause a food to taste bitter; animals won’t want to eat it, leaving the seed, for instance, to provide nourishment for future seedlings. Some anti-nutrients block the digestion of seeds that are eaten. The seeds disperse when they come out the other end in the animal’s fecal matter and can go on to grow new plants. Both of these survival tactics help the plant species grow and spread.

In terms of foods that people eat, you’ll most commonly find anti-nutrients naturally occurring in whole grains and legumes.

Time for an image makeover as health enhancers

Despite sounding scary, studies show that anti-nutrients are not a concern unless they are consumed in unrealistically high amounts – and they have numerous health benefits.

Anti-nutrients are currently undergoing a change in image very similar to the one dietary fiber experienced. At one point, scientists thought dietary fiber was bad for people. Since fiber could bind to nutrients and pull them out of the digestive tract in poop, it seemed like something to avoid. To address this perceived issue, grain processing in the late 1800s removed fiber from foods.

But now scientists know that dietary fiber is incredibly important and encourage its consumption. Eating plenty of fiber lowers the risks of obesity, high blood pressure, heart disease, stroke, diabetes and some gastrointestinal diseases.

In the same way, rather than something to avoid, many anti-nutrients are now considered health-promoting nutraceuticals and functional foods due to their numerous benefits. Here’s an introduction to some of the most frequently eaten anti-nutrients that come with benefits:

Oxalates are one of the few anti-nutrients with mostly negative impacts on the body. They are found in lots of common foods, including legumes, beets, berries, cranberries, oranges, chocolate, tofu, wheat bran, soda, coffee, tea, beer, dark green vegetables and sweet potatoes. The negative impacts of oxalates include binding to calcium in the digestive tract and removing it from the body in bowel movements. Oxalates can also increase the risk of kidney stones in some people.

Lots of healthy, tasty foods come with the added benefits of anti-nutrients. Credit: Joan Ransley/Moment/Getty Images

Fitting anti-nutrients into a healthy diet

Overall, weighing the benefits against the drawbacks, the pros of anti-nutrients outweigh the cons. The healthy foods that contain them – mainly fruits, vegetables, whole grains and legumes – should be encouraged, not avoided.

Anti-nutrients become a concern only if these foods are consumed in ultra-high amounts, which is very unlikely for most adults and children in the U.S. Additionally, a large proportion of anti-nutrients are removed or lost from foods people eat as they’re processed and cooked, especially if soaking, blanching, boiling or other high-heat processes are involved.

Vegetarians and vegans may be at higher risk of negative effects from anti-nutrients because their diet relies heavily on fruits, vegetables, whole grains and legumes. But these plant-based diets are still among the healthiest and are associated with reduced risk of cardiovascular disease, obesity, diabetes and numerous types of cancers.


Vegetarians and vegans can take a few steps to help counteract anti-nutrients’ effects on their absorption of particular nutrients:

  • Pair high iron and zinc foods with foods high in vitamin C (examples: veggie meatballs with tomato sauce, tomato-based chili with beans).
  • Soak legumes before cooking.
  • Time dairy intake such that it is not always paired with high oxalate foods.
  • Purchase dairy products that are fortified with calcium.
  • Consider a multivitamin-mineral supplement with about 100% of the daily recommended dose of nutrients (check the nutrition facts panel) as nutrition insurance if you are worried, but be sure to talk to your doctor first.

Jill Joyce is an Assistant Professor of Public Health Nutrition at Oklahoma State University. 

A version of this article was originally posted at the Conversation and has been reposted here with permission. The Conversation can be found on Twitter @ConversationUS
