For anyone who’s ever worked in a large organization, this kind of message will be depressingly familiar: “Do you have capacity to cascade the following information to your team? Please escalate inquiries through line management channels.”
In plain English, this (real-life example) simply means: “Pass this on. Any questions, just ask.”
So why not simply say so? And what on earth has this to do with genetics?
The late philosopher Denis Dutton provides a potential answer to both questions. In addition to writing The Art Instinct, a Darwinian analysis of aesthetics, Dutton also ran a Bad Writing Contest that lampooned “the most egregious examples of awkward, jargon-clogged academic prose” produced in any particular year. (Sadly, this competition — to borrow a phrase from one of its winners — now has an “absentation of actuality”. That is, it no longer exists.)
And it is Dutton’s explanation for why so many academics indulge in incomprehensibly obscure writing that is of relevance here: that is, that it’s a smokescreen, deliberately designed to hide trite or lightweight arguments within profound-sounding text. As Dutton says of one year’s 94-word single sentence winning entry, “To ask what [it] means is to miss the point. [It] beats readers into submission and instructs them that they are in the presence of a great and deep mind. Actual communication has nothing to do with it.”
Or, more succinctly, it’s simply “showing off”.
Much the same could be said of the corporate communication speak with which we began — that obscuring simple messages with unnecessary jargon is merely an attempt to make mundane messages sound more important. In effect, such language use is merely a display — and it’s with the concept of ‘display’ that biology and genes come into play.
The classic example of biological display is, of course, the gorgeous tail of a peacock; something that, surprisingly, the father of evolutionary biology, Charles Darwin, found particularly unpleasant: “The sight of a feather in a peacock’s tail,” he noted shortly after the publication of the Origin of Species, “makes me sick!”
Actually, Darwin’s reaction here was far from surprising – after all, the peacock’s flashy appendage seemed impossible to explain through Darwin’s freshly formulated theory of natural selection. How on earth could such a cumbersome and impractical adornment — surely a handicap in evading predators — have survived natural selection’s remorseless ‘struggle for existence’?
Darwin’s ingenious solution to this dilemma was his theory of sexual selection (1871); briefly, that elaborate traits such as the peacock’s tail (or its incredible spider equivalent) could arise, even at the expense of individual survival, if the traits ultimately increased an individual’s reproductive success.
Put simply, the peacock’s tail is a reproductive organ (albeit, a particularly large and protruding one) that functions merely to attract females. And as peahens prefer to mate with males that possess the most impressive tails, these males end up with more offspring — offspring that, moreover, tend to be similarly well-endowed and similarly willing to advertise the fact. The antlers of a stag are another example of such (equally large and protruding) reproductive organs; although here, these are sexually selected weapons as much as ornaments, used to fight off other males or to protect the territory and resources needed to attract females. (In humans, women’s breasts and men’s V-shaped torsos are thought to be sexually selected traits, designed more to attract mates than to aid survival.)
A modern refinement to Darwin’s original theory is that sexually selected traits are also signals of ‘good genes’. (Though see Richard Prum’s recent Pulitzer-nominated The Evolution of Beauty for a critique of this idea). In the case of peacocks, for example, a glorious tail indicates that its owner has the wherewithal not only to escape enemies, but also the genetic fitness to overcome the environmental stresses (such as disease or parasites or shortage of food) that might otherwise interfere with the tail’s development. If the ‘good genes’ theory is correct, then peahens select their preferred partners less on explicit aesthetic grounds and more on implicit genetic ones.
All well and good — but what has this to do with the egregious examples of language use mentioned above?
According to evolutionary psychologist Geoffrey Miller, for example, many aspects of human language — including story-telling and singing, or wordplay and humor — are likely the result of sexual selection, with linguistic competence functioning as a signal of the cognitive abilities of the speaker. A large vocabulary, for example, indicates intelligence and education (plus the resources to pay for it), while quick-witted repartee similarly shows an active, fully-functioning brain, along with an engaging, entertaining personality. And such cunning linguistics, in Miller’s view, is ultimately aimed at one thing only: reproductive success.
Indeed, in his seminal The Mating Mind (2000), Miller argues that the human brain itself is primarily a product of sexual rather than natural selection: “an entertainment system” designed principally to stimulate and attract other brains; in other words, the idea that our incredible cognitive abilities have evolved, “like the peacock’s tail, for courtship and mating”.
At first glance, Miller’s claim that our brains (and our intelligence) are largely geared towards sexual display runs contrary to a far more intuitive notion: that possessing such brains/intelligence provides a purely functional advantage in the struggle for life (building tools, planning hunting, outwitting rival humans, and the like).
Yet Miller’s argument makes sense of why the human brain, like the peacock’s overly elaborate tail, appears unnecessarily complex — after all, a brain capable of appreciating art or music or math (or even dad jokes) seems somewhat excessive for the Pleistocene environment in which it evolved. Why would naked, bipedal apes like us possess brains that (to steal from the title of a recent text on human evolution) can understand the universe?
Sexual selection also helps explain otherwise puzzling aspects of modern human social behaviour. In a later book, Miller amusingly illustrates this by asking: “Why would the world’s most intelligent primate buy a Hummer H1 Alpha sport utility vehicle”? Darwinian sexual selection provides a good answer, he argues: “Humans evolved in small social groups in which image and status were all-important, not only for survival but for attracting mates, impressing friends and rearing children.”
In our ancestral environment, possessing a bearskin or a stone tool (or, more especially, the ability to obtain the former or craft the latter) would have afforded such all-important status; in the modern consumer world, a Hummer (or an equally expensive equivalent) does much the same job.
But even today, it’s not just material possessions that enhance image. As already suggested, humor or story-telling or singing abilities are also attractive to others. (And, of course, such attraction can be for both material assets and individual abilities: for instance, the sexual allure of Rolling Stones crooner Mick Jagger, whose youngest child was born well after he had become a great grandfather, is perhaps now due as much to his fame, fortune and followers as to his wonderful voice. And his rugged good looks, of course.)
And this is where we can return, at long last, to Denis Dutton. In The Art Instinct, Dutton suggests that artistic expression (much like humor or Hummer-buying) is a sexually selected human characteristic; in brief, that, in evolutionary terms, an artist’s explicit display of virtuosity is also an implicit signal of the quality of his or her genes. Given the ‘nature red in tooth and claw’ view of natural selection, something as inessential as art — a feature of all human societies — is difficult to explain; unless, of course, it enhanced our ancestors’ individual reproductive success.
Dutton’s is an intriguing idea of the origins of art (one that’s at odds with the cultural explanations of prevailing art theory). But it also provides a satisfying answer to an otherwise puzzling aspect of art appreciation — the strong negative reactions people have to discovering that a supposed Old Master, say, is actually a cunningly executed fake. If both the original and the imitation are virtually indistinguishable, what real difference does it make?
According to Dutton’s Darwinian account, such fakery would make a great deal of difference if our focus is aimed not solely on aesthetic value but rather on the genetic worth of the artist. We cannot help but feel cheated by forgeries because, in our ancestral past, falling for a con artist rather than a real artist could have carried a reproductive cost (i.e., mating with an inferior partner).
The annoyance caused by corporate-speak, therefore, likely has the same origin — the underlying feeling that it too is false, an attempt to disguise something of little worth as something of value. Unfortunately, given that we have an evolved susceptibility for showy language — that which, in the past, would have been a good signal of a good brain (and the good genes that built it) — simple plain English can sound, well, simply too plain. Whether we like it or not, we’re often impressed (or tricked) by others’ language use — think politicians, preachers and marketeers.
To call it linguistic peacocking makes instant sense; behavior aimed at making an impression, winning friends and influencing people. At a deeper level, hidden from our conscious minds, it’s motivated by the same impulses (and indeed the same hormones) that cause the real peacock to strut its stuff, with the ultimate aim — you guessed it — simple reproductive success. And deeper still, of course, are the underlying genes that direct the behavior that’s given the world corporate-speak, country music and cosmetics — plus all the other wonders of human art and science.
Darwin’s theory of sexual selection explains so many otherwise baffling aspects of human behavior — including this overly elaborate essay on his ingenious idea.
Patrick Whittle has a PhD in philosophy and is a freelance writer with a particular interest in the social and political implications of modern biological science. Follow him on his website patrickmichaelwhittle.com or on X @WhittlePM
This article previously appeared on the GLP July 14, 2020.
Does glyphosate, the controversial weedkiller sold by Bayer as Roundup and marketed in generic form by more than 30 companies, pose a cancer risk to humans?
Protestors in Europe, stirred by inflammatory campaigns launched by environmental activist groups, say ‘yes’.
Is their opposition to the herbicide grounded in science or ideology? There are some facts that all sides agree on: as the most used weedkiller in the world, glyphosate is ubiquitous. Gardeners and other applicators have been spraying the herbicide for decades and the general population is exposed to it through micro-traces in our food. But does that exposure pose harm to human health or the environment?
What do environmental groups maintain?
Prominent activist groups on the environmental left have coalesced around the belief that there is overwhelming proof that glyphosate is carcinogenic and a killer, posing dangers to humans and the environment. Here are a few recent statements by high-profile anti-glyphosate campaigners:
Out of 2,310 urine samples taken from Americans intended to be representative of the population, CDC found that 1,885 contained detectable levels of glyphosate.
Children in the US are regularly exposed to this cancer-causing weedkiller through the food they eat virtually every day.
Reading these proclamations without much background in science, most people and many journalists understandably believe that the scientific consensus confirms that glyphosate can cause cancer. This debate is particularly raucous in Europe, where the EU is considering reauthorizing the herbicide’s use for ten years, over the howls of environmental groups.
What does the research say?
Yes, glyphosate is found in many people’s urine samples. Is that a reason for concern? In and of itself, no.
Government researchers have found more than 3,000 chemical compounds regularly show up in urine tests in the US. The data are listed in the Urine Metabolome Database. Almost none of these identified chemicals in our urine are harmful.
“The fact that so many compounds seem to be unique to urine likely has to do with the fact that the kidneys do an extraordinary job of concentrating certain metabolites from the blood,” the researchers said. Glyphosate, like thousands of other metabolites, is safely and regularly excreted from the body.
But what about the four court cases in which plaintiffs won sizable victories based on claims that glyphosate caused their non-Hodgkin’s lymphomas? And Bayer (which acquired Monsanto in 2018) — the maker of the patented formulation of glyphosate known as Roundup — has already paid out some $10.9 billion to settle thousands of suits, with many more individual cases pending. Isn’t that proof enough?
No. Jury decisions do not follow scientific standards of proof. In civil litigation, jurors are told to weigh the evidence differently than would a scientist. Jurors don’t even have to decide “beyond a reasonable doubt” as they would in a criminal case. Instead, they are asked to reach a verdict based on the balance of probability.
In the words of one claimant’s lawyer, Brent Wisner, managing partner of the Church of Scientology law firm Wisner Baum (formerly Baum Hedlund), “Did the exposure cause the plaintiff’s cancer? ‘I’m not sure, but I think so’” is enough to decide in favor of the claimant. Wisner Baum has teamed with presidential aspirant Robert F. Kennedy, Jr. in litigating a number of these cases.
But what about the definitive statements from advocacy groups claiming, as the Center for Food Safety (CFS) has written, “Science has prevailed, and today it is accepted that glyphosate causes cancer”?
The formal and proper scientific term to describe that claim is “rubbish”. Science indeed has prevailed; every major independent oversight and regulatory agency in the world, bar none, has concluded the very opposite of what CFS and other activist NGOs regularly claim: glyphosate does not cause cancer as used.
But what about the International Agency for Research on Cancer, the only organization cited by environmental groups as a definitive source? In fact, IARC doesn’t even assess cancer “risks”; rather, it evaluates “cancer hazards”. What’s the difference? The risk of cancer depends upon exposure: “The dose makes the poison,” a pithy summary of the science credited to Paracelsus.
IARC has reviewed more than one thousand substances and found that all but one pose a cancer hazard. In its review of existing glyphosate studies, IARC (which does no original research) did not find sufficient evidence to link dietary exposure to cancer. As for the data on applicators, the studies IARC selected for review showed almost as much evidence that glyphosate prevented cancer as that it caused it.
IARC has developed a notorious reputation over the past decade. Its conclusions are often perplexing, contradicting both global risk agencies and mainstream science. This past summer, IARC issued a monograph claiming that the artificial sweetener aspartame was cancer-causing — a finding not supported by any risk agency in the world.
Among the chemicals or activities that IARC has determined pose more of a cancer hazard than glyphosate: getting a suntan; drinking wine or beer; eating salami; consuming Chinese-style salted fish; and taking oral contraceptives used safely by tens of millions of women every year.
Stated simply, IARC’s controversial cancer warning about glyphosate means little when set against thousands of studies that conclude the weedkiller poses minimal health and environmental risk. Yet it’s the 2015 IARC hazard conclusion, and only that limited review, that activists cite.
The global science community has looked in aggregate at thousands of studies and unanimously concluded that glyphosate does not pose a cancer risk if used as directed. The infographic below (which is downloadable in pdf form) summarizes 24 studies, almost all released after IARC’s 2015 report. The most recent assessment was released this past summer. For the second time in eight years, the European Food Safety Authority concluded:
The assessment of the impact of glyphosate on the health of humans, animals and the environment did not identify critical areas of concern…. It is the most comprehensive and transparent assessment of a pesticide that the EFSA and the EU Member States have ever carried out.
To accept the validity of ideology-tainted claims that glyphosate causes cancer, one would have to believe that two dozen independent agencies in the US, Canada, Europe, Asia, South America, Africa, Australia, and New Zealand, and three separate divisions of WHO, are coordinating in a scheme to suppress evidence of glyphosate’s cancer-causing dangers.
No, there is not a coordinated worldwide conspiracy. It’s just straightforward science: glyphosate is safe as used and there is no serious scientific debate.
Jon Entine is the founding executive director of the Genetic Literacy Project, and winner of 19 major journalism awards. He has written extensively in the popular and academic press on agricultural and population genetics. You can follow him on X @JonEntine
The deluge of “natural” claims in product promotion continues unabated. But perhaps it’s getting stale, because KinderFarms, Jessica Biel’s company, is selling the likes of Tylenol and Benadryl with the promise of avoiding “artificial petrochemicals.” That ignores the fact that these drugs are all made from just that. Nope, no kindness or farms. Just another misleading ad campaign.
I just got the following email from KinderFarms, a “natural” kids’ medicine company that objects to all the “unnatural” things found in children’s OTC medications. Damn, they picked the wrong person to spam with this.
Hi Josh,
It’s hard to believe it’s already almost back to school season! Our client, KinderFarms, is more than prepared with back to school essentials that every family needs this school year!
KinderFarms, co-founded by Jessica Biel, is committed to transforming the family health products industry by offering clean, organic options that eliminate anything artificial. Here are some great products KinderFarms has to offer for your kids return to school.
Biel isn’t the first to try fishy advertising to sell kids’ medicines. Back in 2021, I wrote about a seemingly disingenuous ad campaign by a new company named Genexa, whose motto is “the first clean medicine company,” whatever that means.
At the very least Genexa, by choosing this slogan, implied that the acetaminophen (aka Tylenol) in their products was “clean” while the other 250 billion pills that are produced annually by other companies must be “dirty” by comparison.
We use the same active ingredients as the name brands on shelf today, which have gone through years of scientific testing to make sure they’re safe for you and the ones you love.
Let me pose a challenge. Locate one person in the solar system who “loves” acetaminophen, a largely ineffective and potentially deadly drug, which I reviewed in 2017 and 2023.
And then there’s this:
Our products are Certified Gluten-Free, Non-GMO Verified, and free of common allergens. It’s what people deserve.
It is indisputable that the most dangerous ingredient in this “clean” acetaminophen is acetaminophen itself, a known liver toxin, which sent more than 78,000 people in the US to emergency departments during the two years from January 2006 through December 2007. The saying “If you polish a turd, it’s still a turd” comes to mind.
Update: At that time, I gave Genexa the benefit of the doubt. Was it possible that its founders were really concerned (although misguided) about “dirty Tylenol” and were making an honest effort to improve children’s medicine? That benefit of the doubt is now rescinded. Here’s why (Figure 1).
Arnica is a “homeopathic medicine;” it’s not a medicine at all. Rather it’s a bottle of sweet nothings that cannot possibly be useful for anything except Genexa’s bottom line. Sorry guys, your credibility is shot. But take heart. Homeopathic “remedies,” whether water or sugar, could be quite clean…
…but not kind: Biel’s KinderFarms® gets in on the action
Apparently not content to be left out of the anthropomorphism of OTC drugs, in 2022, the actress Jessica Biel and her business partner Jeremy Adams began selling a line of “benevolent” children’s medication called KinderFarms®, as if the drugs had the capacity for kindness or were somehow grown on a farm. The company’s website tells its story. Here is part of it.
As they peeled back the labels of the common children’s over-the-counter medicines, they were surprised to find artificial ingredients, petrochemicals, and fillers they didn’t feel comfortable giving their children, especially when they were sick.
Which begs a few questions…
Why would these “artificial petrochemicals” be given to children when they were healthy?
Is “didn’t feel comfortable” a scientifically rigorous assessment of the risks and benefits of a drug, especially to Biel, who in 2019 lobbied with RFK Jr. on vaccine policy? (1)
Is there really anything natural about these medicines?
No, not even close
The “farm” that the company’s name suggests doesn’t look much like a place where zucchinis grow all over, or an idyllic setting filled with cute little lambs.
No, it looks like Figure 2. This is how “kinder,” “natural,” “petrochemical-free” acetaminophen (Tylenol) is actually made.
Is there anything “natural” about any of this? The answer is technically yes, provided that you acknowledge that petroleum-based chemicals themselves are natural since petroleum itself is natural. Which is not such a bad assumption, since crude oil is the product of plant and animal life sitting in the ground for millions of years. But I doubt this nuance is built into Genexa or KinderFarms ads. (2)
Is chemical kindness possible?
Chemicals themselves cannot be kind or unkind, or experience any emotion whatsoever. But kindness is possible among the people who work at the chemical plants. Perhaps this is the basis for the name of the company.
Notes:
(1) RFK Jr.’s Instagram post of the collaboration: “Please say thank you to the courageous @jessicabiel for a busy and productive day at the California State House.” Uh-huh.
(2) KinderFarms also sells Benadryl. (Here is a YouTube video (for masochists only) showing its synthesis.) Although it is only 3:44 in length, it is unlikely you’ll make it through. Good luck understanding a single word of it. Other products the company sells are KinderMed™ Kids’ Cough & Congestion (guaifenesin and dextromethorphan, both synthetic), KinderMed™ Kids’ Nighttime Cold & Cough (Benadryl and phenylephrine, both synthetic). I could go on…
Dr. Josh Bloom is Executive Vice President of the American Council on Science and Health. He has published more than 60 op-eds in numerous periodicals, including The Wall Street Journal, Forbes, and New Scientist. Follow him on X @JoshBloomACSH
A version of this article was originally posted at the American Council on Science and Health and is reposted here with permission. The American Council on Science and Health can be found on X @ACSHorg
Malaria remains one of the world’s deadliest diseases. Each year malaria infections result in hundreds of thousands of deaths, with the majority of fatalities occurring in children under five. The Centers for Disease Control and Prevention recently announced that five cases of mosquito-borne malaria were detected in the United States, the first reported spread in the country in two decades.
Fortunately, scientists are developing safe technologies to stop the transmission of malaria by genetically editing mosquitoes that spread the parasite that causes the disease. Researchers at the University of California San Diego led by Professor Omar Akbari’s laboratory have engineered a new way to genetically suppress populations of Anopheles gambiae, the mosquitoes that primarily spread malaria in Africa and contribute to economic poverty in affected regions. The new system targets and kills females of the A. gambiae population since they bite and spread the disease.
Publishing July 5 in the journal Science Advances, first author Andrea Smidler, a postdoctoral scholar in the UC San Diego School of Biological Sciences, along with former master’s students and co-first authors James Pai and Reema Apte, created a system called Ifegenia, an acronym for “inherited female elimination by genetically encoded nucleases to interrupt alleles.” The technique leverages CRISPR technology to disrupt a gene known as femaleless (fle) that controls sexual development in A. gambiae mosquitoes.
Scientists at UC Berkeley and the California Institute of Technology contributed to the research effort.
Ifegenia works by genetically encoding the two main elements of CRISPR within African mosquitoes: a Cas9 nuclease, the molecular “scissors” that make the cuts, and a guide RNA that directs the system to its target, using a technique developed in these mosquitoes in Akbari’s lab. The researchers genetically modified two mosquito families to separately express Cas9 and the fle-targeting guide RNA.
“We crossed them together and in the offspring it killed all the female mosquitoes,” said Smidler. “It was extraordinary.” Meanwhile, A. gambiae male mosquitoes inherit Ifegenia, but the genetic edit doesn’t impact their reproduction. They remain reproductively fit to mate and spread Ifegenia. Parasite spread eventually is halted since females are removed and the population reaches a reproductive dead end. The new system, the authors note, circumvents certain genetic resistance roadblocks and control issues faced by other systems such as gene drives, since the Cas9 and guide RNA components are kept separate until the population is ready to be suppressed.
“We show that Ifegenia males remain reproductively viable, and can load both fle mutations and CRISPR machinery to induce fle mutations in subsequent generations, resulting in sustained population suppression,” the authors note in the paper. “Through modeling, we demonstrate that iterative releases of non-biting Ifegenia males can act as an effective, confinable, controllable and safe population suppression and elimination system.”
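To make the logic of that modeling concrete, here is a deliberately crude, hypothetical toy simulation in Python; it is not the authors’ model. It assumes discrete generations, random mating, replacement-level reproduction (each surviving female leaves roughly one daughter and one son), a fixed number of engineered males released every generation, and complete lethality of daughters sired by Ifegenia fathers. It also ignores the fact that wild-born sons inherit the transgenes, which is what sustains suppression in the real system.

```python
# Toy model of female-killing population suppression (illustrative only; not the
# published Ifegenia model). Assumptions: discrete generations, random mating,
# replacement-level reproduction, a fixed release of engineered males each
# generation, and 100% lethality of daughters sired by engineered fathers.
# Inheritance of the transgenes by wild-born sons is ignored for simplicity.

def simulate(wild_females=1000.0, wild_males=1000.0,
             released_males_per_gen=1000.0, generations=12):
    females, males = wild_females, wild_males
    history = [females]
    for _ in range(generations):
        total_males = males + released_males_per_gen
        if total_males == 0 or females < 1:
            females = 0.0
        else:
            # Chance that a female's mate is one of the released engineered males.
            p_engineered = released_males_per_gen / total_males
            # Daughters of engineered fathers die (fle disruption); sons are viable.
            daughters = females * (1.0 - p_engineered)
            sons = females
            females, males = daughters, sons
        history.append(females)
    return history

if __name__ == "__main__":
    for gen, count in enumerate(simulate()):
        print(f"generation {gen:2d}: ~{count:7.1f} wild-type females")
```

In this sketch the count of biting, disease-transmitting females collapses within a handful of generations, and suppression depends entirely on continued releases. In the actual system, as the quoted passage notes, sons that inherit both the Cas9 and guide-RNA components go on to induce fle mutations in later generations, which is what makes iterative releases a sustained, confinable suppression strategy.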
Traditional methods of combating malaria, such as bed nets and insecticides, have proven increasingly ineffective at stopping the disease’s spread. Insecticides are still heavily used across the globe, primarily in an effort to stop malaria, which increases health and ecological risks in parts of Africa and Asia.
Smidler, who earned a PhD (biological sciences of public health) from Harvard University before joining UC San Diego in 2019, is applying her expertise in genetic technology development to address the spread of the disease and the economic harm that comes with it. Once she and her colleagues developed Ifegenia, she was surprised by how effectively the technology worked as a suppression system.
“This technology has the potential to be the safe, controllable and scalable solution the world urgently needs to eliminate malaria once and for all,” said Akbari, a professor in the Department of Cell and Developmental Biology. “Now we need to transition our efforts to seek social acceptance, regulatory use authorizations and funding opportunities to put this system to its ultimate test of suppressing wild malaria-transmitting mosquito populations. We are on the cusp of making a major impact in the world and won’t stop until that’s achieved.”
The researchers note that the technology behind Ifegenia could be adapted to other species that spread deadly diseases, such as mosquitoes known to transmit dengue (break-bone fever), chikungunya and yellow fever viruses.
Mario Aguilera is the Director of Communications for Biological Sciences at UC San Diego. Find Mario on X @mario_maguilera
A version of this article was originally posted at UC San Diego and has been reposted here with permission. Any reposting should credit the original author and provide links to both the GLP and the original article.
One in four Americans currently suffers from anxiety or depression, conditions closely tied to the body’s serotonin levels. Normal serotonin levels help with your emotional state, as well as digestion, sleep, wound healing, sexual desire, and bone density. However, the most common issues associated with low serotonin levels are related to mental health.
Serotonin is a neurotransmitter known as the “happy hormone.” It is vital in managing stress, supporting mental well-being, enhancing social interactions, promoting better sleep, and improving cognitive function and emotional resilience.
And its benefits don’t stop there. In bones, serotonin regulates bone density and remodeling, with high levels linked to increased bone density, a reduced risk of osteoporosis, and the promotion of bone formation. Serotonin also plays a role in wound healing by aiding blood clotting through platelet release and by influencing immune response and tissue repair processes.
How does it actually do all of that? It plays a crucial role in the central nervous system as it acts as a neurotransmitter. It carries messages between the nerve cells in the brain and throughout the body.
The gut-brain axis refers to the bidirectional communication between the gut (gastrointestinal tract) and the brain. It involves complex interactions between the central nervous system (CNS) and the enteric nervous system (ENS), which is often referred to as the “second brain” of the body due to its extensive network of neurons in the gut.
Serotonin plays a critical role in this communication system, serving as a messenger molecule that helps regulate various physiological processes and behaviors. The majority of the body’s serotonin is found in the gut, where it serves multiple functions.
Changes in gut serotonin levels can have major impacts on many bodily functions. Having balanced serotonin levels in the gut helps normalize various gastrointestinal functions, including bowel movements and intestinal motility. Imbalances in gut serotonin levels have been linked to conditions like irritable bowel syndrome (IBS).
It can also affect feelings of satiety and help control eating behavior, while playing a role in the gut’s immune response, helping regulate inflammation and immune cell activity.
The gut-brain axis is a fascinating area of research that highlights the intricate connections between various bodily systems. Serotonin’s influence on the gut and brain underscores its role as a key mediator in the body’s communication network.
Serotonin-boosting foods
Okay, so now I know serotonin can not only boost my mood but also fortify my immune system, help me regulate my hunger, positively impact my digestion and decrease inflammation. But should I take a pill? Is there a pill?
Here at Dirt to Dinner, after much research, we have concluded that it is always better to seek nutrients through whole foods. Not only is the supplement industry unregulated, which makes it hard to know what you are taking, but most of the time nutrients are more bioavailable to the body in their whole-food form.
Incorporating serotonin-boosting foods into your diet is a natural and accessible way to promote emotional and physical health and the many other benefits of serotonin.
Nutrients found in foods, such as complex carbohydrates, vitamin B6, omega-3s, and tryptophan, all work together to do just that! For instance, a meal of salmon, quinoa, and spinach, with sliced bananas for dessert, combines these nutrients to produce the serotonin you need!
Tryptophan-rich foods
Tryptophan is an essential amino acid that our bodies can’t produce on their own. Consuming foods high in tryptophan can increase serotonin levels in the gut and brain, as the amino acid is converted into serotonin in the body.
Good news: most people already consume more than double the recommended amount, typically 900-1,000 milligrams daily, as part of their regular diets. Some tryptophan-dense foods are cod, spirulina, nuts and seeds, and legumes.
Here’s a fun fact to share…
Most people think turkey has the most tryptophan, but take a look at the chart on the left!
Complex carbohydrates
Consuming complex carbohydrates can also boost serotonin production. These carbohydrates increase insulin levels, which aids in the absorption of amino acids, including tryptophan, into the brain. Some excellent sources of complex carbohydrates include whole grains (like oats, quinoa, farro, and brown rice), sweet potatoes, and legumes (including beans, lentils, and peas).
Not sure how to tell the difference between a complex carb and a simple carb? Here’s a good trick: most whole, unprocessed foods contain complex carbs. Avoid processed foods and “white” foods, which mostly consist of simple carbs.
When you eat a meal rich in carbohydrates from whole grains, insulin stimulates the uptake of other amino acids into cells, leaving tryptophan with relatively fewer competitors. As a result, more tryptophan can be converted into serotonin, contributing to a more balanced and positive mood.
Complex carbohydrates provide a slow and steady release of energy compared to simple carbohydrates. This sustained energy release helps stabilize blood sugar levels, preventing rapid spikes and crashes. Fluctuations in blood sugar levels can affect mood and energy levels, and stable blood sugar can reduce emotional ups and downs.
Vitamin B6 & serotonin conversion
Vitamin B6 helps the body convert tryptophan into serotonin. Including foods high in vitamin B6 can enhance this serotonin synthesis.
Some notable sources of vitamin B6 are fish (like tuna, salmon, and trout), poultry, and bananas. B6 is critical in allowing the body to utilize serotonin to assist with our cognitive and emotional functioning.
Curious about other B6-rich foods? Print out this handy chart and stick it on your fridge!
Omega-3 fatty acids
The relationship between omega-3 fatty acids and serotonin involves multiple interconnected mechanisms that can impact mood and emotional well-being. Omega-3 fatty acids are essential for brain health and function, particularly eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA).
These fatty acids are incorporated into cell membranes, influencing membrane fluidity and receptor activity. By regulating the cell membrane, omega 3s can enhance the function of serotonin receptors, making them more responsive to serotonin.
Studies suggest that omega-3 fatty acids, when consumed in sufficient amounts (at least 200mg a day), may contribute to maintaining healthy serotonin levels.
Which foods are excellent sources of omega 3s? At the top of the list are fatty fish (tuna, salmon, trout, herring, anchovies), chia seeds, and flaxseeds.
What else can we do?
Want to boost the effects of these foods? Get good sleep. Serotonin is the precursor of melatonin, a hormone we produce that regulates sleep-wake cycles. Ensuring you are making enough serotonin can support healthy sleep patterns and improve sleep quality, leading to better overall health and productivity.
The bottom line
Incorporating foods that boost serotonin production in your diet can be a natural and effective way to enhance mood and overall well-being. Tryptophan-rich foods, complex carbohydrates, vitamin B6 sources, and foods rich in omega-3 fatty acids can all contribute to increased serotonin levels in the brain and an overall healthier you!
Hayley N. Phillip is a graduate of the University of California Santa Barbara with degrees in Sociology and Marketing. Hayley leads the Dirt to Dinner team in debunking popular fad diets, fast-nutrition, and myths about ‘quick’ dietary fixes. Hayley also researches and writes about the intersection of regenerative and sustainable growing methods that will safely produce enough food for future generations.
A version of this article was originally posted at Dirt To Dinner and has been reposted here with permission. Any reposting should credit the original author and provide links to both the GLP and the original article. Find Dirt To Dinner on Twitter @Dirt_To_Dinner
Historically, biotech has been primarily associated with food, addressing such issues as malnutrition and famine.
Today, biotechnology is most often associated with the development of drugs. But drugs are hardly the future of biotech. We’ve entered the Fourth Industrial Revolution, and genetics is on a new level. Biotech is paving the way for a future open to imagination, and that’s kind of scary.
The next ten years will surely prove exciting as artificial intelligence and biotechnology merge man and machine…
The history of biotechnology can be divided into three distinct phases:
Ancient Biotechnology
Classical Biotechnology
Modern Biotechnology
1. Ancient Biotechnology (Pre-1800)
Most of the biotech developments before the year 1800 can be termed ‘discoveries’ or ‘developments’. Studying them, we can conclude that they were based on common observations about nature.
Humans have used biotechnology since the dawn of civilization.
After the domestication of food crops (corn, wheat) and wild animals, humans moved on to new discoveries such as cheese and curd. Cheese can be considered one of the first direct products (or by-products) of biotechnology because it was prepared by adding rennet (an enzyme found in the stomach of calves) to sour milk.
Yeast is one of the oldest microbes that have been exploited by humans for their benefit. The oldest known fermentation was used to make beer in Sumer and Babylonia as early as 7000 BCE.
By 4000 BCE, Egyptians used yeasts to bake leavened bread.
Another ancient product of fermentation was wine, made in Assyria as early as 3500 BCE.
The Chinese developed fermentation techniques for brewing and cheese making.
500 BCE: In China, the first antibiotic, moldy soybean curds, is put to use to treat boils.
Hippocrates treated patients with vinegar in 400 BCE.
By 100 BCE, Rome had over 250 bakeries making leavened bread.
A.D. 100: The first insecticide is produced in China from powdered chrysanthemums.
The use of molds to saccharify rice in the koji process dates back to at least A.D. 700.
13th century: The Aztecs used Spirulina algae to make cakes.
One of the oldest examples of crossbreeding for the benefit of humans is the mule, the offspring of a male donkey and a female horse. People began using mules for transportation, carrying loads, and farming when there were no tractors or trucks.
By the 14th century AD, the distillation of alcoholic spirits was common in many parts of the world.
Vinegar manufacture began in France at the end of the 14th century.
1663: Cells are first described by Hooke.
1673-1723: Antonie van Leeuwenhoek discovers microorganisms by examining scrapings from his teeth under a microscope.
1675: Leeuwenhoek discovers protozoa and bacteria.
1796: English physician Edward Jenner pioneers vaccination, inoculating a child with cowpox material to protect against smallpox.
2. Classical Biotechnology (1800-1945)
The Hungarian agricultural engineer Károly Ereky coined the word “biotechnology” in 1919 to describe a technology based on converting raw materials into a more useful product. In a book entitled Biotechnologie, Ereky further developed a theme that would be reiterated through the 20th century: biotechnology could provide solutions to societal crises, such as food and energy shortages.
1831: Robert Brown (1773-1858) discovers the nucleus in cells.
1802: The word “biology” first appears.
1822-1895: The lifetime of Louis Pasteur, who, building on Edward Jenner’s smallpox vaccination, develops vaccination against rabies.
In 1850, Casimir Davaine detected rod-shaped objects in the blood of anthrax-infected sheep and was able to produce the disease in healthy sheep by inoculation of such blood.
1885: The Escherichia coli bacterium is discovered. It later becomes a major research, development, and production tool for biotechnology.
In 1868, Friedrich Miescher reported nuclein, a compound consisting of nucleic acid that he extracted from white blood cells.
1870: Breeders crossbreed cotton, developing hundreds of varieties with superior qualities.
1870: The first experimental corn hybrid is produced in a laboratory.
By 1875, Pasteur of France and John Tyndall of Britain finally demolished the concept of spontaneous generation and proved that existing microbial life came from preexisting life.
1876: Koch’s work led to the acceptance of the idea that specific diseases were caused by specific organisms, each of which had a specific form and function.
In 1881, Robert Koch, a German physician, described bacterial colonies growing on potato slices (the first-ever solid culture medium).
In 1888, Heinrich Wilhelm Gottfried Von Waldeyer-Hartz, a German scientist, coined the term ‘Chromosome.’
In 1909, Wilhelm Johannsen (1857-1927) coined the term ‘gene’ to describe the carrier of heredity. Johannsen also coined the terms ‘genotype’ and ‘phenotype.’
1909: Genes are linked with hereditary disorders.
1911: American pathologist Peyton Rous discovers the first cancer-causing virus.
1915: Phages, or bacterial viruses, are discovered.
1919: The word “biotechnology” is first used by a Hungarian agricultural engineer.
Pfizer, which had made a fortune using fermentation processes to produce citric acid in the 1920s, turned its attention to penicillin. The massive production of penicillin was a major factor in the Allied victory in WWII.
1924: Start of the eugenics movement in the US.
The principles of inheritance were redefined by T.H. Morgan, who demonstrated the role of chromosomes in inheritance using fruit flies. This landmark work was published as ‘The Theory of the Gene’ in 1926.
Alexander Fleming discovered penicillin, an antibacterial compound produced by the mold Penicillium notatum, which could be used against many infectious diseases. Fleming wrote, “When I woke up just after dawn on September 28, 1928, I certainly didn’t plan to revolutionize all medicine by discovering the world’s first antibiotic, or bacteria killer.”
1933: Hybrid corn is commercialized.
In 1940, a team of researchers at Oxford University found a way to purify penicillin and keep it stable.
1941: The term “genetic engineering” is first used by a Danish microbiologist.
1942: The electron microscope is used to identify and characterize a bacteriophage, a virus that infects bacteria.
1942: Penicillin is mass-produced in microbes for the first time.
3. Modern Biotechnology (1945-present)
The Second World War was a major impediment to scientific discovery. After the war ended, some very crucial discoveries were reported, paving the path for modern biotechnology.
The origins of biotechnology culminate with the birth of genetic engineering. There were two key events that have come to be seen as scientific breakthroughs beginning the era that would unite genetics with biotechnology: One was the 1953 discovery of the structure of DNA, by Watson and Crick, and the other was the 1973 discovery by Cohen and Boyer of a recombinant DNA technique by which a section of DNA was cut from the plasmid of an E. coli bacterium and transferred into the DNA of another. Popularly referred to as “genetic engineering,” it came to be defined as the basis of new biotechnology.
In Britain, Chaim Weizmann (1874–1952) developed bacterial fermentation processes for producing organic chemicals such as acetone, a key ingredient of cordite propellants. During WWII, he worked on synthetic rubber and high-octane fuel.
1950s: The first synthetic antibiotic is created.
1951: Artificial insemination of livestock is accomplished using frozen semen.
In 1953, J.D. Watson and F.H.C. Crick for the first time cleared up the mystery of DNA as the genetic material by proposing a structural model of DNA, popularly known as the ‘double helix.’
1954: Dr. Joseph Murray performs the first kidney transplant between identical twins.
1955: An enzyme, DNA polymerase, involved in the synthesis of a nucleic acid, is isolated for the first time.
1955: Dr. Jonas Salk develops the first polio vaccine. The development marks the first use of mammalian cells (monkey kidney cells) and the first application of cell culture technology to generate a vaccine.
1957: Scientists prove that sickle-cell anemia occurs due to a change in a single amino acid in the hemoglobin protein.
1958: Dr. Arthur Kornberg of Washington University in St. Louis makes DNA in a test tube for the first time.
Edward Tatum (1909–1975) and Joshua Lederberg (1925–2008) shared the 1958 Nobel Prize with George Beadle; Beadle and Tatum were recognized for showing that genes regulate metabolism by producing specific enzymes, and Lederberg for discoveries on genetic recombination in bacteria.
1960: French scientists discover messenger RNA (mRNA).
1961: Scientists understand genetic code for the first time.
1962: Dr. Osamu Shimomura discovers the green fluorescent protein in the jellyfish Aequorea victoria. He later develops it into a tool for observing previously invisible cellular processes.
1963: Dr. Samuel Katz and Dr. John F. Enders develop the first vaccine for measles.
1964: The existence of reverse transcriptase is predicted.
At a conference in 1964, Tatum laid out his vision of “new” biotechnology: “Biological engineering seems to fall naturally into three primary categories of means to modify organisms. These are:
The recombination of existing genes, or eugenics.
The production of new genes by a process of directed mutation, or genetic engineering.
Modification or control of gene expression, or to adopt Lederberg’s suggested terminology, euphenic engineering.”
1967: The first automatic protein sequencer is perfected.
1967: Dr. Maurice Hilleman develops the first American vaccine for mumps.
1969: An enzyme is synthesized in vitro for the first time.
1969: The first vaccine for rubella is developed.
1970: Restriction enzymes are discovered.
1971: The first combined measles/mumps/rubella vaccine is developed.
1972: DNA ligase, which links DNA fragments together, is used for the first time.
1973: Cohen and Boyer perform the first successful recombinant DNA experiment, using bacterial genes.
In 1974, Stanley Cohen and Herbert Boyer developed a technique for splicing together strands of DNA from more than one organism. The product of this transformation is called recombinant DNA (rDNA).
In 1975, Köhler and Milstein developed the hybridoma technique for fusing cells and produced the first-ever monoclonal antibodies, which have since revolutionized diagnostics.
1975: Colony hybridization and Southern blotting are developed for detecting specific DNA sequences.
1976: Molecular hybridization is used for the prenatal diagnosis of alpha thalassemia.
1978: Recombinant human insulin is produced for the first time.
1978: With the development of synthetic human insulin, the biotechnology industry grows rapidly.
1979: Human growth hormone is synthesized for the first time.
In the 1970s-80s, the path of biotechnology became intertwined with that of genetics.
By the 1980s, biotechnology grew into a promising real industry.
1980: Smallpox is globally eradicated following a 20-year mass vaccination effort.
In 1980, The U.S. Supreme Court (SCOTUS), in Diamond v. Chakrabarty, approved the principle of patenting genetically engineered life forms.
1981: Scientists at Ohio University produce the first transgenic animals by transferring genes from other animals into mice.
1981: The first gene-synthesizing machines are developed.
1981: The first genetically engineered plant is reported.
1982: The first recombinant DNA vaccine for livestock is developed.
1982: The first biotech drug, human insulin produced in genetically modified bacteria, is approved by FDA. Genentech and Eli Lilly developed the product. This is followed by many new drugs based on biotechnologies.
1983: The identification of HIV, the virus that causes AIDS, spurs tremendous improvements in the tools used by life scientists for discoveries and applications in many aspects of day-to-day life.
In 1983, Kary Mullis developed polymerase chain reaction (PCR), which allows a piece of DNA to be replicated over and over again. PCR, which uses heat and enzymes to make unlimited copies of genes and gene fragments, later becomes a major tool in biotech research and product development worldwide.
1983: The first artificial chromosome is synthesized.
In 1983, the first genetic markers for specific inherited diseases were found.
1983: The first genetic transformation of plant cells using Ti plasmids is performed.
In 1984, the DNA fingerprinting technique was developed.
1985: Genetic markers are found for kidney disease and cystic fibrosis.
1986: The first recombinant vaccine for humans, a vaccine for hepatitis B, is approved.
1986: Interferon becomes the first anticancer drug produced through biotech.
1986: University of California, Berkeley, chemist Dr. Peter Schultz describes how to combine antibodies and enzymes (abzymes) to create therapeutics.
1988: The first pest-resistant corn, Bt corn, is produced.
1988: Congress funds the Human Genome Project, a massive effort to map and sequence the human genetic code as well as the genomes of other species.
In 1988, chymosin (also known as rennin) became the first enzyme produced from a genetically modified source (yeast) to be approved for use in food.
In 1988, only five proteins from genetically engineered cells had been approved as drugs by the United States Food and Drug Administration (FDA), but this number would skyrocket to over 125 by the end of the 1990s.
In 1989, microorganisms were used to clean up the Exxon Valdez oil spill.
1990: The first successful gene therapy is performed on a 4-year-old girl suffering from an immune disorder.
In 1993, The U.S. Food and Drug Administration (FDA) declared that genetically modified (GM) foods are “not inherently dangerous” and do not require special regulation.
1993: Chiron’s Betaseron is approved as the first treatment for multiple sclerosis in 20 years.
1994: The first breast cancer gene is discovered.
1995: Gene therapy, immune-system modulation and recombinantly produced antibodies enter the clinic in the war against cancer.
1995: The first baboon-to-human bone marrow transplant is performed on an AIDS patient.
1995: The first vaccine for Hepatitis A is developed.
1996: A gene associated with Parkinson’s disease is discovered.
1996: The first genetically engineered crop is commercialized.
1997: Ian Wilmut, a British scientist, successfully clones an adult animal, using a sheep as the model and naming the cloned sheep ‘Dolly.’
1997: The first human artificial chromosome is created.
1998: A rough draft of the human genome map is produced, showing the locations of more than 30,000 genes.
1998: Human skin is produced for the first time in the lab.
1999: A diagnostic test allows quick identification of Bovine Spongiform Encephalopathy (BSE, also known as “mad cow” disease) and Creutzfeldt-Jakob Disease (CJD).
1999: The complete genetic code of a human chromosome is deciphered for the first time.
2000: Kenya field-tests its first biotech crop, virus-resistant sweet potato.
In 2000, Craig Venter’s Celera Genomics, alongside the publicly funded Human Genome Project, announced a working draft sequence of the human genome.
2001: The sequence of the human genome is published in Science and Nature, making it possible for researchers all over the world to begin developing treatments.
2001: FDA approves Gleevec® (imatinib), a gene-targeted drug for patients with chronic myeloid leukemia. Gleevec is the first gene-targeted drug to receive FDA approval.
2002: EPA approves the first transgenic rootworm-resistant corn.
2002: The banteng, an endangered species, is cloned for the first time.
2003: China grants the world’s first regulatory approval of a gene therapy product, Gendicine (Shenzhen SiBiono GenTech), which delivers the p53 gene as a therapy for squamous cell head and neck cancer.
In 2003, TK-1 (GloFish) went on sale in Taiwan, as the first genetically modified pet.
2003: The Human Genome Project completes sequencing of the human genome.
2004: UN Food and Agriculture Organization endorses biotech crops, stating biotechnology is a complementary tool to traditional farming methods that can help poor farmers and consumers in developing nations.
2004: FDA approves the first antiangiogenic drug for cancer, Avastin®.
2005: The Energy Policy Act is passed and signed into law, authorizing numerous incentives for bioethanol development.
2006: FDA approves the recombinant vaccine Gardasil®, the first vaccine developed against human papillomavirus (HPV), an infection implicated in cervical and throat cancers, and the first preventative cancer vaccine.
2006: USDA grants Dow AgroSciences the first regulatory approval for a plant-made vaccine.
2006: The National Institutes of Health begins a 10-year, 10,000-patient study using a genetic test that predicts breast-cancer recurrence and guides treatment.
In 2006, the artist Stelarc had an ear grown in a vat and grafted onto his arm.
2007: FDA approves the H5N1 vaccine, the first vaccine approved for avian flu.
2007: Scientists discover how to use human skin cells to create embryonic stem cells.
2008: Chemists in Japan create the first DNA molecule made almost entirely of artificial parts.
2009: Global biotech crop acreage reaches 330 million acres.
In 2009, Sasaki and Okano produced transgenic marmosets that glow green in ultraviolet light (and pass the trait to their offspring).
2009: FDA approves the first genetically engineered animal for production of a recombinant form of human antithrombin.
In 2010, Craig Venter was successful in demonstrating that a synthetic genome could replicate autonomously.
2010: Dr. J. Craig Venter announces completion of “synthetic life” by transplanting synthetic genome capable of self-replication into a recipient bacterial cell.
2010: Harvard researchers report building “lung-on-a-chip” technology.
In 2010, scientists created malaria-resistant mosquitoes.
2011: A trachea derived from stem cells is transplanted into a human recipient.
2011: Advances in 3-D printing technology lead to “skin-printing.”
2012: For the last three billion years, life on Earth has relied on two information-storing molecules, DNA and RNA. Now there’s a third: XNA, a polymer synthesized by molecular biologists Vitor Pinheiro and Philipp Holliger of the Medical Research Council in the United Kingdom. Just like DNA, XNA is capable of storing genetic information and then evolving through natural selection. Unlike DNA, it can be carefully manipulated.
2012: Researchers at the University of Washington in Seattle announced the successful sequencing of a complete fetal genome using nothing more than snippets of DNA floating in its mother’s blood.
2013: Two research teams announced a fast and precise new method for editing snippets of the genetic code. The so-called CRISPR system takes advantage of a defense strategy used by bacteria.
2013: Researchers in Japan developed functional human liver tissue from reprogrammed skin cells.
2013: Researchers published the results of the first successful human-to-human brain interface.
2013: Doctors announced that a baby born with HIV had been cured of the disease.
2014: Researchers showed that blood from a young mouse can rejuvenate an old mouse’s muscles and brain.
2014: Researchers figured out how to turn human stem cells into functional pancreatic β cells—the same cells that are destroyed by the body’s own immune system in type 1 diabetes patients.
2014: All life on Earth as we know it encodes genetic information using four DNA letters: A, T, G, and C. Not anymore! In 2014, researchers created new DNA bases in the lab, expanding life’s genetic code and opening the door to creating new kinds of microbes.
2014: For the first time ever, a woman gave birth to a baby after receiving a womb transplant.
2014: A team of scientists reconstructed a synthetic and fully functional yeast chromosome. A breakthrough seven years in the making, the remarkable advance could eventually lead to custom-built organisms (human organisms included).
2014 & Ebola: Until this year, Ebola was merely an interesting footnote for anyone studying tropical diseases. Now it’s a global health disaster. But the epidemic started at a single point with one human-animal interaction — an interaction which has now been pinpointed using genetic research. A total of 50 authors contributed to the paper announcing the discovery, including five who died of the disease before it could be published.
2014: Doctors discovered a vaccine that completely blocks infection in the monkey equivalent of the disease — a breakthrough that is now being studied to see if it works in humans.
2015: Scientists from Singapore’s Institute of Bioengineering and Nanotechnology designed short strings of peptides that self-assemble into a fibrous gel when water is added for use as a healing nanogel.
2015 & CRISPR: scientists hit a number of breakthroughs using the gene-editing technology CRISPR. Researchers in China reported modifying the DNA of a nonviable human embryo, a controversial move. Researchers at Harvard University inserted genes from a long-extinct woolly mammoth into the living cells — in a petri dish — of a modern elephant. Elsewhere, scientists reported using CRISPR to potentially modify pig organs for human transplant and modify mosquitoes to eradicate malaria.
2015: Researchers in Sweden developed a blood test that can detect cancer at an early stage from a single drop of blood.
2015: Scientists discovered a new antibiotic, the first in nearly 30 years, that may pave the way for a new generation of antibiotics and fight growing drug-resistance. The antibiotic, teixobactin, can treat many common bacterial infections, such as tuberculosis, septicaemia, and C. diff.
2015: A team of geneticists finished building the most comprehensive map of the human epigenome, a culmination of almost a decade of research. The team was able to map more than 100 types of human cells, which will help researchers better understand the complex links between DNA and diseases.
2015: Stanford University scientists revealed a method that may be able to force malicious leukemia cells to change into harmless immune cells, called macrophages.
2015: Using cells from human donors, doctors, for the first time, built a set of vocal cords from scratch. The cells were urged to form a tissue that mimics vocal fold mucosa – vibrating flaps in the larynx that create the sounds of the human voice.
2016: A little-known virus first identified in Uganda in 1947—Zika—exploded onto the international stage when the mosquito-borne illness began spreading rapidly throughout Latin America. Researchers successfully isolated a human antibody that “markedly reduces” infection from the Zika virus.
2016: CRISPR, the revolutionary gene-editing tool that promises to cure illnesses and solve environmental calamities, took a major step forward this year when a team of Chinese scientists used it to treat a human patient for the very first time.
2016: Researchers found that an ancient molecule, GK-PID, is the reason single-celled organisms started to evolve into multicellular organisms approximately 800 million years ago.
2016: Stem cells injected into stroke patients re-enable patients to walk.
2016: For the first time, bioengineers created a completely 3D-printed ‘heart on a chip.’
2017: Researchers at the National Institutes of Health discovered a new molecular mechanism that might be the cause of the severe form of premenstrual syndrome known as PMDD.
2017: Scientists at the Salk Institute in La Jolla, CA, said they’re one step closer to being able to grow human organs inside pigs. In their latest research they were able to grow human cells inside pig embryos, a small but promising step toward organ growth.
2017: First step taken toward epigenetically modified cotton.
2017: Research reveals different aspects of DNA demethylation involved in tomato ripening process.
2017: Sequencing of green alga genome provides blueprint to advance clean energy, bioproducts.
2017: Fine-tuning ‘dosage’ of mutant genes unleashes long-trapped yield potential in tomato plants.
2017: Scientists engineer disease-resistant rice without sacrificing yield.
2017: Blood stem cells grown in lab for the first time.
2017: Researchers at Sahlgrenska Academy – part of the University of Gothenburg, Sweden – generated cartilage tissue by printing stem cells using a 3D-bioprinter.
2017: Two-way communication in brain-machine interface achieved for the first time.
Today, biotechnology is being used in countless areas including agriculture, bioremediation and forensics, where DNA fingerprinting is a common practice. Industry and medicine alike use the techniques of PCR, immunoassays and recombinant DNA.
Genetic manipulation has been the primary reason that biology is now seen as the science of the future and biotechnology as one of the leading industries.
A version of this article was originally published on Brian Colwell’s website as “A Giant-Sized History of Biotechnology” and has been republished here with permission from the author.
Brian Colwell is a technology futurist with an investment thesis focused on disruptions in this next Industrial Revolution. His research areas include agriculture, biotechnology and artificial intelligence. Follow @BrianDColwell on Twitter and at his website.
This article first appeared on the GLP on September 8, 2020.
Anti-GMO activists, many of them financed by organic food companies, claim that the GMO safety consensus is based on biotech industry-funded studies and thus cannot be trusted. Gary Ruskin, co-founder of the organic food advocacy group U.S. Right to Know (USRTK), argues:
The agrichemical companies are unlikely to support research that may undermine their financial interests. Meanwhile, there is a declining amount of public funds available for agricultural research. That means less funding for independent studies to assess health and environmental risks of genetically engineered food and crops.
According to Michael Hansen, a critic of GM foods with Consumers Union, “Look at what the FDA says when they approve a food: ‘It is our understanding that Monsanto has concluded this is safe.’ They just rubber-stamp it.”
The assertion that biotech companies do the research and the government just signs off on it is inaccurate. In practice, companies finance and execute voluntary testing, as that’s the way the US approval process was set up in the 1980s. But absolutely every biotech firm “volunteers.” That’s because the FDA can stop any GMO crop from going to market. Moreover, regulatory review by the USDA and the EPA is mandatory in every sense.
This shared regulatory responsibility is divided up based on each agency’s expertise. The FDA evaluates all foods grown from genetically modified seeds to confirm they are “substantially equivalent” to their conventional counterparts, ensuring that the new foods are nontoxic and nonallergenic. The USDA evaluates GMO crops to see if they will pose a plant pest risk once released into the environment. And as a final layer of regulatory oversight, the EPA evaluates insect- and virus-resistant GMO crops, “plants that are pesticidal in nature,” the agency says, to ensure they won’t pose a threat to the environment or human health.
International standards for industry-funded research are similarly rigorous. The European Food Safety Authority (EFSA) mandates that biotech companies demonstrate their products are substantially equivalent to foods already available in EU supermarkets, before the new items can be sold. Food safety rules established by the UN’s Food and Agricultural Organization (FAO) likewise declare that studies must identify any possible allergen or toxin that may be present in GMO crop varieties before they can be commercially grown. No other foods, including organic products, intended for human consumption face such extensive safety evaluations.
Furthermore, this entire process is subject to extensive peer input and criticism in the form of public comments from independent medical and scientific experts. This virtually eliminates the possibility of “powerful corporations” buying science that favors their economic interests, a practice USRTK’s Ruskin argues is widespread. The Science Media Centre, a UK-based science outreach group, adds:
[T]here are … mechanisms within science to protect experiments from [industry] influence. Experimental design and the peer review system should protect research from bias and, on top of that, [most research] institutes have contracts with industry which allow researchers freedom to publish the facts as they are discovered.
After the initial round of approval studies, independent researchers often do their own analyses, which typically confirm the results of industry studies. For example:
In a meta-review published in 2013 in Critical Reviews in Biotechnology, a peer-reviewed, high-impact-factor journal, the authors evaluated 1,783 research papers, reviews, relevant opinions, and reports published between 2002 and 2012, a comprehensive process that took over 12 months to complete. The review covered all aspects of GM crop safety, from how the crops interact with the environment to how they could potentially affect the humans and animals who consume them.
The main conclusion to be drawn from the efforts of more than 130 research projects, covering a period of more than 25 years of research, and involving more than 500 independent research groups, is that biotechnology, and in particular GMOs, are not per se more risky than e.g. conventional plant breeding technologies.
A February 2015 study published in the journal Nature Biotechnology also challenged the view that biotech firms control the safety evaluations of their products. For the study, Miguel Sanchez, a scientist with the biotech firm ChileBio, evaluated the funding sources of 698 studies published between 1993 and 2014 that looked at the environmental and human health impacts of GMOs. Sanchez found that 60 percent of the scientists involved in these studies had no financial relationship with biotech companies. Cornell University’s Alliance for Science summarized the study’s conclusions:
58.3% of published papers ‘have no financial or professional COIs, as the authors were not affiliated with companies that develop GM crops and also declared that the funding sources did not come from those companies,’ Sanchez reports.
Why do so many people on the political left and right, who agree on almost nothing, share a deep-seated fear of pesticides? Drugs that seem worthless may still affect our health. Researchers can boost science literacy by making a greater effort to engage the public on social media. But how exactly do they do it?
Join hosts Dr. Liza Dunn and GLP contributor Cameron English on episode 240 of Science Facts and Fallacies as they break down these latest news stories:
Progressives and conservatives agree on very little. But when it comes to the dangers of pesticides, these political opponents often join forces in an attempt to ban the products that farmers rely on to protect their crops. What holds this odd coalition together? Some combination of distrust in public health officials and fear of technology. Scientists may have to develop new ways to address the concerns of these skeptical consumers without undermining their deeply held beliefs.
Americans spend almost $2 billion a year on the decongestant phenylephrine. Those stratospheric sales figures suggest that the drug alleviates common symptoms of congestion, but a large body of evidence indicates that phenylephrine, when taken orally, is largely ineffective, leading the FDA to declare in September that “current scientific data do not support that the recommended dosage of orally administered phenylephrine is effective as a nasal decongestant.” Why, then, do so many people use it? It might just come down to the placebo effect.
Millions of people rely on social media as their primary source of information on a variety of topics; they also have high trust in scientists. As a result, researchers can promote scientific literacy among the general public by engaging with users on platforms such as Facebook and YouTube, a recent study has found. By partnering with influencers and interacting with people one on one, scientists can make progress in building support for the use of biotech crops and other important technological innovations in agriculture.
Dr. Liza Dunn is a medical toxicologist and the medical affairs lead at Bayer Crop Science. Follow her on X @DrLizaMD
The BBC has revised misleading and factually inaccurate statements about different farming systems on its exam revision website BBC Bitesize following a complaint lodged by pro-science think-tank Science for Sustainable Agriculture.
BBC’s original post contained simplistic assertions aimed at students studying under General Certificate of Secondary Education (GCSE) and National 5 guidelines, such as “intensive farming reduces biodiversity and increases pollution”, “organic milk and beef are produced without using antibiotics”, and “organic farmers do not apply pesticides to their crops”.
These factually incorrect statements have now been removed from the site.
In its original letter to BBC director general Tim Davie, SSA pointed out that these statements were factually incorrect. Organic livestock farmers do use antibiotics (including those produced using GMOs), and they do use pesticides, some of which have been shown to have a more toxic and environmentally damaging profile than their synthetic counterparts.
SSA also observed that the BBC’s statement that intensive farming reduces biodiversity and increases pollution is not supported by the scientific evidence. For example, a 10-year study published in Nature in 2018, led by Professor Andrew Balmford, a conservation scientist at Cambridge University, found that the most effective way to keep pace with increasing human demands for food while protecting habitats and preventing further biodiversity loss is through high-tech, high yield production on a smaller footprint, allowing other land to be set aside for nature and carbon sequestration.
Commenting on the research, Professor Balmford said: “We found that the external harms of high-yielding systems quite often turned out to be much lower than those of more extensive systems, such as organic farming. In terms of nitrogen and phosphate losses, from different dairy systems, for example, the difference was a factor of two. So, if you want to reduce pollution, you should probably avoid organic milk.”
In a letter replying to the think-tank, BBC Head of Education Helen Foulkes thanked SSA for drawing these issues to the BBC’s attention, noting that “whilst we regularly review our content to ensure its fidelity and curriculum relevance, we always welcome expert feedback which holds us to account and helps us meet our editorial standards and obligations to young learners.”
Following an internal BBC review, cross-checked by independent education consultants, the letter from Ms Foulkes acknowledged that the statements highlighted by SSA were indeed “misleading” and “do not reflect the changing science and views on the impacts of organic farming since this content was initially commissioned.”
Welcoming the changes made by the BBC to include more balanced and science-based information about different farming systems, the Science for Sustainable Agriculture advisory group issued the following statement:
These welcome changes by the BBC show just how important it is to stand up for the science. Many of us have warned for some time that we cannot afford to be complacent with something as fundamental as food security. The world needs to increase food supplies by 70% by 2050 to keep pace with a rapidly expanding global population, in the face of climate change and increasing pressure on the world’s finite natural resources.
The future for agriculture does not lie in turning back the clock, as some believe, but in embracing high-tech solutions, applying scientific data and evidence, and combining innovation with established best practice and knowledge from a range of farming systems. It is vital that future generations are guided by the science, not by outdated doctrine and ideology. That way we have the best chance of feeding an increasingly hungry, warming planet in the most sustainable way.
As well as removing these misleading statements from its educational material, we would urge the BBC to focus on some of the inspiring agricultural technologies – for example in genetics, digital science, precision engineering, AI and biologicals – which can help our farmers respond to the food security challenge while at the same time mitigating and adapting to climate change, protecting biodiversity and conserving precious natural resources.
A version of this article was originally posted at Science for Sustainable Agriculture and has been reposted here with permission. Any reposting should credit the original author and provide links to both the GLP and the original article. Find Science for Sustainable Agriculture on X @SciSustAg
I just used ChatGPT for the first time. Initially, I was concerned about my future as the chatbot near-instantaneously answered my queries on increasingly obscure terms from my field, genetics. Stumping the AI tool, however, took only about 10 minutes.
ChatGPT was released on November 30, 2022, by OpenAI/Microsoft. “Chat Generative Pre-trained Transformer” is a little like Google on steroids. But after my brief encounter, I can’t help but wonder whether it can handle the nuance, context, humor, and creativity of a human mind. Could it replace me as a textbook author?
My career
I’ve been writing life science tomes for a long time. My favorite has always been Human Genetics: Concepts and Applications, the first edition published in 1994, at the dawn of the human genome sequencing era. The 14th edition was published this week by McGraw-Hill. A revision takes two years: one for updating and addressing reviewers’ suggestions, another for “production,” from copyediting through final pages. Then, a year off.
As genetics morphed into genomics, artificial intelligence stepped in, layering the combinatorial information of comparative genomics onto DNA sequences. Training on data sets and then searching for patterns could be used to deduce evolutionary trees depicting species relationships, in ancestry testing and forensics, and in identifying sequences of mutations that appear as a cancer spreads.
ChatGPT is too recent for me to have used it in revising the new edition, but I’m curious now. I could imagine it spitting out definitions, but a textbook is much more than “content.” A human author adds perspective, experience, and perhaps knowledge beyond what ChatGPT can extract from the Internet.
Genetics textbooks and generative artificial intelligence
Genetic research unfurls and extracts reams of information, millions and even billions of data points. Train an algorithm on the DNA sequences of a known disease-causing gene, then search for identical or highly similar sequences in cells from other individuals to assist diagnosis.
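To make that kind of search concrete, here is a minimal sketch in Python. It uses a simple sliding-window comparison rather than a trained model, and the “known_variant” motif, the patient sequence, and the one-mismatch threshold are all hypothetical; a real diagnostic pipeline works on far longer sequences with statistically calibrated alignment tools.

# A minimal sketch of the pattern search described above -- not an actual
# diagnostic pipeline. Sequences and the mismatch threshold are hypothetical.

def count_mismatches(a: str, b: str) -> int:
    """Number of positions at which two equal-length sequences differ."""
    return sum(1 for x, y in zip(a, b) if x != y)

def find_similar(known_variant: str, patient_seq: str, max_mismatches: int = 1):
    """Slide the known disease-associated motif along a patient sequence and
    report every window that is identical or nearly identical to it."""
    k = len(known_variant)
    hits = []
    for i in range(len(patient_seq) - k + 1):
        window = patient_seq[i:i + k]
        if count_mismatches(known_variant, window) <= max_mismatches:
            hits.append((i, window))
    return hits

# Hypothetical example: one exact hit and one single-mismatch hit.
known_variant = "GATTACA"
patient_seq = "TTGATTACAGGCGATTGCATT"
print(find_similar(known_variant, patient_seq))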
The type of AI that could rewrite textbooks is called generative, the G in GPT. It learns the patterns and structures of “training” data and generates similar combinations of new data, producing text, images, or other media.
So could ChatGPT write a textbook like mine? I don’t think so.
I can imagine generative AI writing a novel similar to those of popular author Colleen Hoover. In fact, I long ago published a fantasy piece in Playgirl after listing the words and phrases in similar articles and creating a fresh scenario. It involved a tornado and a wheelbarrow in rural Indiana.
Like Colleen Hoover’s fiction and Playgirl fantasies, textbooks have a highly distinctive style. But textbooks have a lot more than a single narrative per chapter. The process also entails selecting photos, designing illustrations, and creating the pedagogical tools – questions, summaries, references, boxed readings. So here’s a brief history of textbook publishing, followed by what I suspect AI might not be able to execute as well as a human author could.
The evolution of a life science textbook
My first textbook, Life, was introductory biology, not to be confused with Keith Richards’ autobiography with the same title. Back then, sales reps were armed with bells and whistles to boost market share – test banks, instructor’s manuals, case workbooks. What a difference today! E-textbooks are embedded with “adaptive learning” multiple choice questions. The learner (once called a student) receives instant feedback on why each incorrect choice is incorrect.
Each edition brought new fonts, design elements, and palettes, to make the new version appear different, because some topics – mitosis, Mendel, DNA – don’t change. Tomes were split into shorter versions, like calving icebergs.
The first E-books date from the 1990s. Now, college tuition includes a fee to license an e-textbook for a limited time. “E” might also stand for “ephemeral.”
New for my 14th edition, a company intensely scrutinized everything I wrote with a DEI lens – diversity, equity, and inclusivity. I recounted discovering my gaffes here.
Because traditional textbooks are costly, occasionally efforts arise to replace them. But cobbling together a course from online materials, or from lecture notes and test questions, takes more time and effort than most would-be authors may realize. And the free online textbooks that appeared a few years ago lacked the editorial and reviewer scrutiny that an academic publisher provides.
A less tangible skill of creating a textbook is innate writing talent. The elements of style are subtle. How many academics, or ChatGPT, change passive to active voice? Rewrite to omit “there are” and other redundancies? Avoid overusing words? Alter sentence and paragraph lengths? Organize the material into logical A, B, and C heads?
Can AI mimic the creativity of a human mind?
AI may quickly assemble a table listing DNA replication enzymes or compile technology timelines. But how might an algorithm, no matter how well-trained, mimic an author’s choice of examples or develop case studies based on in-person interviews with people who have genetic diseases?
Would AI improve upon my stem cell similes?
The difference between a stem cell and a progenitor cell is a little like a college freshman’s consideration of many majors, compared to a junior’s more narrowed focus in selecting courses. Reprogramming a cell is like a senior in college deciding to change major. A French major wanting to become an engineer would have to start over, taking very different courses. But a biology major wanting to become a chemistry major would not need to start from scratch because many of the same courses apply to both majors. So it is for stem cells.
What about pedagogy? AI could regurgitate fill-in-the-blank or multiple choice questions. But could it create my critical thinking exercise of Venn diagrams of three SARS-CoV-2 variants with a few shared mutations? I ask the reader to apply the genetic code rules to predict which changes are most likely to threaten public health.
I try to make my questions fun.
Would ChatGPT come up with end-of-chapter queries based on the inheritance of wiry hair among the tribbles of Star Trek? Trace a rare blood type in a family on General Hospital? Create a pedigree for SORAS (soap opera rapid aging syndrome), the condition that permeates the Newman family on The Young and the Restless?
Borrowing from science fiction continues in an evolution chapter, asking the learner to identify the principle that these themes illustrate:
In Seveneves by Neal Stephenson, the moon shatters. In two years the pieces will smash into the Earth and make the planet uninhabitable for centuries. Some people already living on huge space stations survive, and others are selected to join them. Everyone else dies under the barrage of moon junk and intense heat. Above, on the “Cloud Ark,” the human species dwindles, but eventually resurges from seven surviving women, with help from assisted reproductive technologies to make babies. Five thousand years after the moon blows up, the human population, ready to inhabit a healed Earth, has resurged to 3 billion. (A POPULATION BOTTLENECK)
In Children of the Damned, all of the women in a small town are suddenly impregnated by genetically identical beings from another planet. (NON-RANDOM MATING)
In Dean Koontz’s The Taking, giant mutant fungi kill nearly everyone, sparing only young children and the few adults who protect them. The human race must reestablish itself from the survivors. (FOUNDER EFFECT)
What about history? AI might easily assemble chronologies. But would it combine the 1961 deciphering of the genetic code by Marshall Nirenberg and Heinrich Matthaei in a “Glimpse of History” boxed reading with Katalin Karikó and Drew Weissman’s invention of the first mRNA-based vaccine?
Finally, can AI use humor? Would it deliver an end-of-chapter question like this one on forensic DNA testing:
Rufus the cat was discovered in a trash can by his owners, his body covered in cuts and bite marks and bits of gray fur clinging to his claws—gray fur that looked a lot like the coat of Killer, the huge hound next door.
It earns an A+ in returning definitions of obscure technical terms, like tetrachromacy (enhanced color vision from a fourth type of cone cell) and chromothripsis (shattered chromosomes).
ChatGPT accurately distinguishes gene therapy from gene editing. The tool doesn’t oversimplify gene therapy as “replacing” a gene, but returns “Gene therapy involves introducing new or modified genes into a person’s cells to correct or replace a faulty gene or to provide a therapeutic function.” That definition covers all bases.
My new textbook edition has a boxed reading on how the viruses behind flu and COVID differ. Again, ChatGPT returns more than I’d want to know about the two pathogens, comparing and contrasting. I could envision a student using the response in an assignment – I’m glad my professor days are over!
ChatGPT clearly distinguishes driver and passenger mutations in cancer, although my textbook definition begins with context:
A driver of a vehicle takes it to the destination; a passenger goes along for the ride.
A disclaimer for ChatGPT reads “While we have safeguards, ChatGPT may give you inaccurate information.” Apparently it also makes errors of omission, which I discovered when I asked it about something else I’d written about: how to make an Impossible Burger. Not simply the ingredients, but the biotechnology behind this brilliant invention.
ChatGPT’s explanation starts accurately enough:
The Impossible Burger is a plant-based burger patty designed to mimic the taste and texture of traditional beef burgers. While I don’t have access to the exact recipe or process used by Impossible Foods, I can provide a general overview of how plant-based burgers like the Impossible Burger are typically made.
It then lists the general steps for making the variations on the traditional veggie burger theme found in supermarkets. But Impossible Burgers are not at all like the others!
That’s just not good enough, despite the humanizing effect of the first person answer.
ChatGPT apparently didn’t read my article, Anatomy of an Impossible Burger, which I posted here at DNA Science in May 2019. That’s about as straightforward a headline as I could come up with.
My source? The Patent and Trademark Office database! It took only a few minutes of searching. The patent application is 52 pages, filed in 2017 following years of published research. It includes hundreds of related patents and publications, many in mainstream media. ChatGPT couldn’t find any?
The tool had no access to the exact recipe or process? The approach genetically alters yeast cells to produce a soybean’s version of hemoglobin, called leghemoglobin, which imparts the “mouthfeel” and appearance of dead cow flesh.
Not only did I blog about the Impossible Burger in 2019, but I published a version in the thirteenth edition of my textbook, three years ago!
But I’m relieved, not insulted, at flying under the ChatGPT radar, for it’s nice to know that my skill set is not yet obsolete. Although I do have an issue with typing ChatGPT: in earlier drafts it repeatedly came out of my brain as GTP, perhaps after the nucleotide GTP – guanosine triphosphate.
Ricki Lewis has a PhD in genetics and is a science writer and author of several human genetics books. She is an adjunct professor for the Alden March Bioethics Institute at Albany Medical College. Follow her at her website or X @rickilewis
A version of this article was originally posted at PLOS and has been reposted here with permission. Any reposting should credit the original author and provide links to both the GLP and the original article. Find PLOS on X @PLOS
In Part 1 of this series, we showed that pesticide use has decreased dramatically per unit of food and fiber produced, as well as decreasing on a per capita basis (per person fed), even as it has only slightly decreased in use per acre. This is because the amount of food that each acre produces (its yield) has steadily increased since 1985.
Although pesticide toxicity continues to be a concern for many consumers, commonly-used pesticides have decreased in toxicity and improved in biodegradability over the years. However, debates concerning the safety of pesticides typically focus on whether or not to use pesticides at all, presenting organic and conventional as the two opposing sides, even though both approaches to farming use pesticides. They typically do not include nuanced comparisons of how pesticides have changed over time and how organic and conventional pesticides compare on common measures of toxicity and biodegradability. This post compares many of the most commonly-used pesticides to show where improvements have been made and where future improvements are still needed.
2: How have pesticides improved over the last 40 years?
Overall, fewer and fewer highly toxic pesticides are being used on crops. The EPA categorizes pesticides as highly toxic (as toxic as the nicotine used in e-cigarettes), moderately toxic (as toxic as the caffeine in coffee), slightly toxic (as toxic as the vanillin in vanilla beans), or practically non-toxic. Between 1991 and 2013, the percentage of pesticides used on Washington apples that qualify as highly toxic dropped from 10% to 0.2% (data from the USDA). Since 1990, the percentage of non-sulfur pesticides used on California premium wine grapes that qualify as moderately toxic has dropped from 50% to less than 10%.
The USDA has also calculated how pesticide quality has improved since the 1960s by evaluating toxicity (inverse of the water quality threshold), potency (measured as the inverse of the application rate per crop year), and persistence (half-life) of many common pesticides*. Their calculations show that pesticides have become less toxic, more potent (leading to lower application rates), and less persistent in the environment.
Clearly, pesticides in general have improved over time. But what about specific, commonly-used pesticides? There is a lot of variability between various pesticides, even those commonly used today. Let’s take a closer look at the top five most commonly-used pesticides in conventional and organic farming.
The following graph includes the five most commonly-used synthetic pesticides from 2016 as compared to the most common organic pesticides in 2016 as well as the most common synthetic pesticides from 1976. One way to measure the toxicity of chemicals is to measure the chronic no-observed effect level (NOEL). This value is the highest dosage level at which chronic exposure to the substance shows no adverse effects. This means that a high chronic NOEL represents a low-toxicity substance – the higher the number, the lower the toxicity. For the purpose of this graph, we subtracted the chronic NOEL from 1000 so that a high value would represent a high-toxicity substance.
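To make the transformation concrete with purely hypothetical figures: a pesticide with a chronic NOEL of 900 on the graph’s scale would be plotted at 1000 − 900 = 100, while a far more toxic compound with a chronic NOEL of 50 would be plotted at 1000 − 50 = 950, so taller bars indicate more toxic substances.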
Chronic NOEL values for the most-used organic pesticides are not available because they are considered very low toxicity and are generally considered safe based on their normal existence in the environment. The chronic toxicity of synthetic pesticides has decreased since 1976, especially with the increased use of glyphosate, which is one of the least toxic pesticides used.
The biodegradability of pesticides is also an important factor when attempting to assess possible negative effects of pesticide use. One way to measure this is to calculate the soil half-life: the time it takes for a pesticide to reduce by half in soil. When pesticides have a short half-life, this means they are quickly degraded and do not persist in the environment very long. This typically means they have less of a negative impact on the environment. *A soil half-life is not available for calcium hydroxide because it is used as a soil conditioner and therefore is viewed as a benefit to the soil when applied.
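As a back-of-the-envelope illustration of what a soil half-life implies, here is a minimal sketch in Python assuming simple first-order (exponential) decay; the half-life figures in it are hypothetical placeholders, not values taken from the pesticides discussed in this series.

# A minimal sketch of exponential decay in soil; half-lives are hypothetical.

def fraction_remaining(days_elapsed: float, half_life_days: float) -> float:
    """Fraction of the originally applied pesticide still present in soil."""
    return 0.5 ** (days_elapsed / half_life_days)

# A compound with a 10-day half-life is roughly 87% gone after 30 days,
# while one with a 100-day half-life is still roughly 81% present.
print(round(fraction_remaining(30, 10), 2))    # 0.12
print(round(fraction_remaining(30, 100), 2))   # 0.81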
In general, pesticide soil half-lives have shortened over time. The most commonly-used organic pesticides have slightly shorter half-lives than synthetic pesticides. The reduction of the use of toxaphene (it was banned in 1990), which has an extremely long half-life, has made room for other pesticides with much shorter half-lives.
Another way to represent the environmental safety of a pesticide is the Environmental Impact Quotient (EIQ), which takes into account the soil half-life as well as the toxicity to different types of animals including birds, bees, and fish. Although some have criticized the EIQ and suggest other methods for estimating environmental risk of pesticides, the EIQ provides a single value to compare the environmental impact of various pesticides. It is helpful in this case to get a general sense of the influence of pesticides on the environment over the last few decades. Learn more about the Environmental Impact Quotient at the Cornell College of Agriculture and Life Sciences website. *EIQ values for toxaphene and propachlor are not available because toxaphene was banned in the U.S. in 1990 (before the EIQ was developed) and Monsanto discontinued manufacturing propachlor in 1998. An EIQ value for calcium hydroxide is also not available because some of the values needed to calculate the EIQ are not available, including chronic toxicity, as mentioned earlier.
The environmental impact has decreased as newer pesticides are used that have lower toxicity and lower use rates.
Many arguments surrounding pesticide use do not take into account the changes and improvements in pesticide toxicity over time or include a risk comparison to give context to possible risks. For example, if one pesticide is banned, what will take its place? Will a more toxic pesticide be used instead? When making decisions about the use of pesticides, it is always necessary to compare the various risks. How can we increase crop yields to meet a growing population while minimizing risk from pesticide toxicity and environmental impact? Consumers regularly purchase products that carry what are considered moderate risks, including caffeine and vanillin. Therefore, it is important to give context to the possible risks of pesticides and balance risks with the benefits that pesticides provide. Even though pesticide safety has improved dramatically over the last 40 years, there are still concerns surrounding specific pesticides, especially glyphosate. See the next post in this series to learn about this contentious chemical.
Kayleen Schreiber is the GLP’s infographics and data visualization specialist. She researched and authored this series as well as creating the figures, graphs, and illustrations. Follow her at her website, on Twitter @ksphd or on Instagram @ksphd
Marc Brazeau is the GLP’s senior contributing writer focusing on agricultural biotechnology. He also is the editor of Food and Farm Discussion Lab. Marc served as project editor and assistant researcher on this series. Follow him on Twitter @eatcookwrite
A version of this article first ran on GLP on January 23, 2019.
“The promise of genetic testing is that it can tell you more about the way you’re built so that you can tailor your lifestyle to fit your biology,” wrote Anne Machalinski, an ex-marathon runner and mom of three kids, in an article in Self magazine a few years ago.
Anne is a true believer. Since becoming a mother she had gotten out of shape, going from running a 5-minute mile to chugging her way around the track. Her exercise routine and diet plan were not working. She tried everything from lifting weights to increasing her vegetable intake to cutting down on alcohol, and experimented with various crash diet plans, all to no avail.
In frustration, Anne tried something that is becoming increasingly popular among the scientifically inclined amongst us: she got a DNA test from FitnessGenes, a UK-based online company, to come up with a tailored fitness and health plan.
FitnessGenes mapped 42 of her genes and their variants, known as alleles. According to the company, the tests revealed that her personal choice for weight loss—training for a marathon—was a bad one.
Because I have two copies of what some call a “sprint” allele on my ACTN3 gene, I can produce a protein found in fast-twitch muscle fibers. I also have two copies of an allele associated with power and strength on my ACE gene. Combined together, … these findings mean that I’m naturally fast and strong, with muscles that recover quickly after a workout. If I want to decrease my body fat, [I was instructed], I should drastically cut down on long, slow endurance runs, which likely blunt my body’s ability to use fat as an energy source. Instead, I should focus on getting in about five high-intensity, low volume strength and interval workouts per week.
Anne was an early adopter of what its proponents believe is a life-changing technology.
“Developing personalized diet and exercise plans could well be one of the most important fitness revolutions of the 21st century,” said Dan Reardon, CEO and co-founder of FitnessGenes.
Is that science speaking or the dubious plug of a still-unproven technology?
What’s the science behind the promise?
Personalized DNA plans are being promoted as the promised land for treating many illnesses. Two decades after the Human Genome Project was completed, the market for over-the-counter DNA-based products is booming. One of the hot categories has become known as “nutrigenomics” — the promise of better understanding how food and exercise can modify our predispositions to disease and our immune functions.
Nutrigenomics, as its proponents call it, is a booming industry. Testing companies brought in more than $170 million in revenue in 2018, according to Global Market Insights, driven in large part by direct-to-consumer genetic-testing companies including 23andMe and Ancestry.
There are now dozens of niche companies making nutrition-based claims, from modest to hyperbolic. DNAWeekly, a blog, put together a list of what it claims are 10 of the best-known companies pushing gene-based weight loss strategies. A company called My Toolbox Genomics promises the moon:
Nutritional test screens over 50 different genes to tell you everything you need to know about your diet. Our report showed which foods our test taker should be adding to or removing from their diet, not only to lose weight but also to increase overall health.
Customers are clicking away in massive numbers. April Summerford, a women’s fitness coach, spent hundreds of dollars on at-home DNA tests to curate the perfect diet plan for herself, and is a true believer.
“I’ve been able to biohack my way to feeling better through what, I think, is the future of wellness,” said April, who believes that a broad plan for good health is not effective anymore and a curated personalized map is the solution.
And yet the accuracy of the promised claims and gains remains a grey area.
Nature v Nurture
As any geneticist will tell you, we are a product of both nature and nurture. Genetics alone do not determine our traits; the environment around us plays a vital role in regulating our genetic expression. For example, genetic traits such as height or intelligence might be inherited, but environmental factors such as malnourishment during or after pregnancy and harsh living conditions affect the outcome. Understanding these external influences on our genetic expression can provide us with tools to limit the negative impacts and boost the positive ones.
Looking at the specific relationship between nutrition and genetics, the genetic variability within us can be translated into the differences we may have in nutritional processes such as metabolism, absorption and excretion. And these differences then contribute to our nutritional requirements.
For example, a person who suffers from celiac disease—a hereditary disorder—cannot consume gluten (a common protein found in wheat, rye and barley) without triggering a serious auto-immune response. This is because, for them, gluten causes the body to attack the small intestines, which then interferes with proper nutrient absorption in the body.
For a person living with celiac disease, awareness of nutrition and how specific food choices can impact their health is crucial in minimizing long-term health problems. With one in 100 people worldwide living with celiac disease, understanding such intricacies of how what we eat affects us has become even more important.
In this broad context, personalized medicine (which is not the same as an at-home DNA nutritional test) can be extremely helpful.
“It embraces this idea that despite all of us being 99.9% the same, there is that 0.1% that truly determines how you respond to the world around you,” notes Dr. Yael Joffe, founder and chief science officer of 3X4 Genetics.
As she correctly notes, it’s the 0.1% variation between us that allows us to react differently to similar situations, from drug reactions to the efficacy of a medicine. The goal of personalized medicine, therefore, is to customize medical decisions for an individual person, tailored to their needs, to create better health outcomes.
The challenge is: How do you translate this factoid into a personalized nutritional plan? That’s a lot trickier.
Disease susceptibility linked to multiple genes and complex environmental factors
Companies such as Habit, DNAfit and Nutrigenomix all offer DNA-based nutritional recommendations using genetic testing to identify genes for weight loss, eating habits and more. They collect your DNA from a cheek swab or a saliva sample. They then generate a report that details which specific gene variations in that person are “associated” with celiac disease, sensitivity to fats, lactose intolerance and more. Next come recommendations on diet and lifestyle changes to improve health, supposedly gleaned from the DNA analysis.
“Associated” is the key word here, as critics of these tests point out. Genes are almost never linked to an outcome in a simple cause-and-effect way, as when, say, single gene mutations are strongly predictive of inherited forms of Parkinson’s disease, or when specific genes are linked to breast or ovarian cancer. There are an estimated 7,300 so-called Mendelian single-gene disorders, which are rare and usually inherited.
So, a nutritional genomic test can be reliable if used to test for monogenic diseases such as galactosemia, which affects how the body processes a simple sugar called galactose. This sugar is commonly found in many foods, and if the disease is left undiagnosed, a person can suffer from feeding difficulties, lack of energy, loss of weight and more serious complications. In cases such as this, a genetic test can be really valuable in diagnosing a disorder, offering predictive and accurate results.
There are many pitfalls in trying to analyze complex health factors and nutrition impacts using genetics. Many companies focus on macronutrients such as the metabolism of lipids and carbohydrates. For that they use FTO (fat mass and obesity associated) genetic variants. The FTO gene variant is widely considered one of the variants most strongly associated with BMI.
However, outside of this one gene, nearly 900 other gene variants also exist that have a phenotypic association with BMI. One identified variant (or even a few) cannot explain a person’s genetic predisposition to weight loss.
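As a rough illustration of why a many-variant tally is needed, here is a minimal sketch in Python of a toy polygenic score. Every effect size and genotype below is hypothetical, chosen only to show how small a share of the total even the largest single variant, such as FTO, contributes on its own.

# A toy polygenic score; all effect sizes and genotypes are hypothetical.
# Real BMI scores sum small contributions from hundreds of variants, so no
# single variant, FTO included, captures much of the predisposition.

fto_effect = 0.08                 # hypothetical per-allele effect of the FTO variant
other_effects = [0.01] * 200      # 200 hypothetical smaller-effect variants

# A hypothetical person carrying one risk allele at every site.
score_fto_only = fto_effect * 1
score_full = score_fto_only + sum(effect * 1 for effect in other_effects)

print(f"FTO alone:        {score_fto_only:.2f}")   # 0.08
print(f"all 201 variants: {score_full:.2f}")       # 2.08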
But many DNA test companies try to circumvent the hard science by claiming to offer DNA-linked insights not just for monogenic conditions but also for complex diseases such as obesity, which is likely caused by multiple genes and complex lifestyle factors.
For example, Personal Diagnostics, a UK-based company, supplies at-home DNA kits including an ‘obesity risk DNA test.’ In this test, the company analyzes 4 ‘fat genes’ to assess susceptibility to weight gain and influences on BMI and blood lipids. The test claims it encourages better management of diet through curated health insights, which in turn will help prevent conditions linked with obesity such as diabetes, high blood pressure and cardiovascular disease. But the evidence for such grandiose claims is not clear.
“There is convincing evidence that common diet-related diseases are influenced by genetic factors, but knowledge in this area is fragmentary and few relationships have been tested for causality,” wrote Ulf Görman in 2013, examining the ethics behind nutritional based genetic testing. “The evidence that genotype-based dietary advice will motivate appropriate behavior changes is also limited.”
Genetic astrology: Upstart companies downplay their claims in small print
With the science behind the nutritional DNA testing still new and somewhat shaky, too much information and scientific jargon can confuse customers. While building up consumer expectations of discovering a DNA solution to their health concerns, companies often release complex or hazy information, leaving customers to try and figure out what is science and what is snake oil.
Concerns about the proliferation of junk science and overpromising caught the eye of the FDA 10 years ago in a challenge to 23andMe, now the largest global direct-to-consumer (DTC) DNA testing company. The FDA was concerned that a test could give a false-positive or a false-negative based on a disease-linked association, prompting a patient to self-diagnose or treat herself, which in turn could create a bigger health risk.
“A direct-to-consumer test result may be used by a patient to self-manage; serious concerns are raised if test results are not adequately understood by patients or if incorrect test results are reported,” the FDA highlighted when it temporarily banned sales of 23andMe’s health-related tests in the US in 2013.
Since then, DTC companies have been able to protect themselves from liability for claims by adding disclaimers, such as the one displayed by DNAfit for its $471 Circle Premium test.
In other words, buyer beware; this DNA-based advice could very well be of no value and may even provide misinformation.
Most mainstream scientists are dubious about nutrigenomics
Most scientists are not yet convinced the technology is ready for prime time.
“For complex traits such as diabetes, coronary artery disease or obesity where there are multiple genetic and environmental factors at play, genetic testing is less useful,” said Heidi Rehm, chief genomics officer at the Center for Genomic Medicine at Massachusetts General Hospital.
Understanding complex disorders is just not that simple. Even with a scientific background, looking at the data released by DTC DNA companies can still be vague at best. According to David Mutch, nutritional biochemist at the University of Guelph in Canada:
Scientists working in the field, such as myself, would struggle [to evaluate the evidence behind personalized nutrition companies’ products] because it’s really a bit of a black box, and even for those companies that disclose what exactly they test for, you’re still not entirely sure what science was used to get those particular variants onto their panel of variants that they’re testing for.
Many scientists liken nutrition-based DTC genomics to little more than ‘genetic astrology’. Fine-print warnings that the data provided is not particularly reliable are hardly reassuring.
“I think it’s unusual for us to think of scientific work that doesn’t have legal and scientific standing, with the idea that we need to ‘take this science with a grain of salt,’” says Jonathan Marks, a professor of anthropology at the University of North Carolina, Charlotte. “This is probably the wrong direction for the scientific community to be leading the public in, if we’re apprehensive about people not taking science seriously enough.”
In 2016, 23 geneticists from around the world wrote a consensus statement in the British Journal of Sports Medicine, concluding that DTC DNA tests could not provide science-based guidance. Its key conclusion: although this field has grown tremendously in recent years, the science is still in its earliest stages, and at the moment, tests relying on it hold no value.
One of the world’s experts in this field, Claude Bouchard, director of the human genomics laboratory at Pennington Biomedical Research Center in Baton Rouge, has been unequivocal in his assessment of the claims emanating from this nascent field. After 40 years of research in this area, he is optimistic about the future but not so sanguine on the present.
“We’ve made a lot of progress in the last few decades,” Bouchard has said. But as of today? “When it comes to these current genetic tests for fitness and performance, they have almost zero predictive power.”
For an emerging science, nutrigenomics holds promise, and with time it could yield insights that can be applied in our daily lives. But for now, much like crash diets and other fads, it is closer to being a money-making marketing gimmick: fool’s gold built on the idea that a simple DNA test can tell you what to eat and how to lose weight, and that, like magic, it will work. That’s just not the case.
Mariam Sajid has a Master’s degree from University of Nottingham, UK in molecular genetics and diagnostics and is passionate about improving medical communications that effectively translate scientific advancements to the public. Currently based in Pakistan, she works with digital healthcare companies in developing disease elimination programs for infectious diseases such as HIV and Hepatitis in lower-middle income countries. Follow her on twitter @mariamsajid00
This article previously appeared on the GLP on May 18, 2021.
What I love most about science in general, and genetics in particular, is when new findings upend everything we thought we knew about something. That was so in 1977, when “intervening DNA sequences” – aka “introns” – were discovered to interrupt protein-encoding genes.
Sometimes, we discover new ways that organisms do things. Changing gene expression – the set of genes that are transcribed into mRNA and then translated into proteins under a particular circumstance – is how organisms rapidly respond to a challenge. For an octopus, that might be a sudden plunge in water temperature, which slows enzyme activity.
But some species control genetic responses another way: via RNA editing. Changes to one of the four types of nitrogenous bases in an mRNA can alter the encoded protein in ways that change its function.
In a new report in Cell, Joshua Rosenthal of the Marine Biological Laboratory at Woods Hole and Eli Eisenberg at Tel Aviv University describe how the cephalopods – octopi, squid, and cuttlefish – change mRNAs in ways that alter enzymes. Because the edits are in RNA, and not DNA, they are fleeting. “We’re used to thinking all living things are preprogrammed from birth with a certain set of instructions. The idea the environment can influence that genetic information, as we’ve shown in cephalopods, is a new concept,” said Rosenthal.
In 2015, the researchers discovered that about 60 percent of the mRNAs in the common squid Doryteuthis pealeii are edited. The squid is a classic model organism for nervous system studies. The new study reveals the phenomenon in the California two-spot octopus.
When “gene editing” makes headlines, it’s typically in the context of using CRISPR and other DNA-cutting enzymes. In the cephalopods, it’s different. An adenine (A) in mRNA is replaced with an inosine, a base that interacts with others as if it were a guanine (G). This “A-to-I RNA editing” can introduce a single-base mutation, but an ephemeral one, because it is in RNA. And if it occurs in a protein-coding region in a way that alters the protein’s function, the change affects the phenotype – such as altered enzyme activity that enhances survival under trying conditions.
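Here is a minimal sketch in Python of the recoding idea: because the translation machinery reads inosine as if it were guanine, a single A-to-I edit can swap the amino acid a codon specifies. The codon pair used here (CAG to CGG, glutamine to arginine) is only an illustration, not one of the octopus sites reported in the study.

# A minimal sketch of A-to-I recoding: an edited A is read as G, which can
# change the amino acid a codon encodes. The codon pair is illustrative.

# Just the two codons needed for this illustration (standard genetic code).
CODON_TABLE = {"CAG": "Gln", "CGG": "Arg"}

def translate_with_edits(codon: str, edited_positions=()) -> str:
    """Treat any edited A (now inosine) as G, then look up the amino acid."""
    bases = list(codon)
    for pos in edited_positions:
        if bases[pos] == "A":
            bases[pos] = "G"          # inosine pairs like guanine
    return CODON_TABLE["".join(bases)]

print(translate_with_edits("CAG"))                        # Gln (unedited)
print(translate_with_edits("CAG", edited_positions=[1]))  # Arg (A-to-I edited)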
Although A-to-I editing occurs in the genomes of most animal species, in the cephalopods, it is especially focused on protein-encoding genes. “A big question for us was, ‘What are they using it for?’” Rosenthal posed.
Perhaps RNA editing lies behind some aspects of the cephalopods’ complex behaviors. Because RNA is short-lived and the edits do not persist, the investigators looked to a temporary response, acclimatization to a changing environment – temperature. That matters because it governs the activity of enzymes, which in turn drive chemical reactions crucial to all physiological processes.
Like other cephalopods, the California two-spot octopus (Octopus bimaculoides) they studied cannot generate its own body heat to counteract the temperature drops that accompany tides, changes in water depth, and seasons.
The team observed responses in octopi in tanks at the Marine Biological Lab as well as in animals collected from octopus dens in the wild. When living in cold water tanks (13 degrees C/55 degrees F), the animals showed an increase in 13,285 mRNAs where the gene edits alter the encoded proteins’ functions. This might actually be an underestimate, the researchers write, because they focused on known sites of RNA editing that affect the complex nervous system. But when the octopi were moved to warm tanks (22 degrees C/72 degrees F), the number of edited RNA transcripts plummeted to just 550.
To complement the lab work, Matthew Birk, now at Saint Francis University in Pennsylvania, recorded temperatures near octopus dens in winter and late summer, then collected the animals and catalogued their RNA edits. Results from lab and nature agreed.
The gene edits affect the nervous system, which makes sense given the repertoire of complex behaviors for which the octopus is famous. My Octopus Teacher, a documentary film from 2020, illustrated the animal’s brain power in the true tale of Craig Foster, who followed and bonded with an octopus in a cold kelp forest near Cape Town. Foster trailed the young cephalopod, who was clearly at ease with her human friend, for a year, providing an unprecedented view of how she lives.
The researchers and their colleagues scrutinized two examples of RNA editing that oversee nervous system function, discovering a strong effect on two proteins.
RNA editing speeds the action of kinesin-1, a protein that affects axonal transport – ferrying molecules along the main outgrowths of a neuron. A one-base RNA edit also alters responsiveness of synaptotagmin, which fosters neuron-to-neuron communication. The changes likely happen with gradual temperature shifts, the researchers write.
Rosenthal suspects that RNA editing of these two proteins is just “the tip of the iceberg,” and is likely behind the cephalopods’ superior brains, which oversee spectacular and sudden camouflage, communication, and even puzzle-solving.
The discovery and description of the elegant RNA editing of the octopus provides a compelling example of the fallacy of the widely-held idea of “scientific proof.” The beauty of science, to me, is that new evidence continually alters what we thought we knew.
Ricki Lewis has a PhD in genetics and is a science writer and author of several human genetics books. She is an adjunct professor for the Alden March Bioethics Institute at Albany Medical College. Follow her at her website or X @rickilewis
A version of this article was originally posted at PLOS and has been reposted here with permission. Any reposting should credit the original author and provide links to both the GLP and the original article. Find PLOS on X @PLOS
Used for the temporary relief of stuffy nose, sinus, and ear symptoms caused by the common cold, flu, and allergies, phenylephrine is the active ingredient in Sudafed PE and some versions of Mucinex, Dayquil, Tylenol Sinus, and Advil Sinus Congestion, as well as store brands based on the same formulations. Considering that individuals in the United States are estimated to suffer from more than one billion colds every year, and that a sizable share of sufferers reach for an oral pill of one kind or another, the decision will upend the cold therapy market.
Phenylephrine has been approved by the Food and Drug Administration for more than 45 years. In 2006, it became the main ingredient in over-the-counter decongestants after a federal law required pseudoephedrine, an even older approved ingredient, to be moved behind the counter because it could be illegally processed into methamphetamine.
The original Sudafed tablet with pseudoephedrine remains available behind the counter, without the “PE” designation. Phenylephrine is also an ingredient in nasal sprays used to treat congestion, and it will not have to be removed from those formulations.
The vote was inevitable in view of the findings of a briefing document prepared for the committee by the FDA. It found that the oral bioavailability (the extent to which a medication can be used by the body) of the drug is less than 1%, not the 38% often cited in the literature. A half century ago, when the drug was evaluated, the FDA applied different, less rigorous standards than it does today.
The review also found that the original studies that purportedly supported the efficacy of phenylephrine contained methodological, statistical, and data integrity flaws. According to the Yale School of Medicine, “six of the seven studies considered in the 1970s were submitted by Sterling-Winthrop, one of phenylephrine’s manufacturers, which may have played a role in influencing the original panel’s decision.”
Moreover, the drug is not without significant side effects, including mild upset stomach, trouble sleeping, dizziness, lightheadedness, headache, nervousness, shaking, and rapid heartbeat. At higher doses, it can increase blood pressure.
The NDAC verdict is a blockbuster finding because of the popularity of phenylephrine-containing products. According to the FDA briefing document, 242 million packages or bottles of phenylephrine products were sold in 2022, which resulted in $1.76 billion in sales.
Presumably, many of those purchases were by repeat users who thought that the products offered relief from their symptoms, so what’s going on here?
Placebo effect?
Two phenomena that are well known to physicians can explain this.
The first is the placebo effect, the ability of an inactive treatment, such as the proverbial “sugar pill,” to alleviate a pain or a sign or symptom of illness. A placebo can induce measurable physiological changes like those observed in subjects taking effective medications. Physical signs that have been shown to improve after “treatment” with a placebo include changes in blood pressure, heart rate, and even various blood test results. Symptoms especially amenable to improvement with a placebo include pain management, stress-related insomnia, and fatigue and nausea from cancer treatment.
Thus, the theory would be that you take a phenylephrine pill, which is little more than a placebo, and the illusion of taking an effective medicine makes you believe your symptoms have improved. Scientists believe the same thing goes on with other popular pain-relieving medicines.
A 2020 review published in BMJ that examined data from more than 140,000 patients with various chronic pain conditions found placebo responses could account for as much as 75 percent of the benefits of even effective drugs.
Post hoc, ergo propter hoc fallacy?
The second possible explanation has a more complicated name but is a simple phenomenon. It’s called the post hoc, ergo propter hoc (after this, therefore because of this) fallacy, which mistakenly links two events as cause and effect because one happens after the other.
Let us posit, as the FDA and its advisory committee did, that phenylephrine in pill form is inactive, yet many patients with congestion still take it several times a day for a couple of days. Many of them say they feel better. Inasmuch as it’s used to treat symptoms caused by illnesses like colds, flu, and allergies that resolve on their own without treatment, it’s not surprising that people think, “I took the drug and felt better. The drug worked.”
There is another phenomenon that is another manifestation of the post hoc, ergo propter hoc fallacy: believing, falsely, that there was a bad outcome from a drug. I had a personal near-example of it many years ago. I was a medical resident at a major cancer center. As part of a treatment protocol, I was charged with giving a 4 a.m. intravenous dose of an experimental drug. I was groggy from sleep and had trouble getting all the bubbles out of the syringe. So, there I was, flicking and tapping to get the last bubble out before injecting it, when the patient expired — just stopped breathing and died. (There was a “do not resuscitate” order, so that was that.) It was 4:01 a.m.
If I had administered the drug exactly on time, the patient would have died within seconds of receiving it, and the investigators on the protocol, the maker of the drug, federal regulators and I would all have assumed, incorrectly, that the drug was the cause of death.
There is a saying in medical research that the plural of anecdote is not data — in other words, a single example should never be used to extrapolate a broader rule about, well, anything. That is why we do (or should do!) rigorous clinical trials to test the safety and efficacy of new drugs and other interventions. Had such studies been done properly in the first place with phenylephrine, Americans would have avoided a few side effects and saved lots of money.
Henry I. Miller, a physician and molecular biologist, is the Glenn Swogger Distinguished Fellow at the American Council on Science and Health. He was the founding director of the FDA’s Office of Biotechnology. Find Henry on X @HenryIMiller
We’ve all heard of the five tastes our tongues can detect — sweet, sour, bitter, savory-umami and salty. But the real number is actually six, because we have two separate salt-taste systems. One of them detects the attractive, relatively low levels of salt that make potato chips taste delicious. The other one registers high levels of salt — enough to make overly salted food offensive and deter overconsumption.
Exactly how our taste buds sense the two kinds of saltiness is a mystery that’s taken some 40 years of scientific inquiry to unravel, and researchers haven’t solved all the details yet. In fact, the more they look at salt sensation, the weirder it gets.
Many other details of taste have been worked out over the past 25 years. For sweet, bitter and umami, it’s known that molecular receptors on certain taste bud cells recognize the food molecules and, when activated, kick off a series of events that ultimately sends signals to the brain.
In the case of salt, scientists understand many details about the low-salt receptor, but a complete description of the high-salt receptor has lagged, as has an understanding of which taste bud cells host each detector.
“There are a lot of gaps still in our knowledge — especially salt taste. I would call it one of the biggest gaps,” says Maik Behrens, a taste researcher at the Leibniz Institute for Food Systems Biology in Freising, Germany. “There are always missing pieces in the puzzle.”
Our dual perception of saltiness helps us to walk a tightrope between the two faces of sodium, an element that’s crucial for the function of muscles and nerves but dangerous in high quantities. To tightly control salt levels, the body manages the amount of sodium it lets out in urine, and controls how much comes in through the mouth.
“It’s the Goldilocks principle,” says Stephen Roper, a neuroscientist at the University of Miami Miller School of Medicine in Florida. “You don’t want too much; you don’t want too little; you want just the right amount.”
If an animal takes in too much salt, the body tries to compensate, holding on to water so the blood won’t be overly salty. In many people, that extra fluid volume raises blood pressure. The excess fluid puts strain on the arteries; over time, it can damage them and create the conditions for heart disease or stroke.
But some salt is necessary for body systems, for example to transmit electrical signals that underlie thoughts and sensations. Consequences of too little salt include muscle cramps and nausea — that’s why athletes chug Gatorade to replace the salt lost in sweat — and, if enough time passes, shock or death.
Scientists in search of salt taste receptors already knew that our bodies have special proteins that act as channels to allow sodium to cross nerve membranes for the purpose of sending nerve impulses. But the cells in our mouth, they reasoned, must have some additional, special way to respond to sodium in food.
A key clue to the mechanism came in the 1980s, when scientists experimented with a drug that prevents sodium from entering kidney cells. This drug, when applied to rats’ tongues, impeded their ability to detect salty stimuli. Kidney cells, it turns out, use a molecule called ENaC (pronounced “ee-nack”) to suck extra sodium from blood and help maintain proper blood salt levels. The finding suggested that salt-sensing taste bud cells used ENaC too.
To prove it, scientists engineered mice to lack the ENaC channel in their taste buds. These mice lost their normal preference for mildly salty solutions, the scientists reported in 2010 — confirming that ENaC was, indeed, the good-salt receptor.
So far, so good. But to truly understand how the good-salt taste worked, scientists would also need to know how the entry of sodium into taste buds is translated into a “Yum, salty!” sensation. “It’s what gets sent to the brain that’s important,” says Nick Ryba, a neuroscientist at the National Institute of Dental and Craniofacial Research in Bethesda, Maryland, who was involved in linking ENaC to salt taste.
And to understand that signal transmission, scientists needed to find where in the mouth the signal started.
The answer might seem obvious: The signal would start from the specific set of taste bud cells that contain ENaC and that are sensitive to tasty levels of sodium. But those cells didn’t prove simple to find. ENaC, it turns out, is made up of three different pieces, and although individual pieces are found in various places in the mouth, scientists had a hard time finding cells containing all three.
In 2020, a team led by physiologist Akiyuki Taruno at the Kyoto Prefectural University of Medicine in Japan reported that they had identified the sodium-taste cells at last. The researchers started with the assumption that sodium-sensing cells would spark an electrical signal when salt was present, but not if the ENaC blocker was there too. They found just such a population of cells inside taste buds isolated from the middle of mouse tongues, and these turned out to make all three components of the ENaC sodium channel.
Scientists can thus now describe where and how animals perceive desirable levels of salt. When there are enough sodium ions outside those key taste bud cells in the mid-tongue area, the ions can enter these cells using the three-part ENaC gateway. This rebalances the sodium concentrations inside and outside the cells. But it also redistributes the levels of positive and negative charges across the cell’s membrane. This change activates an electrical signal inside the cell. The taste bud cell then sends the “Mmmm, salty!” message onward to the brain.
Too salty!
But this system doesn’t explain the “Blech, too much salt!” signal that people also can get, usually when we taste something that’s more than twice as salty as our blood. Here, the story is less clear.
The other component of salt — chloride — might be key, some studies suggest. Recall that salt is sodium chloride; when dissolved in water, it separates into positively charged sodium ions and negatively charged chloride ions. Sodium chloride creates the saltiest high-salt sensation, while sodium paired with larger, multi-atom partners tastes less salty. This suggests that sodium’s partner might be an important contributor to the high-salt sensation, with some partners tasting saltier than others. But as to exactly how chloride might cause high-salt taste, “Nobody has a clue,” says Roper.
One hint came from work by Ryba and colleagues with an ingredient of mustard oil: In 2013, they reported that this component reduced the high-salt signal in mouse tongues. Weirdly, the same mustard-oil compound also nearly eliminated the tongue’s response to bitter taste, as if the high-salt-sensing system was piggybacking onto the bitter-tasting system.
And it got odder still: Sour-taste cells seemed to respond to high salt levels, too. Mice lacking one or the other of the bitter- or sour-taste systems were less put off by extremely salty water, while those lacking both happily slurped down the salty stuff.
Not all scientists are convinced, but the findings, if confirmed, raise an interesting question: Why don’t super-salty things taste bitter and sour too? It could be because the too-salty taste is the sum of multiple signals, not just one input, says Michael Gordon, a neuroscientist at the University of British Columbia in Vancouver, who coauthored, with Taruno, a discussion of the knowns and unknowns of salt taste in the 2023 Annual Review of Physiology.
Despite the mustard oil lead, attempts to find the receptor molecule responsible for the high-salt taste sensation have so far been inconclusive. In 2021, a Japanese team reported that cells containing TMC4 — a molecular channel that lets chloride ions into cells — generated signals when exposed to high levels of salt in lab dishes. But when the researchers engineered mice without the TMC4 channel anywhere in their bodies, it didn’t make much difference to their aversion to extremely salty water. “There’s no definitive answer at this point,” Gordon says.
As a further complication, there’s no way to be sure that mice perceive salty tastes in exactly the same way that people do. “Our knowledge of salt taste in humans is actually quite limited,” says Gordon. People can certainly distinguish desirable, lower-salt levels from the foul, high-salt sensation, and the same ENaC receptor used by mice seems to be involved. But studies with the ENaC sodium channel blocker in people vary confusingly, sometimes seeming to diminish salt taste but other times to enhance it.
A possible explanation is the fact that people have a fourth, extra piece of ENaC, called the delta subunit, that rodents lack. It can take the place of one of the other pieces, perhaps making a version of the channel that is less sensitive to the ENaC blocker.
Forty years into investigations of salt taste, researchers are still left with questions about how people’s tongues perceive salt and how the brain sorts those sensations into “just right” versus “too much” amounts. At stake is more than just satisfying a scientific curiosity: Given the cardiovascular risks that a high-salt diet poses to some of us, it’s important to understand the process.
Researchers even dream of developing better salt alternatives or enhancers that would create the “yum” without the health risks. But it’s clear they have more work to do before they invent something we can sprinkle on our dinner plates with abandon, free of health worries.
Amber Dance is an award-winning freelance science journalist based in Southern California. Amber has a doctorate in biology and she edits both nonfiction and fiction books and frequently speaks to groups about her transition from scientist to science writer. Follow Amber on X @amberldance
A version of this article was originally posted at Knowable and has been reposted here with permission. Any reposting should credit the original author and provide links to both the GLP and the original article. Find Knowable on X @KnowableMag
Why do some heavy smokers never get lung cancer? And why do some people who never smoke get lung cancer? Answers are emerging for both of those questions. In both cases, much depends on our individual genetic make-up.
Lung cancer is the second most common cancer worldwide, accounting for 2.2 million new cases and 1.8 million deaths in 2020. It is also the most commonly occurring cancer for which the major cause is both known and preventable. Yet there remain mysteries about causation of lung cancer. How do some heavy smokers manage to avoid lung cancer? And what accounts for the occurrence of lung cancer in people who have never smoked?
In a just-published study, researchers at the Albert Einstein College of Medicine in the Bronx have found that some smokers’ DNA appears to become accustomed to the cancer-causing agents in cigarettes. This may help prevent dangerous mutations that result in lung cancer.
“The heaviest smokers did not have the highest mutation burden,” lead study author Dr. Simon Spivack said in a statement. “Our data suggests these individuals may have survived for so long in spite of their heavy smoking because they managed to suppress further mutation accumulation. This leveling off of mutations could stem from these people having very proficient systems for repairing DNA damage or detoxifying cigarette smoke.”
While this explanation may account for one mystery, another remains: What about the hundreds of thousands of people throughout the world who get lung cancer every year but never so much as took a drag?
Cigarette smoking accounts for between 80 and 90 percent of lung cancer cases in the West. The vast majority of lung cancers in high-income countries could be prevented if all smokers gave up their habit. While this is not likely to happen, just noting this fact is a crucial starting point for any discussion of lung cancer.
The only other common non-skin cancer for which the predominant cause has been identified is cervical cancer, which is caused by the human papilloma virus (HPV), and which can be almost totally prevented by vaccination.
There are striking differences in the epidemiologic, clinical, and biological characteristics of lung cancer in different parts of the world. In the U.S., where nearly 240,000 cases of lung cancer are diagnosed each year and where there are 130,000 deaths annually from the disease, lung cancer rates are roughly comparable in men and women, and are decreasing in both sexes. In contrast, in China, lung cancer rates are increasing in both sexes, but are roughly twice as high in males compared to females.
While most lung cancer in the West is associated with smoking, it has been estimated that, worldwide, 15 percent of men and 53 percent of women with lung cancer are never smokers.
Lung cancer in never smokers (LCINS), which tends to be of the adenocarcinoma cell type, is found predominantly in women and in East Asians. In contrast, the most common cell types found in lung cancer occurring in smokers are squamous cell and small cell types.
When considered as a distinct disease entity, lung cancer in never smokers ranked as the 7th most common cause of cancer death and the 11th or 12th most common incident type of cancer.
For at least 40 years, we have tried to identify environmental risk factors that might explain the occurrence of lung cancer in people who have never smoked. Potential factors that have been studied include passive smoking; residential exposure to radon gas; exposure to cooking fumes from coal and other fuels (particularly, in low-income countries); general air pollution; pre-existing lung disease; hormonal/reproductive factors (that might help account for the more frequent occurrence in women who never smoked); and inherited susceptibility. Other potential risk factors include asbestos and oncogenic viruses.
Although numerous studies have examined these factors, they appear to have relatively weak effects and are unlikely to account for the majority of cases (here, here, and here).
Environment vs. genetics
A 2012 review of the epidemiology of lung cancer in never smokers concluded that, “In any event, a large fraction of lung cancers occurring in never smokers cannot be definitively associated with established environmental risk factors, highlighting the need for additional epidemiologic research in this area.”
If strong environmental risk factors that account for lung cancer in never smokers are lacking, research examining molecular markers and driver mutations has produced novel and potentially clinically actionable results. Current evidence indicates that LCINS is a distinct disease with unique molecular and genetic characteristics.
Cancer results from the binding of mutagens to the DNA of critical genes, including tumor suppressor genes, proto-oncogenes, and genes involved in DNA repair. If the damage is not repaired, the transformed cell can go on to produce a clone, which can go on to develop into a full-blown (i.e., clinical) cancer. Tobacco smoke contains more than 60 mutagens that bind to, and chemically modify, DNA, leaving a distinctive mutational imprint on the lung cancer genome.
However, identifying the specific mutations that account for the potent carcinogenic effect of smoking on lung cancer has proved a challenge. The recent study from Albert Einstein College of Medicine used a new method to identify mutations in the progenitor cells that give rise to a cell type that is susceptible to lung cancer (basal lung epithelial cells).
The researchers examined normal lung tissue from 14 never smokers and 19 smokers. Only one of the former had lung cancer, whereas 13 of the latter had lung cancer. The number of mutations increased with age in both smokers and never smokers, and with increasing pack-years of smoking up to 23 pack-years among smokers, but with no further increase in heavy smokers. However, the one never smoker who developed lung cancer did not have more mutations in normal lung cells than the never smokers who were free of lung cancer.
Notably, the smokers who developed lung cancer did not have more mutations in their normal lung tissue than the smokers who did not develop lung cancer. Thus, it is not clear which mutations associated with smoking determine who goes on to develop lung cancer, or whether it is a matter of susceptibility factors or just bad luck.
Although smoking is a powerful risk factor for lung cancer, susceptibility genes are also known to play a role, in lung cancer as well as in cancer generally. This is apparent from the fact that most smokers never develop lung cancer.
Why are Asians more vulnerable?
It’s been long recognized that the pattern of lung cancer in Asia differs from that seen in the West. Smoking rates have been much lower in Asian women compared to Asian men, and women tend to develop the adenocarcinoma cell type, which occurs in the periphery of the lung, as opposed to squamous cell and small cell lung cancer, which occur in the main bronchi.
In the early 2000s it was noted that the response to treatment with epidermal growth factor receptor tyrosine kinase (EGFR-TK) inhibitors, such as gefitinib and erlotinib, among patients with non-small cell lung cancer (NSCLC) was markedly more effective in never smokers than in smokers. The benefit of treatment included statistically significant increased response rates, longer interval to progression, and/or longer median overall survival. This improved clinical response was most evident in patients from East Asia, in women, and in cases with the adenocarcinoma cell type.
Thus, lung cancer in smokers and that occurring in never smokers, particularly in East Asian women, appear to present two contrasting facets of lung cancer. In the first case, a potent carcinogen has been identified, but the precise way in which it causes cancer is unclear. In the second instance, a driver mutation which leads to cancer has been identified in a large proportion of cases, but evidence for environmental triggers is either weak or lacking. (A driver mutation is a genetic alteration that is responsible for both the initiation and progression of cancer).
Other research has identified a number of striking differences in genomic signatures and driver mutations between lung cancers occurring in smokers versus in those who have never smoked. For example, mutations in the tumor suppressor gene TP53 are more common in lung cancers in smokers than in LCINS. In addition, mutations in the KRAS oncogene are also common in lung cancers occurring in smokers but are rare in LCINS (43% vs. 0%). Conversely, EGFR mutations are common in LCINS but rare in lung cancer occurring in smokers (in one large study: 54% vs. 16%).
In addition, next generation sequencing studies indicate that the total number of mutations involving genes in protein coding regions was significantly higher in smokers than in never smokers (median 209 vs. 18). This represents a 90 percent lower incidence of mutations in never smokers.
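A quick check of that figure, using only the medians reported above: (209 − 18) / 209 ≈ 0.91, or roughly a 90 percent lower median mutation count in never smokers.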
The smaller number of genetic alterations identified in lung cancer in never smokers suggests that the majority, if not all, may play a role in the carcinogenic process. For this reason, it has been suggested that lung cancer in never smokers may provide “a relatively enriched and easily identified set of oncogenic drivers for lung cancer.”
Two researchers involved in the treatment of LCINS concluded their review as follows: “With the advances in sequencing technology and decreasing costs it is possible that, in the near future, advanced-stage LCINS may be primarily treated with molecularly targeted therapy, and it would be possible to achieve prolonged periods of disease control similar to the treatment of chronic myeloid leukemia (CML) and gastrointestinal stromal tumor (GIST).”
In spite of these advances, it must be emphasized that the landscape of mutations in lung cancer is complex, and there is a tendency for these cancers to develop resistance to the first-line targeted therapy. For this reason, intensive work is focused on new targeted treatments, combinations of several agents, and use of immunotherapies in addition.
Takeaways from new research on lung cancer in smokers and never smokers
First, epidemiologic studies investigating low-level, hard-to-measure, or subtle exposures, such as environmental tobacco smoke, radon exposure, and asbestos should focus on validated lifetime non-smokers, since smoking is such a powerful risk factor for lung cancer. (The risk of lung cancer posed by smoking is much stronger than that posed by asbestos).
Because so little is known about the causes of LCINS, there may be a tendency to overstate the importance of associations with potential risk factors that have been studied, rather than acknowledge that the findings of these studies are unlikely to account for a large proportion of LCINS.
In regard to passive smoking, a French study that examined major mutations associated with lung cancer in never smokers and smokers found no clear association between passive smoke exposure and a “smoker-like mutation profile” in lifelong never-smokers with lung cancer. They concluded that, “Passive smoking alone appeared to be insufficient to determine a somatic profile in lung cancer.”
Second, characterizing the common and divergent mechanisms of malignant transformation in lung cancer occurring among smokers and that in never smokers could contribute to a better understanding of the genomic changes underlying malignant transformation and progression. As one group of researchers wrote, “The mutually exclusive nature of certain mutations is a strong argument in favor of separate genetic paths to cancer for ever smokers and never smokers.”
Third, the difficulty of identifying major causal factors in LCINS reminds us that, in spite of fifty years of epidemiologic research, we still have not identified major causal factors (exposures) for many common cancers, factors which might lend themselves to prevention. This is true for colorectal cancer, breast cancer, pancreatic cancer, prostate cancer, brain cancer, leukemia, and others.
This, in turn, underscores how difficult it is to pinpoint external causes of cancers that in most cases take decades to develop. Smoking as a cause of lung cancer and human papillomavirus as a cause of cervical cancer are exceptions to be noted and appreciated.
That said, we are seeing that identifying driver mutations that give rise to a particular cancer can lead to the development of effective targeted therapies that can greatly extend survival. These therapies represent long-sought, dramatic progress in treating serious cancers. This progress is independent of identifying the causal factor responsible for the cancer.
Neonicotinoids, the world’s most popular class of insecticides, have been making headlines for the last decade due to concerns that they negatively impact bees. Now, a new debate has emerged in the fight over their use, pitting university entomologists against agricultural economists, the EPA against the USDA, and pesticide manufacturers against environmental activists. Farmers, as usual, are caught in the middle, trying to figure out how the conflicting narratives will impact them.
The question at hand: Do neonicotinoid seed treatments increase yields? If not, as some claim, reducing or eliminating their use in order to ease a potential stressor on bee colony health could be an easier decision for growers.
Nearly all corn and a majority of soybean seeds sold in the United States are coated with neonicotinoids, or “neonics”. Their main purpose is to fend off below-ground pests while the seeds are germinating, with the hope that the plant also absorbs some of the insecticide and expresses some of its pest-deterring effects as it’s growing. Neonic seed coatings are often preferred to spraying and the use of other insecticides because they’re effective at low concentrations and less toxic to mammals.
The debate erupted in May of 2017, when Christian Krupke, an entomologist at Purdue University, and colleagues published a study comparing cornfields planted with untreated seeds to fields with neonic-coated seeds at three locations in Indiana over three growing seasons. They found no evidence of yield benefits from the seed treatments. Although the study was one of the first to show ineffectiveness in corn, contradicting previous research, the paper’s authors write that their study, along with studies on other crops, “suggest that the current use levels of insecticidal seed treatments in North American row crops are likely to far exceed the demonstrable need.”
Critics of the study, many tied to the agricultural industry, point to a 2014 report by AgInfomatics, an agricultural consulting firm, which reviewed over 1,500 field studies and found yield benefits for all crops studied, ranging from 3.6 percent for soybeans to 71.3 percent for potatoes (corn clocked in around 17 percent). They also surveyed farmers and found other financial benefits, such as not needing to spend the time and money that goes into scouting fields and spraying insecticides.
“The simple answer is farmers would not continue to buy them if they did not produce value,” explained Peter Nowak, a University of Wisconsin environmental studies professor associated with AgInfomatics. Nowak sees insecticidal seed treatments as part of an emerging technological trend in agriculture, in which the seed and its treatments are the future of row crop production.
But John Tooker, a Penn State University entomologist who studies neonicotinoids, thinks the AgInfomatics study should be taken with a grain of salt because it was funded by pesticide manufacturers and was not published in a peer-reviewed journal.
“Time and time again, research in various sectors (biomedical, nutritional science, agricultural science, etc.) has shown that reports funded by the industry tend to support the industry perspective,” he told The Progressive Farmer. “This report certainly does that, and it is good to recognize that potential lack of impartiality.”
In response, Nowak said he’s tired of the “cheap shots” taken at the AgInfomatics report.
“Show me something more robust and valid — there is none,” he said. “We have not (over) generalized from a single study as has been the case with the critics. Instead, we pointed out important nuances associated with neonicotinoid efficacy found across hundreds of studies.”
Researchers agree that neonicotinoid effectiveness depends on the crop and where it’s grown. For example, recent research has shown that neonic seed treatments have yield benefits for soybeans in the South, but convincing evidence to support their widespread use in northern states, where most US soybeans are grown, is harder to come by. This is likely because the warmer southern climate is more hospitable to early-season soil pests.
Krupke and Tooker are part of a group of entomologists in the Midwest that argues soybean seed treatments are largely unnecessary in their region. They say the plants are able to withstand early season pest pressure without inflicting economic losses. By the time more serious pests come along, such as the soybean aphid, the insecticidal effects of seed treatments have already worn off. Instead, they recommend an integrated approach that includes “rotating crops, conserving natural enemies, using soybean varieties with resistance to pests (soybean aphid) or disease (bean pod mottle virus), and scouting and applying insecticides at established thresholds.”
However, there are certain high-risk situations, such as food-grade soybeans or fields transitioning from pasture to soybeans, in which entomologists agree that treatments can be useful.
“I don’t recommend [neonicotinoid seed treatments], particularly for soybean, unless there is a demonstrated need,” said Iowa State entomologist Erin Hodgson.
A 2014 review by the EPA agreed, concluding that the seed treatments “provide negligible overall benefits to soybean production in most situations.” But USDA chief economist Robert Johansson wrote a stern letter to the EPA in response, stating that the USDA disagreed with the assessment, and criticizing the agency for adding “an additional and unnecessary burden” on farmers.
Farmers weigh in
The debate over neonicotinoid effectiveness highlights some of the unique challenges farmers face, and the difficult decision-making process inherent in the low profit margin, high-stakes world of modern agriculture.
Terry Daynard, an Ontario grain farmer and former University of Guelph crop science professor, sees neonic seed treatments as a low-cost insurance policy against a small but potentially devastating risk.
“I once lost several thousand dollars worth of corn when an unexpected soil insect pest destroyed much of the center of a field,” he said. “That’s the time when I decided to purchase all neonic-treated seed in the future.”
Maria Trainer, Managing Director of Science and Regulatory Affairs for Chemistry at CropLife Canada, which represents plant biotechnology companies, agrees.
“The unpredictable nature of soil insects is why growers have adopted treated seeds so widely,” she said. “Many of the pests that they protect against are found in the soil and cannot be treated with a spray because by the time a farmer identifies the pest problem, the crop is lost.”
Van Larson, an independent agronomist at Agronomy Services Plus in southern Minnesota, is skeptical of the entomologists’ research showing neonics don’t increase soybean yields, partly due to past experiences. In 2001, university researchers told him not to spray soybeans because it could harm beneficial insects. Then the aphids came.
“It was massive amounts of money that we lost listening to that suggestion,” he said.
Krupke, the Purdue entomologist, argues that many farmers aren’t aware that they’re buying neonic-treated seeds and are paying extra for it.
“Even if they do, they often have no other choice,” he said. “The market provides no easy option to buy untreated seed, especially in corn, so there is no basis to price compare.”
Farmers, however, don’t seem to be clamoring for untreated seeds at the moment.
“The reality is that farmers want this product,” Kevin Cavanaugh, director of research for Beck’s Hybrids, a seed company, told the Indiana Prairie Farmer. “I’ve yet to have the first farmer tell me that he doesn’t want neonic seed treatment on his seed.”
Paul McDivitt is a science and environmental writer based in St. Paul, Minnesota. He has a Master’s in environmental journalism from the University of Colorado. Follow him on X @PaulMcDivitt
A version of this article previously appeared on the GLP on October 3, 2017.
We don’t always know why anti-depressants and obesity drugs work, but that shouldn’t discourage patients from taking medications that can improve their lives. Gene-editing could boost animal welfare and expand access to nutritious food around the world. Is it therefore immoral to oppose the use of CRISPR-edited animals in agriculture? A slew of recent headlines has claimed that America’s water supply contains dangerous chemicals. There is no science behind that alarming speculation.
Join hosts Dr. Liza Dunn and GLP contributor Cameron English on episode 238 of Science Facts and Fallacies as they break down these latest news stories:
Anti-depressants and recently approved weight-loss drugs like Ozempic have something in common: they seem to work very well for many patients, who experience few negative side effects and manage to take control of their mental health or weight. This message is often obscured because we don’t always know why these drugs work for some people but not others. That uncertainty drives fearful media coverage and engenders skepticism among the public. Can the available evidence help clarify how these medications work?
As developing nations grow wealthier, demand for meat and other animal products will increase. Using gene editing technologies like CRISPR, scientists can breed animals that are disease resistant, and farmers who lose fewer cows, pigs and chickens to costly diseases can produce much more food than was previously possible. These observations prompt an intriguing question: is it immoral to oppose animal gene editing if it gives more people access to a sustainable food supply? One scientist says yes. Let’s take a closer look at his argument.
The US has widespread access to safe, clean drinking water—which is a key “measure of progress against poverty, disease, and death,” according to the Centers for Disease Control and Prevention (CDC). Unfortunately, recent news coverage has misused research conducted by the Environmental Protection Agency (EPA) to allege that our water supply contains harmful chemicals. There is zero evidence to justify such an alarming claim.
Dr. Liza Dunn is a medical toxicologist and the medical affairs lead at Bayer Crop Science. Follow her on X @DrLizaMD
The African continent has been home to genetically modified (GM) crops for more than 26 years, beginning in 1996 when insect-resistant GM maize was commercialized in South Africa. Since then, several GM crops have been developed on the continent, including cotton, cowpea, soybean, rice, and Argentine canola (Brassica napus). These crops have been modified to increase agricultural productivity by delivering desirable traits, like pest resistance, herbicide tolerance, and improved nutritional qualities. Such crops are critical for food security, as 7 of the 10 countries most vulnerable to climate change are on the African continent. Drought is a significant challenge for agriculture, and 95% of African agriculture is rainfed, meaning that to improve food and nutrition security for a fast-growing population — expected to grow to 2.5 billion from 1.4 billion people by 2050 — the continent needs to adapt agriculture to climate change while also increasing food production.
Despite the challenges posed by climate change and increasing food demand, only seven countries on the African continent grow GM crops (Figure 2). Two other countries (Egypt and Burkina Faso) historically commercialized GM crops but later imposed bans on the technology, and five other countries have had GM field trials for many years, but have yet to bring a single crop to market (Figure 1).
The lack of GM crop commercialization, even after years of field trials, comes at the expense of millions of dollars of investment in research and the continued impact of the challenges that available GM crops seek to address. In Uganda, losses due to banana bacterial wilt were estimated at $34 million in 2005 and $76 million in 2006. Cassava mosaic disease causes up to 70% yield losses, while cassava brown streak disease causes up to 100% yield losses in susceptible varieties. Scientists in Uganda have developed crop varieties resistant to all these diseases, but farmers have not benefited because even after 15 years of confined field trials, the country has not commercialized any GM crops.
So why have some countries, like Uganda, not approved GM crops while neighboring countries have? To understand why, we compare media coverage, regulatory frameworks, and public opinion related to GM crops across three categories of countries: those with at least one commercialized GM crop, those with confined field trials of GM crops but none commercialized, and those with GM crops that were commercialized and later banned (Figure 2). We also interviewed stakeholders in Nigeria, Uganda, Ghana, Malawi, and representatives from international advocacy groups that work in Africa, and their opinions are included throughout this article. Our work highlights the importance of passing biosafety laws; raising awareness among farmers, especially farmer-based organizations; running consistent and accurate programming in outlets people trust; and making the economic case for GM crops. In addition, efforts like matching R&D funding — and advocating for a science-based branch of government or semi-autonomous agency to approve commercialization of GM crops (rather than a minister) — could help increase GM crop commercialization, as several studies and advocacy groups we spoke with attested.
Biosafety laws are generally a prerequisite for GM crop field trials or commercialization
Biosafety laws, which define legal and institutional frameworks for governing GM organisms, are generally a prerequisite for governments to approve GM crops for commercial cultivation. Of the nine African countries that have or previously had a commercialized GM crop (Figure 2), seven had a biosafety law in place before approving their first GM crop (excluding South Africa and Egypt).
However, many countries with biosafety laws have not approved a GM crop for commercialization, or even conducted confined field trials of GM crops, indicating that biosafety frameworks alone are not sufficient to spur GM approval and adoption (Figure 3). And while having a biosafety law is generally a prerequisite for the commercialization of a first GM crop, it doesn’t mean that the time to the first commercialization will be short. Some countries have biosafety laws in place and have had many years of GM crop field trials, but still have not commercialized a GM crop — in Cameroon, Mozambique, and Ghana, 20, 16, and 11 years, respectively, have passed since they implemented biosafety laws (Appendix: Figure 6). In addition, nine more countries have biosafety laws but still haven’t had confined field trials of GM crops.
Donors and development partners should not interpret the absence of a biosafety law as an inability to conduct research when they are considering which projects to fund. Both Uganda and Tanzania have been conducting GM crop research in confined field trials for years (Figure 1) without biosafety laws, instead using existing legal provisions for the approval of research. Further, the presence of research products that solve pressing agricultural challenges may contribute to the faster development of biosafety laws and eventual commercialization and uptake by farmers — as observed with Bt cowpea in Nigeria (Bt crops contain a gene from the bacterium Bacillus thuringiensis that produces a protein toxic to some insects when eaten). In addition, two countries that commercialized a GM crop did not have a biosafety law in place first: South Africa commercialized a GM crop in 1996 before putting a biosafety law in place in 1997, and Egypt commercialized a GM crop by ministerial decree in 2008 but later banned the growing of GM crops in 2012, citing the absence of a biosafety law as one of the reasons for the ban.
Development of clear, science-based biosafety laws is clearly important to commercializing GM crops, but so too are other regulations, field trials, government capacity to review GM traits, and a wide range of factors from seed prices to extension workers.
All the stakeholders we interviewed believed that having a biosafety law in place was a prerequisite for GM crop commercialization, but that even in the presence of biosafety laws, hindrances, like strict liability, must also be addressed before commercialization can take place. Strict liability means that scientists can be held accountable for impacts associated with the GM crops they develop, even when they were only involved in the research stage of the crop. So far there has been no legal case against GM crop researchers based on strict liability clauses, but stakeholders indicate that the overall feeling is that it is not worth the risk. The stakeholders also stressed that advocacy must be sustained even after a biosafety law is in place to build public support for research in GM crops, and later to enable successful adoption once the GM crops are commercialized.
Looking at the nature of the biosafety law, designating the National Biosafety Management Agency as the competent authority in Nigeria may have been vital to success, as opposed to having a government ministry handle approval. This is because the agency is led by experienced scientists, whereas ministry heads are political leaders who act based more on public opinion than on scientists’ input. The same strategy was adopted by Kenya when it transferred the approval authority from the government ministry to a semi-autonomous agency, which may have led to the successful commercialization of Bt cotton.
The case of Malawi is different, as its approvals are handled by the Ministry of Environment, which is advised by a technical team. So far, one GM crop has been commercialized through this system and a second is on its way to commercialization. This approach is unreliable, however, as it is highly dependent on the political will of the minister, which could easily change with the appointment of an anti-GM minister. Still, the minister’s technical team can use empirical evidence to inform the minister’s decision-making.
Internationally, the stakeholders interviewed believed that trade partners, like the European Union, wield a lot of influence over the political decisions in Africa, and are partly responsible for the low number of commercialized GM crops. To address this, African representatives can participate in international forums, like the Convention on Biological Diversity, to influence the opinions and political positions of such trade partners.
Awareness of GM crops is low in countries that haven’t commercialized them
Public and particularly farmer awareness and support of GM crops is typically considered to be important for ultimately driving government approval and widespread farmer adoption. Among the groups of respondents surveyed, farmers were the least aware of GM crops; however, farmers with basic knowledge about GM crops were less likely to agree with statements about their risks — and more likely to agree with some of their benefits — than farmers without such knowledge, supporting the idea that education can shift perceptions. Farmer knowledge about GM crops, and belief that the benefits outweigh the risks, is certainly important for GM crop adoption, and may be for approval as well.
Levels of awareness in countries that haven’t commercialized GM crops are often low, according to our analysis of public opinion surveys conducted in countries with GM crop trials, biosafety laws, or approvals (Appendix: Figure 7). In surveys of farmers, awareness of GM crops increased in Ghana after the 2012 passage of a biosafety law, and was lowest in Tanzania, a country that has still not passed a biosafety law. Knowledge of GM crops was positively correlated with approval, as was knowledge of GM policies. However, awareness among farmers in Nigeria — after a biosafety law was passed and a first GM crop was commercialized — was similar to awareness in Ghana right after the country passed a biosafety law. A country’s progress on passage of a biosafety law, confined field trials, or commercialization of a GM crop could contribute to farmer awareness of GM crops through increased media coverage of the topics or increased outreach.
Awareness is especially low among farmers, a group that is particularly important for GM crop adoption, and potentially for policy progress on passage of a biosafety law and eventual commercialization. Among studies that included multiple categories of respondents, farmers consistently had the lowest level of awareness, compared to other groups. In Tanzania in 2015, farmers had by far the lowest awareness of GM crops (24%), while regulatory authorities had the highest (88%), and academics and media fell in the middle around 50%. Similarly, consumers in Kenya in 2011 had much higher awareness of GM crops than farmers.
Low farmer awareness may be related to education or income. One study of farmers found that both more years of education and use of a credible seed source were correlated with higher awareness of GM crops. In groups more highly educated than farmers, awareness is usually higher. Studies of university students in Ghana and Ethiopia show high levels of awareness and optimism about GM crops — 95% aware and 72% optimistic, respectively. Studies in Ghana of households with formal education, higher-income urban populations, and supermarket shoppers — who are relatively high-income — all found that more than 90% of respondents had heard or read something about GM foods.
In addition, even though many of the surveys summarized in Figure 7 (Appendix) indicate overall positive attitudes toward GM crops or food, even supportive farmers and consumers are concerned about a variety of risks. For example, one study showed that of respondents who had heard of or read about GM crops, 75% agreed that GM crops provide improved crop varieties, but 69% agreed that a risk of GM crops is potential market failure. In another study, 80% of respondents agreed that GM crops can improve food security, but 71% agreed that an associated risk is the high cost of seeds. These results indicate that there is substantial overlap between respondents who agree with both benefits and risks.
Some of these concerns reflect low levels of understanding. For example, in an Eswatini supermarket study, researchers started by asking respondents whether they were aware of GMOs, then asked additional questions to determine which of the people who said they were aware of GMOs had objective knowledge — they determined that 32% had objective knowledge of GMOs and 68% did not. Greater and improved media coverage of GM could help improve awareness among farmers and other groups, shift perceptions of risks and benefits, and ultimately build a more science-based regulatory framework for GM crops.
Media coverage of GM crops is similar across six countries regardless of commercialization status
Media coverage of GM crops influences policy-making and the eventual adoption of GM crops, and studies have shown that media is a key tool for advocacy. We examined how the focus of news articles about GM crops varied across countries that have and haven’t commercialized GM crops (Figure 5). We analyzed the number of times keywords appeared from three different categories: economics, food security, and climate change. Figure 5 below shows the percentage of the keywords identified that fall into each of the three categories. Though media coverage leaned heavily toward economics-related keywords, like trade and price, there was no large difference in focus across groups of countries, suggesting that topical focus of media is not an important determinant of GM approval.
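As a rough illustration of the kind of tally behind Figure 5, the sketch below counts keyword hits by category across a folder of article texts and converts the totals to percentage shares. It is a minimal sketch under stated assumptions: the folder name, plain-text file layout, and the specific keyword lists are illustrative placeholders, not the keywords or analysis pipeline actually used in this study.

    import re
    from pathlib import Path

    # Illustrative keyword lists only -- the study's actual keywords are not reproduced here.
    CATEGORIES = {
        "economics": ["trade", "price", "export", "market", "income"],
        "food security": ["hunger", "food security", "nutrition", "yield"],
        "climate change": ["climate", "drought", "rainfall", "adaptation"],
    }

    def count_keywords(text):
        """Count occurrences of each category's keywords in one article."""
        text = text.lower()
        return {
            category: sum(len(re.findall(r"\b" + re.escape(kw) + r"\b", text))
                          for kw in keywords)
            for category, keywords in CATEGORIES.items()
        }

    def category_shares(folder="articles"):
        """Sum keyword hits over all plain-text articles in a folder and return percentage shares."""
        totals = {category: 0 for category in CATEGORIES}
        for path in Path(folder).glob("*.txt"):
            for category, n in count_keywords(path.read_text(encoding="utf-8")).items():
                totals[category] += n
        grand_total = sum(totals.values()) or 1  # guard against an empty folder
        return {category: 100 * n / grand_total for category, n in totals.items()}

    if __name__ == "__main__":
        for category, share in category_shares().items():
            print(f"{category}: {share:.1f}% of keyword hits")

A tally of this kind yields the category percentages plotted in Figure 5; the conclusion drawn in the text depends only on how those shares compare across groups of countries, not on the particular keyword lists assumed in the sketch.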
The focus of media coverage on economics across all the countries studied suggests that governments, the public, and farmers all have economic growth and poverty reduction as a high priority. This is not to suggest that African countries don’t prioritize or take action on climate change. Kenya, Zambia, and Ethiopia generate more than 50% of their power from renewables — compared with just 20% in the United States — despite the continent’s minuscule historical contributions to global greenhouse gas emissions and need to increase power generation to reduce poverty.
Nevertheless, other elements of media coverage may play a role, like sentiment (e.g., positive, negative, or neutral), the direction of communication (top-down vs bottom-up), the types of voices included in media, and frequency and timing of coverage.
The stakeholders interviewed for this study observed that GM advocacy through the media (for example, opinion articles) could improve by coordination between different advocacy groups, and by consistent and sustained messaging. Despite having several advocacy groups within each country, media advocacy is often not coordinated with a common message about the benefits of GM crops. Current funding models for media advocacy support training workshops for various groups, including journalists, farmers, and policymakers. But when media publications about GM are concentrated following such workshops, it means that there is a long wait period between one campaign and the next, which makes it easy for the public to forget the message. In addition to coordinating messaging and campaigns between advocacy groups, funding for sustained communication campaigns is preferable as opposed to having reactive or isolated communication efforts. Sustained communication activity could include such activities as a TV show, radio talk show, or newspaper column that runs weekly for six months or more. International advocacy groups have tried to achieve sustained activity with constant blogs and weekly newsletters, but this approach is lacking for many African-based groups.
In addition to coordination of messaging between different advocacy groups, coordination of technical terminology and language is crucial. Though our analysis for this study focused on English-language media, in many African countries less than half the population speaks English, meaning that communication via radio and TV stations in local languages is crucial. For radio and TV communication about GM crops to be effective in many different languages and localities, communicators must agree on terminology to use in each language for GMOs, genome editing, etc. Coordination of GM crop terminology across languages is another opportunity for advocacy groups to work together.
Another observation from one of the stakeholders was that African media are alienated by references to Western science and by examples that do not resonate with the African context. It is important to cultivate a culture of quoting African scientists and citing African examples when communicating about GM crops, to make the message more relevant to African readers.
Lastly, some stakeholders recommended that communication flow both bottom-up and top-down, so that not only scientists but also farmers and the general public are communicating. Feedback from farmers and the general public could help ensure that scientists and advocates directly address the concerns of end-users, which could help create GM crops that solve pressing agricultural challenges and generate enthusiasm and public support for GM crop research. One example of such an approach is a community-based meeting in which local people interact directly with the researchers working on GM crops. As the case of Bt cowpea in Nigeria shows, relevant GM crops that solve current agricultural challenges can create a push for their commercialization.
Some studies examined the factors influencing respondents' perceptions of GM crops. One study found that farmers who got information from extension officers, agro-input dealers, and friends or relatives had a more accurate understanding of GM crops, while farmers who got information from mass media, particularly radio and television, had a less accurate understanding. And no wonder: in a study of several groups, a whopping 77% of media respondents agreed that GM food can damage people's health, compared with a substantially lower 51–54% of regulatory authorities, academics, and farmers.
Given that accurate understanding of GM crops correlated with getting information from sources outside mass media, the authors recommend that the National Biosafety Authority strengthen education on GM crops through the media, especially radio and television, with particular outreach to Farmer Based Organizations. Media coverage is powerful: farmers who got information from mass media were more confident in their knowledge of GM crops than those who got information from other sources. Surprisingly, membership in a farmer-based organization was correlated with lower awareness of GM crops, supporting the recommendation to conduct outreach specifically to these groups.
Provide creative forms of R&D funding
Many stakeholders identified low levels of government investment in GM crop research as one of the reasons why governments in some countries were reluctant to pass biosafety laws and approve GM crops. The gap created by limited local investment in GM research is filled by donor funds, which has fueled the perception among a section of the public and anti-GM groups that GM crops are a foreign agenda.
To encourage more local investment in GM research, which could increase political will, research funders should adopt a matching fund model, in which governments are required to commit a set amount of resources to a research project that is then matched by donor funds. Matching fund models have already been adopted for other development projects in infrastructure (construction of the Kampala Northern Bypass) and agriculture (the Chase Hunger and Poverty project under the Development Initiative for Northern Uganda program), so the concept would not be new to governments if it were proposed for GM crop research. With a significant amount of public funds invested in GM research and in advocacy for the technology, governments may be more motivated to reap the benefits of that investment by putting in place conducive conditions for the commercialization of GM crops.
Next steps
Media coverage of GM crops across all the countries we studied focused more on economics than on food security or climate change. Both our research on existing biosafety laws and our stakeholder interviews indicate that passing biosafety laws is an important precondition for commercializing a GM crop. In addition, our stakeholder interviews suggest that having a science-based branch of government or a semi-autonomous agency in charge of GM crop approval makes approval more likely. Public opinion surveys generally indicate positive perceptions of GM crops, both in countries that have commercialized them and in those that have not, suggesting that public opinion is not a central barrier to commercialization; however, environmental impacts, food safety, and human health are common concerns across countries, suggesting that public engagement should focus on these topics.
Being able to define the economic benefits of a technology, in this case agricultural biotechnology, is key to securing political will and public support in developing countries, including those in Africa. Discussing the benefits of biotechnology within the framing of the bioeconomy has gained traction in the United States, China, India, and the European Union, all of which have published bioeconomy strategies.
Methods
We searched AllAfrica.com to identify the websites from which it aggregates news articles for the countries we analyzed. We supplemented this list with websites used by Cision Media Insights for the same countries, as listed in the supplemental section of the publication on the state of the GMO debate by Sarah Evanega and colleagues.
We then searched those news websites using keywords relevant to GM crops. We identified these keywords by reading a sample of news articles about GM crops and noting the various technical and non-technical words people use to refer to genetically modified crops (Appendix: Media analysis search terms). We then web-scraped the title, link, date of publication, and text of the articles into .csv files using a Google Chrome web scraper extension. After compiling the scraped text, we used the scraped article title to retrieve the full text of any article whose body the scraping tool had missed.
The articles were then organized by year of publication and screened to retain only those published in 2016–2019. We chose these years because commercialization of GM crops in the countries we studied (Kenya, Nigeria, Malawi) came after 2019, so we looked at media coverage in the years leading up to commercialization. For purposes of comparison, we analyzed the same years for the countries that have not yet commercialized any GM crop (Uganda and Tanzania) and for Egypt, which commercialized GM crops and later banned their cultivation. Through this process, we gathered a final sample of 1,004 articles for further analysis.
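The compilation and screening steps lend themselves to a few lines of data-wrangling code. The sketch below is illustrative only: it assumes the scraped .csv files sit in a scraped/ folder with title, link, date, and text columns, which are our placeholder names rather than the exact schema used in this study.

```python
import glob

import pandas as pd

# Load every scraped .csv (one per news website) into a single table.
# NOTE: "scraped/*.csv" and the column names below are assumed, not the
# actual files used in the study.
frames = [pd.read_csv(path) for path in glob.glob("scraped/*.csv")]
articles = pd.concat(frames, ignore_index=True)

# Flag articles whose body the scraper missed, so their full text can be
# retrieved later using the scraped title.
missing_body = articles["text"].isna() | articles["text"].str.strip().eq("")
print(f"{missing_body.sum()} articles need full text retrieved by title")

# Keep only articles published in 2016-2019, the years leading up to
# commercialization in the countries studied.
articles["year"] = pd.to_datetime(articles["date"], errors="coerce").dt.year
articles = articles[articles["year"].between(2016, 2019)]
```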
We searched each article for three categories of keywords: economics, food security, and climate change. To choose the keywords in each category, we read a sub-sample of the articles (14 of the 1,004) and identified terms associated with the benefits of GM crops. We then screened the resulting list to remove ambiguous words (i.e., words that could be associated with more than one category). This process produced a list of keywords for each category (Appendix: Media analysis keywords), and we counted how many times each article featured the keywords from each category. To ensure that differences in the number of keywords between categories did not skew the observed percentage focus, we listed the same number of keywords for all three categories.
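Continuing the sketch above, the counting step might look like the following. The keyword lists here are illustrative placeholders with the same number of terms per category, not the actual lists from the Appendix.

```python
import re

# Illustrative keyword lists only; the study's full lists are in the Appendix.
# Each category deliberately contains the same number of keywords.
keywords = {
    "economics": ["trade", "price", "income", "market", "export"],
    "food_security": ["hunger", "yield", "nutrition", "harvest", "famine"],
    "climate_change": ["drought", "climate", "emissions", "rainfall", "warming"],
}

def absolute_scores(text: str) -> dict:
    """Count how many times the keywords of each category appear in one article."""
    words = re.findall(r"[a-z']+", text.lower())
    return {
        category: sum(words.count(term) for term in terms)
        for category, terms in keywords.items()
    }

# Add one column of absolute scores per category to the articles table.
score_columns = articles["text"].fillna("").apply(absolute_scores).apply(pd.Series)
articles = pd.concat([articles, score_columns], axis=1)
```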
After counting the keywords in each category (the absolute score), we divided each count by the total number of words in the article to normalize for article length (the relative score). We then calculated the average relative score for each category across all articles analyzed in each year. Taking the sum of the three average relative scores for a given year as 100%, we calculated the percentage focus of each keyword category in that year and plotted these values as a stacked bar graph.
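A matching sketch of the normalization and percentage-focus calculation, again assuming the columns created in the previous snippets, follows.

```python
import matplotlib.pyplot as plt

categories = ["economics", "food_security", "climate_change"]

# Relative score: keyword count divided by the article's total word count.
articles["n_words"] = articles["text"].fillna("").str.split().str.len()
articles = articles[articles["n_words"] > 0].copy()  # drop empty articles
for category in categories:
    articles[category + "_rel"] = articles[category] / articles["n_words"]

# Average relative score per category per year, rescaled so the three
# categories sum to 100% in each year (the "percentage focus").
rel_columns = [c + "_rel" for c in categories]
yearly = articles.groupby("year")[rel_columns].mean()
percentage_focus = yearly.div(yearly.sum(axis=1), axis=0) * 100

# Stacked bar graph of percentage focus by year.
percentage_focus.plot(kind="bar", stacked=True, ylabel="Percentage focus")
plt.show()
```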
We identified stakeholders from Nigeria, Uganda, Ghana, and Malawi, as well as representatives of international advocacy groups that work in Africa.
Ian Peter Busuulwa is the Coordinator of the Bioeconomy Coalition of Africa – a platform of stakeholders working together to grow the bioeconomy in Africa.
Emma Kovak is a Food and Agriculture Analyst at Breakthrough. Find Emma on X @EmmaKovak
Guido Núñez-Mujica is Senior Data Scientist at Breakthrough. Find Guido on X @osguido
A version of this article was originally posted at the Breakthrough Institute and is reposted here with permission. Any reposting should credit the GLP and original article. Find the Breakthrough Institute on X @TheBTI