Children aren’t biologically programmed to be picky eaters, so why do we feed them sugary and ‘ultra-processed’ foods?

In countries such as the U.S. and Canada, the term “children’s food” conjures images of milk, sugary cereals, yogurt tubes, and chicken fingers. Advertisers, restaurants, and media market these items as kid-friendly fare that’s convenient, palatable, fun, and supposedly “healthier” than adult foods.

The rationale for feeding children these foods is that they need extra nutrients and that, in some cultures, kids are thought to be “picky eaters.” But how much of this is rooted in biological reality, and how much is a product of cultural notions?

In my new book, Small Bites: Biocultural Dimensions of Children’s Food and Nutrition, I explore kids’ diets through an evolutionary lens and anthropological research in several countries. I sift through the differences between biological needs and social constructs, exploding myths about children’s food and eating. I demonstrate how the category of kids’ food is an invention of the modern food industry that began in the U.S. and is now pervasive around the world. In addition, I describe cross-cultural practices that may offer more nutritious, enjoyable, and equitable models for feeding children.


Do children require special diets?

Not long after vitamins were first discovered in the 1910s, people became gripped with “vitamania.” Food and drug producers, medical professionals, and some media outlets convinced many parents that their kids weren’t getting enough vitamins from their regular diets and needed supplements like cod liver oil and yeast cakes.

Youngsters were given a spoonful of cod liver oil for their health. Credit: Getty

Today the fear of children not getting adequate nutrients is encapsulated in new, ultraprocessed products. Enter “toddler milk” or “growing-up milk.” This powdered product is designed for children 1–3 years of age and is marketed to promote healthy brain growth because it contains DHA (docosahexaenoic acid, a type of omega-3 fat). In the U.S. from 2006 to 2015, the amount spent on advertising toddler milk saw a fourfold increase while sales multiplied 2.6 times.

But paradoxically, toddler milk may do more harm than good. It undermines breastfeeding for up to two years (a practice that is recommended by the World Health Organization), it’s expensive, and it contains added sugar, possibly distracting toddlers from eating nutritious foods. Plus, there is no evidence that toddler milks are more nutritious than regular milk or other healthy foods.

In the U.S., high demand and supply chain interruptions have caused a shortage in baby formula and “toddler milk,” a highly processed, sugary drink that health experts do not recommend.
Credit: Jim Watson via AFP and Getty Images

From conception to adolescence, children do require high-quality food to maintain their nutritional well-being. During the first year of life, babies triple their weight and increase their length by more than 50 percent. The next highest period of growth velocity occurs during adolescence. The brain grows even faster, and by 7 to 11 years of age, the brain has almost completed its volumetric growth.

Rapid growth requires fuel (energy) from protein, carbohydrates, and fat, as well as vitamins and minerals. Calcium, for example, is needed in abundance (relative to body weight) during childhood and adolescence. This is why milk, which is rich in calcium along with many other nutrients, is promoted for children, though it is not the only way to obtain calcium for optimal growth.

So, while children have special nutritional requirements that change with each stage of development, they do not need special foods. In fact, such foods may be hurting kids. But even if kids don’t nutritionally need their own menu, do they just naturally want certain foods like buttered noodles and cheese sticks?

Is picky eating biological or cultural?

From 6 to 12 months of age, when children are completely dependent on their caregivers, they don’t discriminate much in their eating. From 13 months to 6 years, however, they become quite discerning. The fear of new foods (or food neophobia) may be a built-in survival mechanism while they are discovering what is edible and inedible.

Picky eating is distinct from food neophobia. Picky eating includes the rejection of new foods but goes beyond that to the rejection of large categories of foods, even familiar ones at different times, based on characteristics like color or texture. Picky eaters usually eat inadequate amounts and types of foods. This behavior may continue into adolescence and even adulthood. I argue that though neophobia in early childhood is universal, picky eating is culturally constructed.

While no studies compare the prevalence of picky eating in childhood worldwide, there is some evidence to demonstrate that picky eating may be a “culture-bound syndrome” specific to American cultural norms, though certainly not limited to the United States.

In China, for example, finicky eating is a recent and variable phenomenon. Traditionally, children ate what their parents ate. The term for children’s food in China, ertong shipin, did not appear in the dictionary until 1979, and it was not until the rising affluence of the 1980s that children’s food became part of popular Chinese culture. Even so, parents in China report that picky eating is more common in urban and suburban areas than in rural places.

In the Muslim Quarter of Xi’an, China, a mother gives a child a lamb stew. Credit: Zhang Peng via LightRocket and Getty Images

In Nepal, I researched children’s food among families in urban Kathmandu and those living in a rural village in the Himalayas. I collected data from parents about the diets of their children under 5 years of age and found that children mostly ate the same food as their older family members. But in Kathmandu, unlike in rural villages, children were regularly fed commercial products such as Nestlé’s Cerelac (instant cereal) and sweet, packaged biscuits.

One place where picky eating is by and large not condoned is France. Children in France are expected to try new foods. They eat mostly what adults do and mostly like it too.

I studied elementary school lunch programs in Paris, interviewing school administrators, nutritionists, and parents, plus sampling lunches in a selection of schools. Children in France have very structured mealtimes: breakfast, lunch, goûter (afternoon snack), and supper. This structure is also reflected in school meals, where children all consume the same lunch, consisting of an entrée (usually a vegetable), a main course—meat/fish and vegetables, or a vegetarian dish—followed by cheese/yogurt/fruit and accompanied by bread and water. Meals are subsidized and geared to income, and menus indicate which foods are organic and/or locally procured.

France is quite serious about teaching children food culture. Each fall, the country holds la Semaine du Goût, during which schoolchildren spend the week visiting food artisans and cooking and tasting different foods from local regions to learn to appreciate French cuisine.

If studies show, then, that children aren’t biologically driven to be picky eaters, and they don’t require special meals for their nutritional well-being, why is the idea that children require their own category of food so pervasive?

The industrial food system and kids’ eating habits

Before food was industrialized, children’s food did not exist as a distinct category, apart from weaning foods like mashed carrots. Then the industrialization of food began around 1870 in the United States and intensified after 1945.

It was initiated when companies began patenting a process of milling that produced whiter, longer-lasting, and less nutritious flour. Subsequently, corporations like Coca-Cola and Kellogg’s began branding foods. Whole foods were simplified and became more processed through the addition of salt, sugar, fat, and chemical additives to extend their shelf lives and thus increase profit.

As families became smaller and the focus on children intensified in the 20th century during the “century of the child,” children became lucrative for the food industry. This is because older children had their own money to spend on food, and children of various ages increasingly began to influence their parents’ purchases. As a result of these and other factors, children and adolescents in the U.S. now get 67 percent of their calories from ultraprocessed foods such as frozen pizza, industrial bread, and candy.

One of the most common—and harmful—ingredients in children’s foods is sugar. Epidemiological evidence links high sugar consumption with numerous health issues, such as heart disease, obesity, and diabetes.


Children are particularly vulnerable to the sugar-saturated industrial food system because of their fondness for sweet tastes. Studies of babies indicate they consistently respond more positively to sweet foods, and children’s heightened preference for sweetness remains elevated throughout childhood before declining from mid-adolescence into adulthood.

In this study picky eaters were taller and heavier than their less discerning counterparts. Credit: Hsun-Chin Chao

There is likely an evolutionary advantage to preferring tastes that signal nontoxic foods such as fruits, especially at a stage in life when one is tasting many foods for the first time. In addition, sweeter foods are high in energy, so humans may have evolved cravings for sweet things that correspond to higher energy needs during growth and development.

While the heightened desire for sweetness during childhood may be biologically driven, people are not programmed to eat sugar in the quantities they often do today. Prior to its mass production beginning in the 19th century, sugar was not available or affordable for most people. Since the development of high-fructose corn syrup, sweeteners have become cheaper and more ubiquitous than ever.

The claim that high-fructose corn syrup is the causal factor in the obesity crisis can, however, be challenged. Credit: Bray GA, Nielsen SJ, Popkin BM

Two ultraprocessed and often sugary foods that have become sine qua non fare for children are breakfast cereals and snack foods. Kids can prepare and eat both relatively independently, since neither requires a stove. And young children can easily pack and open snack foods like yogurt tubes, fruit rollups, and grain bars, making them convenient meals on the go.

Companies market cereal and snack foods to kids using cartoon and TV or movie characters, a form of “eatertainment” that can help parents who may be struggling with work/home balance to prepare convenient foods for their picky eaters. In addition, advertisers endow these cereals and snacks with “health halos” because they are made from ingredients such as dairy, fruit, or whole grains, or are fortified with vitamins and minerals.

Despite these claims, studies have consistently shown that diets high in ultraprocessed foods contribute to obesity and cardiometabolic risk factors in children, plus increase the risk for cardiovascular diseases and cancers in adults.

There is a growing recognition that advertising targeting children is contributing to malnutrition in kids. As a result, Sweden and Quebec have banned all advertising to children, while the U.K. eliminated the advertising of unhealthy foods online and before 9 p.m. on television. But in the rest of Canada and in the United States, the monitoring of children’s food advertising is voluntary and subject to guidelines only.

Students eat lunch in a cafeteria in Lyon, France, where the city council announced in 2020 that it aimed to serve 100 percent organic food in schools. Credit: Jean-Philippe Ksiazek via AFP and Getty Images

Models of feeding kids in different cultures

Many nations prioritize feeding children in ways that not only attend to their nutritional needs but also consider food equity. This is usually accomplished through school meal programs. An estimated 388 million children in low-, middle-, and high-income countries worldwide receive school meals.

These programs are paramount for many low-income and working families who rely on the meals to reduce household labor, particularly for women, and to subsidize food costs. Their necessity was highlighted during the COVID-19 pandemic, when many schools closed and children’s nutritional well-being suffered greatly. However, the quality of these programs varies widely, and some have been criticized for their nutritional deficits and for stigmatizing children who receive subsidized meals or who cannot pay for their food.

Exemplary school meal programs—such as those in Brazil, Colombia, Finland, France, Italy, and Japan—provide affordable lunches for children that not only offer pleasurable and nutritious dining experiences but also teach children about national gastronomic culture and support local agriculture.

In Brazil, for example, school meals are entirely funded by the government, menus are developed by nutritionists, and schools must buy at least 30 percent of their produce from small-scale farms, preferably locally. In Finland, all children attending pre-primary to secondary education (around ages 6 to 18) are entitled to free school meals.

As these programs and numerous studies demonstrate, children’s food doesn’t need to be special or different from adult food. But it must be prioritized with special care in order to sustainably and healthily nourish children and future generations.

Tina Moffat is an associate professor and chair of the department of anthropology at McMaster University in Canada who studies the social and cultural determinants of maternal-child health and nutrition. She is a co-editor of the volume Human Diet and Nutrition in Biocultural Perspective: Past Meets Present (with Tracy Prowse). Moffat is the past president of the Canadian Association for Biological Anthropology. Follow her on Twitter @TinaMoffat3

A version of this article was posted at Sapiens and is used here with permission. Check out Sapiens on Twitter @SAPIENS_org

Viewpoint: As Africa faces climate change and farming disruptions, biotechnology-driven innovation offers hope. Why is it not more widely adopted?

On June 9th 2022, persistent rainfall swept through Lagos, Nigeria’s most populous and smallest state. The downpour led to loss of lives, destruction of property and deterioration of an already dilapidated infrastructure. 

In the past decade, natural disasters such as flooding and drought, mostly driven by adverse climate change effects, have increased across the continent. For instance, the current drought crisis in East Africa has withered crops and left lands unproductive for millions of farmers who depend on them for sustenance and consumers who rely on them for food. This leaves more than 10 million people highly vulnerable to starvation and exacerbates malnutrition.

Unfortunately, African farmers have been slow to adopt modern agricultural technologies, a key way to address poor crop output. The continent remains the last frontier for techniques that have increased crop yields without increasing per-acre use of agro-chemicals in the US, Canada, Brazil, India, Argentina, Australia and other countries. What’s slowing the embrace?

The tepid reception is linked to crop biotechnology rejectionist campaigns, fronted by local activists but financed and guided by environmental groups in Western countries. Moreover, there has been inadequate investment of resources, both academic and commercial, in research on the crops most widely consumed by Africans. These crops, christened “orphan crops,” hold a lot of promise: they could be consumed by a global audience, and they have naturally evolved resistance to the deleterious effects of local climate change. Some of these orphan crops could substitute for imported conventional crops whose prices have shot up due to the Russia-Ukraine war. Before the war, the two warring countries produced about 28% of globally traded wheat, 29% of barley, 15% of corn and 75% of sunflower oil.


Africa’s dual problems: climate change and a population explosion

Across the world, climate change is reverberating. The average global temperature has risen by more than 0.9 degrees Celsius over the past century. That may seem like a small increase, but it’s much larger in many African countries. The impacts have been catastrophic in some cases.  Human displacement as a result of climate change has exceeded that due to conflicts. 

With climate change come shifts in weather patterns and more frequent extremes. Yesterday’s rare extremes are now commonplace, and tomorrow’s will be more extreme still.

Credit: Economist

A fast-growing population exacerbates the crisis. According to recent projections from the United Nations, the world’s population will climb to eight billion people by the end of 2022 and surpass 10 billion by the end of the century. The most significant growth comes from sub-Saharan Africa and parts of South Asia. And while fertility rates are dropping in many countries (in prosperous South Korea it is below 1 child per woman), in Africa it is 4.2 (2.1 is needed to replace people who die), and in desperately poor Niger it is almost 7.

African mothers will bear about half a billion children this decade. By 2050, a quarter of the world’s population will be African. Such an astronomical increase begets soaring demand for food, clothing, medicines, schools, hospitals and other infrastructure necessities. In the Horn of Africa, home to almost 150 million people, the need for food has skyrocketed over the last decade, mainly due to conflicts, climate-change-induced crises, soaring demand for land, low adoption of modern agricultural practices and a host of other issues. This demand for land has affected protected areas for wildlife and aquaculture. (Britain’s Prince William was widely criticized when he raised concerns about how Africa’s population growth is encroaching on ranches and wildlife parks.)

Feeding Africans

As the population of Africa, especially in urban areas, balloons, so does the demand for nutritious food. Across the world, almost a billion people go to bed hungry, and a quarter of them reside in Africa. The continent faces innumerable nutritional challenges. By 2016, the number of malnourished people in Africa had swelled from 101 million to 181 million. By 2020, the number of hungry people on the continent had climbed from 236 million to 282 million. The reality is that Africans are hungry and need nutritious food.

Regrettably, crop yields in Africa have not increased in step with birth rates. This has led to poor harvests and low farm revenues from the sale of these crops. Conflicts between farmers and cattle herders have also exacerbated the lingering crisis of poor harvests, especially across West and Central Africa. In Nigeria’s prairie region, for instance, deadly clashes have displaced farmers and razed farmlands, leading to a decline in agribusiness investment.

To meet these challenges, governments across the continent must invest significant financial and human resources. Although Africa emits the least greenhouse gases, it is likely to be hit the hardest because of its myriad food and health challenges and its exploding population.

Uchechi Moses is a graduate student of the Department of Biotechnology, Osaka University, Osaka, Japan. Find Uchechi on Twitter @UchechiMoses_

Queen Elizabeth II officially died at 96 of ‘old age’. What does that mean?

Queen Elizabeth’s newly released death certificate contains just two curious words under her cause of death – old age.

We might talk about people dying of old age in everyday speech. But who actually dies of old age, medically speaking, in the 21st century?

Such a vague cause of death not only raises questions about how someone died, it can also be hard on family and loved ones left behind.


The many ways people die

The leading causes of death in England and Wales are dementia and Alzheimer’s disease; heart disease; cerebrovascular diseases (such as stroke); cancer; and COVID. Other notable causes include chronic lower respiratory diseases (such as asthma); influenza; and pneumonia.

In fact, “old age” as a cause of death – alongside the vague description of “frailty” – is often categorised under “symptoms, signs, and ill-defined conditions”.

This latter category is in the top ten causes of death. But it currently trails well below COVID and, on average over a five-year period, below influenza and pneumonia.

An interesting history

Old age, as a recorded cause of death, has a long history. It was a leading cause of death in the 19th century, alongside the vague description of “found dead.”

In the mid-19th century, registering someone’s death moved from clerical to secular, with the Births and Deaths Registration Act 1836 (UK).

The queen’s death certificate lists ‘old age’ as the cause of death. National Records of Scotland/AP/AAP

There was then the landmark publication, the Bertillon Classification of Causes of Death, written by French statistician and demographer Jacques Bertillon.

Canadian philosopher Ian Hacking wrote that dying of anything other than what was on the official list was “illegal, for example, to die of old age”.

We may say this is a bit hyperbolic. Surely, by the end of the 19th century, it was not illegal to die of old age?

What this suggests is that providing a precise cause of death is important because it’s a valuable tool for tracking mortality trends at different levels of the population.

Eventually, “old age” became a last resort phrase to describe an unknown cause of death. Or it became useful where a person may have died from a number of complications, but where it was not practical or ethical to order an autopsy to find the precise underlying cause of death.

There’s no closure

The other reason “old age” has seldom been used as a cause of death in the 20th and 21st centuries is that it doesn’t provide any closure to families of the deceased.

Research shows families want information about how their loved one died, not only because it can be useful for managing their own health concerns, but also because it provides a resolution to their loved one’s death.

An unknown cause of death can exacerbate grief and trauma, particularly if the death was sudden or unexpected. Researchers have long argued families form continuing relationships with their loved one after they die. Ascertaining how they died is one part of how the family members left behind manage their grief and memorialise the deceased.

A good death

We may decide that asking for more information about how the queen died at the age of 96 is just macabre titillation. We may decide the royal family deserves privacy surrounding intimate details of the queen’s death.

However, a specific cause of death of someone who lived a privileged life and who died at an old age, for instance, can tell us much about how to lead a healthy life and plan for a good death.

Marc Trabsky is a Senior Research Fellow at La Trobe University and an Australian Research Council DECRA Fellow on a project titled ‘Socio-Legal Implications of Virtual Autopsies in Coronial Investigations’. 

A version of this article was originally posted at the Conversation and is reposted here with permission. The Conversation can be found on Twitter @ConversationUS

Viewpoint: Farmer protests in Europe challenge misguided restrictions on biotechnology innovation

(The Center Square) – Bill Wirtz

Over the summer, farmers in the Netherlands vehemently protested against the government’s new environmental rules. Over multiple weeks, thousands of farmers burned hay bales and blocked roads and food distribution centers in an effort to draw attention to new EU rules that would paralyze the sector.

Credit: ANP/AFP via Getty Images

The government in The Hague is attempting to follow EU guidelines by slashing nitrogen emissions by 50% by 2030. Nitrous oxide and methane emissions are byproducts of livestock, released, for instance, when manure is deposited. The Netherlands – along with Denmark, Ireland and the Flanders region of Belgium – had exemptions from EU manure caps because of their small land areas, but that exemption is set to end for Dutch farmers. In practice, this means a considerable reduction in farm animals and putting numerous dairy farmers out of business.

Even with the prospect of the government buying them out, livestock farmers still aren’t on board with the EU’s plans. The prospect of a considerable reduction in farm animals would also endanger the country’s beloved traditional dairy products, such as Gouda and Edam cheeses. The farmers’ protests have now led to the resignation of Agriculture Minister Henk Staghouwer – who had been in office for less than a year – yet the government still remains steadfast in its decision to follow EU guidelines.

The European Union unveiled its “Farm to Fork” strategy at the beginning of the COVID-19 pandemic. The plan calls for a significant reduction in synthetic pesticides and fertilizers, as well as an increase in organic farming output. The European Commission, the EU’s executive arm in Brussels, has been steadily unveiling legislative packages to make those plans a reality, but they have run into criticism from farmers and consumers. When the USDA did an impact assessment of the strategy, it found that agricultural prices would soar by between 20 and 53 percent. The EU itself did not present an impact assessment.

Credit: USDA

With mounting criticism and overall food price inflation, the European Council (which represents EU member states) is now delaying the implementation of the cut in pesticides, particularly as countries in Central and Eastern Europe fear it would increase food prices further. “In countries such as Spain, if you impose a 50 per cent cut in the usage, you would have a major cut in output,” one diplomat told the Financial Times.

The farmer protests in the Netherlands are only the tip of the iceberg; by meddling with Europe’s farming system, the EU has opened a Pandora’s box. Environmentalism’s utopian and distorted view of agriculture clashes with the real needs of consumers. In fact, Europe’s solution of increasing organic farming is counterproductive to the goal of reducing carbon dioxide emissions. CO2 emissions would increase by up to a whopping 70 percent if organic farming became the norm, as researchers in the United Kingdom have shown. The reason is simple: organic agriculture needs more resources and more farmland to achieve the same output. This makes organic food not only worse for the environment but also more expensive for consumers.

Credit: L. Smith et al.

For the United States, which has dabbled in similar attempts to make farming more “sustainable,” this is a cautionary tale. Europe is finding out the hard way how its ambitious policies are reducing purchasing power on a continent where citizens already spend much more of their disposable income on food than Americans do. To an even greater extent, Sri Lanka’s abrupt ban on synthetic crop protection has shown how green farming policies can transform a thriving economy into a nation dependent on foreign food aid.

Americans must understand that our food system depends on productivity and safety, both of which crop protection tools make possible. In fact, pesticide use is not what it was in the 1960s: according to the USDA, pesticide persistence has been cut in half in the last 60 years, and chemical pesticide use has been reduced by 40% per acre. In essence, this means we have produced more food with less land and less crop protection. We should trust farmers and experts to improve this even further without blanket bans that hurt consumers.

Bill Wirtz is a policy analyst for the Consumer Choice Center. Follow him on Twitter @wirtzbill

A version of this article was posted at The Center Square and is used here with permission. Check out The Center Square on Twitter @thecentersquare

‘Lessons in Chemistry’: New Apple TV series based on best-selling book has opportunity to skewer sexism while challenging the ‘nerd stereotype’


I loved Lessons in Chemistry, the hit novel by Bonnie Garmus, and I’m thrilled that Apple TV+ picked it up “straight-to-series” more than a year before it was published in March 2022. Executive Producer Brie Larson, of “Room” and “Captain Marvel” fame, stars as chemist-turned-TV-cook Elizabeth Zott.

The book is hilarious, fast-paced, and expertly plotted. But while the feminist message is obvious, the subtext simmers with a disturbing “othering” of scientists. Let’s see what happens with the TV version, which debuts in 2023.


Book synopsis

In 1956, Elizabeth Zott works at the Hastings Research Institute in Commons, California, “EZ” emblazoned on her lab coat. She has a master’s in chemistry, which in science generally means failing to pass qualifying exams — sometimes it’s even called a “terminal master’s,” like a cancer.

When she hunts for spare beakers in the lab of star chemist Calvin Evans, he assumes she’s a secretary. Two weeks later, they bump into each other at an operetta, and Calvin, sick from something he ate, promptly barfs on her after his date bolts.

The two share interests, traumatic upbringings, and a physical attraction that neither at first wants to acknowledge. But they bond (more a covalent sharing than an ionic exchange). She tends to get on her soapbox, lamenting the system that keeps women out of science. 

“’You’re saying,’ he said slowly, ‘that more women actually want to be in science,’” Calvin probes incredulously. 

Scenes from Apple TV series premiering 2023

Tragically, Calvin dies on an early morning run, tripping and then being hit by a patrol car, while Elizabeth is pregnant but doesn’t yet realize it.

A few years later, through a chance encounter at their daughter’s kindergarten with a parent who is a TV producer, Elizabeth ends up hosting a cooking show, “Supper at Six.” She sees creating meals as a platform to teach about science… and life.

The first episode inspires an avalanche of calls from puzzled viewers asking what CH3COOH is. (Vinegar, aka acetic acid.) If spellcheck had been around in the 1950s, Elizabeth could have mentioned food additive sodium citrate — Na3C6H5O7 — and come up with NACHOS. 

“Cooking is not an exact science,” Elizabeth begins another episode. “The tomato I hold in my hand is different from the one you hold in yours. That’s why you must involve yourself with your ingredients. Experiment: taste, touch, smell, look, listen, test, assess.” That’s remarkably similar to the scientific method. She launches into a recitation of how heat and enzymatic activity yield something yummy. 

A chicken pie is “a mixture, which is a combination of two or more pure substances in which each substance retains its individual chemical properties.” The carrots, peas, onions and celery are “mixed yet remain separate entities.”  

Supper at Six is an instant smash hit. 

Elizabeth Zott is gorgeous, athletic, wears trousers (a no-no in the 1950s), and sports a pencil in her upswept hair. She has no sense of humor or subtlety and is, well, unlike most women of the era. 

Early in the story, when a large mutt follows her home from a deli and Calvin asks, “Who’s your friend?” Elizabeth thinks he’s wondering why she’s late and answers “It’s six thirty.” And that becomes the canine’s name. 

The dog is smart, although he flunked out of bomb-sniffing school at Fort Pendleton. Elizabeth teaches him to understand hundreds of spoken words, and he becomes a major character, a little like a journalistic fourth wall. Later, when an unemployed Elizabeth converts her kitchen into a lab, where she performs a complex extraction process to make coffee, she and Six-Thirty wear goggles. 

Elizabeth links her culinary chemistry to life lessons, which is what draws her massive audience of bored housewives. Consider this telling description:

“The potato’s skin is teeming with glycoalkaloids, toxins so indestructible, they can easily survive both cooking and frying. And yet I still use the skin, not only because it’s fiber rich, but because it serves as a daily reminder that in potatoes as in life, danger is everywhere.”  

Equally scintillating is Elizabeth’s description, to Six-Thirty, of albumin coagulation as she cooks egg whites. 

Vague yet jargony  

I had read only a few paragraphs of Lessons in Chemistry when I began to cringe, as I did when finally watching Big Bang Theory after well-meaning non-scientist friends repeatedly suggested that my husband Larry, a chemist, and I, a geneticist, would enjoy the nerd stereotype. We didn’t. 

The novel quickly unfurls a curious dichotomy of too-general terms, such as publishing in “Science Journal,” against a torrent of unnecessarily multisyllabic words. 

Elizabeth describes herself as a chemist, working in Chemistry departments. But once we learn her research interests, it’s clear she would have more likely been in a department of Biological Chemistry or Biochemistry, or if a little later, Molecular Biology.

The narrative relies on the stereotypical objects of science, the things stuffed into toy chemistry sets, festooning Elizabeth’s digs with Bunsen burners, flasks, and rubber tubing. “Scientific proof,” “believe in science,” and “breakthrough” repeat, language that non-scientists often use that annoys scientists.

A glaring missed opportunity to educate

The only flaw of the book, in my admittedly minority opinion, is glossing over Elizabeth Zott’s professional interest, which is exactly the area that propelled me and many others to pursue careers in science. The author thanks two friends, an “amazing chemist” and a “brilliant biologist,” for their technical advice. Had they really not heard of the Miller-Urey experiment? Or had Garmus or her editors dismissed it? 

Elizabeth’s research passion is deciphering the role of RNA in the origin of life. “Abiogenesis,” referring to the first cells arising from chemicals, is repeated ad nauseam, yet fleshed out only fleetingly, in a mere sentence.

Garmus missed a great opportunity to build her story around one of the most exciting experiments of all time. 

In 1953, 23-year-old biochemistry graduate student Stanley Miller, at the University of Chicago, mixed simple chemicals that likely had been the types in the early Earth’s atmosphere. He brewed a “primordial soup” of methane, ammonia, hydrogen and water in a glass contraption and lit a spark — a recipe for life, perhaps. 

Credit: Ars Technica

Within a day, the soup ingredients reacted, forming new molecules that included amino acids, the building blocks of proteins. Variations on the theme, from Miller and others, generated RNA. That starting molecule then could have copied itself into a forerunner of DNA with a simple base swap and doubling into a helix. Since both RNA and DNA participate in directing cells to make proteins, it’s tantalizing to envision the molecules connecting and interacting in the sun-baked muck of ancient clays, sculpting the first cells. 

Stanley Miller wasn’t the first to think of how to recreate life’s beginnings. Soviet biochemist Alexander Oparin came up with abiogenesis in 1924.

But Miller’s results were easy to replicate in a college cell biology lab in 1973, when I did it. The thrill at detecting the amino acids, smears on a chromatography strip, cemented my future in genetics, the field that connects chemistry to biology.  

I was fortunate to interview Stanley Miller in 2000. He told me that his adviser, Harold Urey, took his own name off the paper they submitted to Science, “A Production of Amino Acids Under Possible Primitive Earth Conditions,” so that the young man would receive the credit. 

Credit: Chemistry World

Miller had been astonished when his experiment made headlines proclaiming that he’d created life. Media coverage used the term abiogenesis. “People made jokes. They suggested that I’d grown a rat or a mouse in there!” he told me. Miller died in 2007. 

Another missed opportunity: Elizabeth works on origin-of-life scenarios with nary a mention of Watson and Crick’s famous 1953 paper describing the double-helix structure of DNA. Oops!

Me, too: Relating to three major events in Elizabeth Zott’s life 

To women scientist readers, I suspect much of Lessons in Chemistry rings disturbingly true. Three major events in Elizabeth’s life also happened to me, in a sense. That’s part of why the book made me uneasy.

1. Sexual assault

Elizabeth Zott exits with a master’s and not a PhD from UCLA because her mentor raped her, which her department deemed an “unfortunate event” and blamed her. In college, in the genetics lab, my mentor put his hands on me, but I kicked and ran. In the pre-Weinstein 1970s, these things happened but weren’t mentioned.

2. Stealing her discoveries

At Hastings, Elizabeth works with two incompetent men on abiogenesis. A benefactor wants to fund the research, so Elizabeth’s boss accepts the funds for “Mr. Zott,” knowing that no one would support a woman. But then he has to fire Elizabeth because she’s unwed and pregnant. “I’m not contagious. I do not have cholera. No one will catch having a baby from me,” she implores, but is canned anyway. She tutors her befuddled former co-worker in her home, but when she briefly leaves him alone in her office to tend to her newborn, he steals her files of results and gives them to their boss, Dr. Donatti.

A few months later, the benefactor wants faster results, and so Elizabeth is hired back, but demoted to lab tech. Then Donatti publishes her findings in Science Journal:

She read the article twice just to make sure. The first time, slowly. But the second time she dashed through it until her blood pressure skipped through her veins like an unsecured fire hose. This article was a direct theft from her files.

The same thing happened to me.

One day, a botany textbook arrived in the mail. I opened to the first chapter, and saw 42 pages of my own writing. Every. Single. Word. Except where my intro biology textbook read “biology,” the new book had “botany.” Had our publisher somehow mixed the books up? Not likely. We authors read and approve every step in the production process.

The botanist who stole my work was prominent, popular, and personable, one of those types whose Wikipedia page lists his many awards (I’ve never won an award). For years after he co-opted my 42 pages, I’d stand at the back of the lecture hall at biology conferences, watching him preen.

Two years later, he did it again, stealing an essay I’d written on Barbara McClintock, famed corn geneticist and Nobelist. Only a woman could have written it. So, I can relate to Elizabeth Zott’s pit-in-the-stomach rage.

3. A precocious daughter

Heather, as a preschooler, would sit behind the stage of the lecture hall where I taught intro biology. In middle school, when the teacher said the word “prophase,” Heather chanted “metaphase, anaphase, telophase,” her mantra the stages of cell division.  When the astonished teacher asked how Heather knew that, she said, “Doesn’t everyone?”

Elizabeth Zott’s influence was more direct. She read to Madeline from On The Origin of Species. In kindergarten, the child asked the school librarian for books by Norman Mailer and Vladimir Nabokov.

My favorite part of Lessons in Chemistry is when Madeline mulls over her teacher’s failure to grasp the fact that humans are animals, something that I’ve pointed out to journalists writing about “humans and animals” for years. Even Six-Thirty knows this.

All humans shared a common ancestor. How could Mumford (the teacher) not know this? He was a dog and even he knew this.

When Madeline insists to her teacher that humans are animals, she’s punished. But when the child writes on a poster that “Inside, humans are genetically ninety-nine percent the same,” her mother admonishes her, for we are ninety-nine-POINT-NINE the same. “In science, accuracy matters,” Elizabeth tells her daughter.

Coda

After a few twists and turns, the ending to Lessons in Chemistry unspools after Elizabeth speaks with a journalist who finally gets her. “That’s why I wanted to use Supper at Six to teach chemistry. Because when women understand chemistry, they begin to understand how things work,” she tells him.

I won’t spoil the glorious ending. But in the last two chapters, Elizabeth’s work is finally recognized, and Calvin’s tragic upbringing clarified. 

Elizabeth softens a bit by the end, but the contradiction remains: women equal men, but scientists do not equal non-scientists. We stand a breed apart, with our constant observing, questioning, testing, and hypothesizing anew, in our own, often multisyllabic, language. The othering of scientists persists.

I hope the TV version of Lessons in Chemistry delves more into the mind of the female scientist, analyzing what drives her innate curiosity. 

Hey Apple TV, need a consultant?

Ricki Lewis has a PhD in genetics and is a science writer and author of several human genetics books. She is an adjunct professor for the Alden March Bioethics Institute at Albany Medical College. Follow her at her website or Twitter @rickilewis

‘Sustainable fondue’: Cattle and dairy farming produces billions of tons of CO2 each year. Here’s how climate-friendly lab-grown cheese can change that

Hi, I’m Will, and I have a cheese addiction. My partner regularly catches me swiping slices when I am cooking, and, like many of you, the thought of living without pizza, parmigiana or fondue is horrifying to me.

But, like many of you, I am deeply worried about climate change. So when I found out how bad cattle and dairy farming is for the planet my heart broke.

The cattle and dairy industry produces more than double the carbon emissions of shipping or aviation.

Dairy’s climate hoofprint

Before we dive into Formo, a company making lab-grown cheese, we need to appreciate the environmental cost of cattle and dairy farming.

Let’s first look at their greenhouse emissions.

Credit: BBC

In total, the global cattle and dairy industry produced 1.7 billion tonnes of carbon dioxide in 2015 — more than double the carbon footprint of shipping or aviation. To make matters worse, this figure doesn’t include the methane they produce, which is a vastly more potent greenhouse gas than CO2.

But the bad news doesn’t end there. Cattle farming is one of the leading causes of deforestation in the Amazon, and the crops used to make their feed are also grown in environmentally dubious ways.

While they’ve clearly gotten a lot better in recent years, plant-based alternatives just don’t cut it for me and most consumers who are used to the real deal. So do we have to give up our delightful brie and indulgent mac and cheese to save our beautiful planet?

Maybe not!


Biotech to the rescue

Companies like Good Meat can grow everything from succulent chicken breasts to sizzling steaks in the lab. They take sample cells from a living animal and grow them in the lab. This results in cultures of fat and muscle cells. Then they assemble these new cells into the shape of the meat needed, and voila, you have real meat without the animal!

Ethical and climate-conscious meat? Credit: Good Meat

This incredible technology can produce far fewer carbon emissions and use much less land than traditional livestock farming, while having virtually the same nutrients as normal meat. This is why it is considered by many to be the future of meat.

Now a company called Formo is trying to do the same for cheese.

Formo’s modified microbes. Credit: Formo

They use genetically modified microorganisms that produce milk protein rather than alcohol. They then “brew” these microbes in a bath of nutrients and plant-based feed, and when fermentation is finished, they strain out the milk proteins.

By combining these proteins with water, plant-based fats and carbs, and a pinch of salt, they have created a completely lab-grown milk. This can be processed just like cow’s milk to make all sorts of cheeses!

Formo says its process also uses orders of magnitude less land and produces far fewer carbon emissions than dairy farming. What’s more, their cheese tastes like normal cheese and has near identical nutritional value! So soon we will be able to stuff delicious, cheesy lasagna into our faces, guilt-free.

Formo Cheese in a toastie. Credit: Formo

Like cultured meats, Formo is still in its very early days. This means their facilities aren’t at full industrial scale yet, and the products they make are expensive.

However, if technology like this can scale and become competitive on cost, it could not only allow cheese lovers to become far more environmentally friendly, but potentially reduce deforestation and food supply chain issues.

Temperatures do seem to be trending upward. If this continues, it will impact the dairy industry along with the rest of us. Credit: NOAA National Centers for Environmental Information

So it is good to know my crippling cheese addiction may soon be far more sustainable. But Formo’s incredible technology also shows just how ingenious we humans are and how a wave of new breakthrough inventions may be about to revolutionize our world and secure our planet’s future.

William Lockett is a freelance writer based in the UK, covering stories involving technology, electric vehicles, renewable energy, AI, and more. Follow William on his Medium page here

A version of this article was posted at Freethink and is used here with permission. Check out Freethink on Twitter @freethinkmedia

Teenage brains are a cauldron of change: Here’s what happens on the inside and how it affects our looks and behavior

A lot happens when you reach puberty. Your voice may change and you will experience hair growth on parts of your body. Your period will start. Your boobs, penis and scrotum will grow larger.

But it’s not just your body that’s physically changing. The brain also undergoes major changes.

Understanding what is actually going on can be helpful when you are right in the middle of it.

Especially because this important organ can both create trouble and give you opportunities that adults do not have.

Like paths in a forest

Since the moment you were born, your brain has been developing, Marte Roa Syvertsen tells sciencenorway.no.

She is a doctor and neuroscientist at Drammen Hospital. Now she has written the book “The Teenage Brain” (Ungdomshjernen).

When you learn things, new connections between brain cells form.

Everything you know and everything you have experienced become small networks of brain cells.

Like paths in a forest.

Cleanup in the brain

But when you reach puberty, something strange happens. The brain actually begins to get rid of some of what you learned as a child.

“The weak paths disappear,” Syvertsen says.

For example, if you learned a language when you were small child that you never really used, it may disappear.

This cleanup has an advantage. Instead of being a little good at many random things, you can become really good at what you need most.

Faster learning

At the same time, the adolescent brain is made for efficiently learning new things.

Because something else also happens when you reach puberty. The connections between brain cells are made faster than in children and adults.

“Simply put, a teenager’s brain does not need as many repetitions for the connection to be strong and robust,” Syvertsen says.

Just think of how much faster a teenager learns to use a new phone than an adult, she points out.

The ‘boss’ of the brain is not fully developed

Everything is in place for you to become really good at new things. But the adolescent brain can also work against you.

The front part of the organ, the frontal lobe, is the last part to become fully developed.

“The frontal lobe is the boss of the brain,” Syvertsen explains.

It is this part of the brain that makes decisions, suppresses emotions, and initiates actions.

Parts of the brain. Credit: Thing Link

Too little or too much homework

The frontal lobe can, for example, decide that you should do your homework even if you do not want to.

Or the opposite.

If you work too much on schoolwork, it may also be because the frontal lobe does not help you say stop.

Maybe it would have been better for you to prioritise friends or other activities.

Ignore the impulses

This little ‘brain boss’ is unique to us humans.

Animals are mostly controlled by instincts, such as eating and relaxing or mating to have children.

Humans, on the other hand, may decide to ignore these impulses.

We use reasoning instead.


Up to one-meter-long threads

Only when you are around 20 years old can the frontal lobe be considered fully developed.

But from the moment you are born, the frontal lobe constantly makes new connections with the rest of the brain.

One brain cell in the frontal lobe may be bound to thousands of brain cells elsewhere in the brain.

And the long threads can be up to a meter long, Syvertsen explains.

Emotions in high gear

But we have not yet talked about the biggest difference between the adolescent brain and other people’s brains.

Namely the emotional side of it.

The hormones that cause your body to change also affect the brain.

The part of the brain that controls emotions simply goes into high gear because of them.

“They’re going full throttle with little braking,” says Syvertsen.

More afraid, angrier, happier, and sadder than adults

Even though your frontal lobe is better developed now than it was when you were a child, it has been given a more difficult task.

A young person who experiences something frightening does not have the same opportunity to use common sense to calm their reaction.

It means you become more afraid, angrier, happier, and sadder than adults.

Of course, you can still use reasoning to defy your feelings, but it can be good to know that it is only natural if this is difficult.

Intoxication of risk

Many teenagers also take more risks than children and adults.

The explanation for this is also found in the brain.

Specifically in the reward system, which is located more towards the centre of the brain.

This system secretes dopamine when you do something exciting or make a new accomplishment. It is a kind of natural intoxication.

Desire for kicks

And as a teenager, you get this reward more easily than adults do.

Therefore, the desire for kicks can be stronger than reasoning.

“This does not mean that young people do not think about consequences, but the ‘here and now’ counts most,” Syvertsen says. 

“Be aware of what strengths you have”

Syvertsen’s book about the teenage brain is aimed mostly at parents. She has nevertheless included some advice for teenagers who find these changes difficult.

“It will pass” is the first piece of advice in the book.

“It is a phase, and it will pass on its own,” Syvertsen tells sciencenorway.no. “It applies to you, but also those around you.”

Finally, she wants to encourage teenagers to seize the opportunity that their brain gives them.

“Be aware of what strengths you have and the doors that are open. You have a shorter way to go to get good at things.”

Eldrid Borgan is a journalist from Forskning, Norway. 

A version of this article appeared originally at Science Norway and is posted here with permission. Check out Science Norway on Twitter @Sciencenorwayno

Podcast: BMI useless? Lab-grown meat a ‘pipe dream;’ Did early humans eat each other?

Using body mass index (BMI) to assess a patient’s health may yield misleading results and undermine public trust in medicine, researchers claim. Lab-grown meat has been heralded as a solution to agriculture’s environmental footprint, but the technology may not live up to the hype. One anthropologist says he’s uncovered evidence suggesting that early humans ate each other. Did our ancestors really engage in cannibalism?

Join geneticist Kevin Folta and GLP contributor Cameron English on episode 195 of Science Facts and Fallacies as they break down these latest news stories:

Doctors and public health officials have used body mass index (BMI) to assess obesity for decades. In recent years, though, BMI has come under fire as an unreliable and unhelpful tool. Some researchers now argue that this single metric doesn’t tell physicians very much about their patients’ health. Urging overweight individuals to slim down based on BMI may “harm” them emotionally and discourage them from seeking medical care when they need it. Is it time to ditch BMI and replace it with more holistic measurements of health?
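
For readers who want the metric spelled out, here is a minimal sketch (not from the podcast) of how BMI is computed, using the standard formula and the usual WHO adult cut-offs. Notice that nothing but weight and height goes in, which is exactly the critics’ point: the number says nothing about body composition, fat distribution, or metabolic health.

```python
# Minimal sketch of BMI: weight in kilograms divided by height in meters squared,
# then bucketed with the standard WHO adult cut-offs.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: kg / m^2."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal weight"
    if value < 30:
        return "overweight"
    return "obese"

if __name__ == "__main__":
    for kg, m in [(70, 1.75), (95, 1.75)]:
        b = bmi(kg, m)
        print(f"{kg} kg, {m} m -> BMI {b:.1f} ({bmi_category(b)})")
```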

Is meat more sustainable if it’s grown in a lab instead of raised on a farm? While there isn’t enough evidence to conclusively answer that question, two other issues could limit the technology’s impact on our dietary habits: cost and consumer acceptance. Producing meat in a laboratory is incredibly expensive at present, and many consumers have said they won’t give up their traditionally produced steaks and burgers even if those foods could be mass produced without animals. Can the nascent “alternative protein” industry overcome these hurdles?


Although we consider cannibalism unthinkable today, recent anthropological research has uncovered “evidence of butchery, gnawing, filleting, and cooking on human bones at sites around the world,” science writer Ross Pomeroy recently reported for BigThink. This data invites an uncomfortable question: did early humans eat each other?

Kevin M. Folta is a professor, keynote speaker and podcast host. Follow Professor Folta on Twitter @kevinfolta

Cameron J. English is the director of bio-sciences at the American Council on Science and Health. Visit his website and follow ACSH on Twitter @ACSHorg

Nature, nurture and old age: How much is the human lifespan driven by our genes?

The research used our old friend, the UK Biobank, a repository of genetic information on a large number of Brits, as well as a similar genetic registry in Finland, FinnGen. First caveat: the findings do not necessarily apply to racial and ethnic groups other than those studied. Because the researchers were interested in the loss of lifespan associated with habits and genes, they used the evil twin of quality-adjusted life years (QALYs): disability-adjusted life years (DALYs), the cumulative years of healthy life lost to disability and premature death.

Second caveat: because we are dealing with lots and lots of genetic profiles, any number of estimates and assumptions go into these calculations – consider their meaning qualitative rather than quantitative. Among the assumptions:

  • Most importantly, these genetic associations are causal,
  • DALYs “are a valid and meaningful measure of disease burden.”
  • DALYs lost from a specific disease are the same for genetic and lifestyle causes.

You may apply grains of salt (which, by the way, is associated with an increase in DALYs) to taste.

Final caveat: the researchers linked the population’s genetic data with the incidence of common diseases as measured by the Global Burden of Disease. Therefore, conditions that are more common and begin earlier in life will result in a more significant increase in DALYs – a greater loss of quality lifespan.
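
For readers new to DALYs, here is a minimal sketch of the standard Global Burden of Disease bookkeeping the study leans on: DALYs are the years of life lost to premature death (YLL) plus the years lived with disability (YLD). This is illustrative only, not code from the paper, and every number in it is hypothetical.

```python
# DALY arithmetic sketch: DALYs = YLL + YLD.
# The condition, counts, and disability weight below are invented for illustration.

def yll(deaths: int, life_expectancy_at_death: float) -> float:
    """Years of life lost: deaths times remaining life expectancy at age of death."""
    return deaths * life_expectancy_at_death

def yld(prevalent_cases: int, disability_weight: float) -> float:
    """Years lived with disability: cases times a 0-1 severity (disability) weight."""
    return prevalent_cases * disability_weight

def dalys(deaths, life_expectancy_at_death, prevalent_cases, disability_weight):
    return yll(deaths, life_expectancy_at_death) + yld(prevalent_cases, disability_weight)

# Hypothetical condition: 100 deaths at an age with 12 remaining expected years,
# plus 5,000 people living with it at a disability weight of 0.2.
print(dalys(100, 12, 5_000, 0.2))  # 1,200 YLL + 1,000 YLD = 2,200 DALYs
```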

Common variants, genes with the highest association and probability of being causal with a disease, had a relatively small effect – a loss of 3 months. This includes the genes associated with “six traditional risk factor traits” (body mass index (BMI), glycated hemoglobin (HbA1c), high-density lipoprotein (HDL) cholesterol, low-density lipoprotein (LDL) cholesterol, systolic blood pressure and cigarettes per day).

Rare deleterious variants are genes we know are associated with disease, like BRCA1 for ovarian and breast cancer [1]. Here the DALYs were much greater, amounting to anywhere from two to ten or more years of life lost.

Polygenic scores measure many genetic variants associated with population-based traits or diseases. It is a catch-all grouping that you can read more about here or here. Here the researchers reported the difference between those in the bottom 10% of the cohort and the top 90% of the cohort.
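As a rough illustration of what such a score is, the usual construction is a weighted sum of an individual's risk-allele counts, with the weights taken from genome-wide association studies; individuals can then be ranked and the tails of the distribution compared, as the researchers did with the bottom 10% of the cohort. A toy sketch with simulated genotypes and effect sizes (all values invented):

```python
import numpy as np

rng = np.random.default_rng(0)

n_people, n_variants = 10_000, 500
# Genotypes: 0, 1, or 2 copies of each risk allele (simulated).
genotypes = rng.integers(0, 3, size=(n_people, n_variants))
# Per-variant effect sizes, as would come from a GWAS (simulated).
effect_sizes = rng.normal(0, 0.05, size=n_variants)

# Polygenic score = weighted sum of risk-allele counts.
scores = genotypes @ effect_sizes

# Compare the bottom decile with everyone else, echoing the study's contrast.
cutoff = np.quantile(scores, 0.10)
bottom_decile = scores <= cutoff
print(scores[bottom_decile].mean(), scores[~bottom_decile].mean())
```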


To add to the complex meaning of these numbers, genetic variants associated with a shorter lifespan were, ready for it, the greatest source of increased DALYs – a circular definition, to be sure. Within that larger category, as depicted in the bottom left, ischemic heart disease was the greatest contributor, resulting in a loss of 15 months of quality life. The contribution of other disorders like stroke and type 2 diabetes is one to two months. Interestingly, variants associated with multisite chronic pain were the second greatest source of disabled years, led by low back pain – this is not surprising since the incidence of low back pain globally is 7.5% (for context, ischemic heart disease affects 1.7% globally).

Sex-specific effects: polygenic scores and common variants differed between the sexes. The researchers found differences in the loss of quality life between the sexes, but this seemed to be due to a difference in disease incidence rather than disease course – more women developed Alzheimer’s disease earlier than men, resulting in the differences in DALYs; it wasn’t a more “severe” or different disease in women than in men.

Follow the latest news and policy debates on sustainable agriculture, biomedicine, and other ‘disruptive’ innovations. Subscribe to our newsletter.

Enough with the data – Is it nature or nurture?

by translating information on genetic risk into expected healthy life years lost, genetic risk factors can be put in the larger context of traditional risk factors….

Individuals with rare deleterious variants have the greatest risk of disabled lifespans. Medicine already knows this, and we take active measures to screen and identify these individuals.

For most of us with common variants, modifiable risk factors are a more significant source of our disability than our genes. As I have highlighted, an elevated LDL, being overweight, and hypertension all increase your risk individually more than any of the genetic variations.


This should be good news, because all of these can be modified through lifestyle and medications.

Notes:

[1] Others included “LDLR (ischemic heart disease), BRCA2 (breast, ovarian, liver and prostate cancer, and COPD), MYBPC3 (cardiomyopathy and myocarditis), BRCA1 (breast and ovarian cancer) and MLH1 (colon and rectum cancer).”

Source: Genetic risk factors have a substantial impact on healthy life years. Nature Medicine. DOI: 10.1038/s41591-022-01957-2

Dr. Charles Dinerstein, M.D., MBA, FACS is the Medical Director at the American Council on Science and Health. He has over 25 years of experience as a vascular surgeon. He completed his MBA with distinction in the George Washington University Healthcare MBA program and has served as a consultant to hospitals. 

A version of this article was originally posted at the American Council on Science and Health and has been reposted here with permission. The American Council on Science and Health can be found on Twitter @ACSHorg

Are Americans too complacent about a winter surge of COVID infections — and deaths?

To the old saying about the inevitability of death and taxes, we should add another: recurring health crises linked to COVID-19. As of the end of October, the CDC’s official tally of U.S. COVID infections was just under 100 million, but with many positive home test results unreported, the real number is estimated to be several times greater.

Infections, while unfortunate and sometimes deadly, do provide immunity to survivors, but only for a limited time. Natural post-infection and vaccine-induced immunity wane over a period of months, so that a large fraction of the U.S. population remains potentially susceptible to infection.

Follow the latest news and policy debates on sustainable agriculture, biomedicine, and other ‘disruptive’ innovations. Subscribe to our newsletter.

Reason for concern?

A swarm of new Omicron subvariants is now emerging, and some may be very dangerous. Yet fewer than 10% of eligible Americans have taken advantage of the new bivalent COVID vaccine boosters, which enhance resistance to all the known Omicron subvariants as well as Omicron’s predecessors, and fewer and fewer people are opting to get one. This makes a winter surge likely, and it could get scary.

Most people who have had COVID have recovered within a few days or, at most, weeks, but some, even those who had mild infections, can have persistent, debilitating symptoms that last months or even years.

The most common long COVID symptoms are fatigue, fever, cough and difficulty breathing or shortness of breath. Less common problems are “brain fog”; headache; stroke; sleep problems; loss of smell and taste; depression or anxiety; joint or muscle pain; cardiovascular symptoms such as chest pain and palpitations; digestive symptoms; new-onset diabetes; and blood clots in various organs.

Credit: CDC

Long COVID occurs in around five percent of those who have been infected. Note that long COVID is the persistence of symptoms after acute infection, while the CDC figure in the above graphic refers to the appearance of new conditions sometime after the acute infection.

Long COVID causes extended pain and suffering in many millions of Americans and severe symptoms in perhaps a million who are unable to work. Not surprisingly, there are elevated numbers of “excess deaths” above what would be predicted from historical data.

Vaccines remain the first line of defense. “The best way not to have long COVID is not to have COVID at all,” says Leora Horwitz, MD, a professor of population health and medicine at New York University’s Grossman School of Medicine. And vaccination lowers the likelihood and severity of infection, as well as of hospitalization and death in those who do become infected.

However, only about two-thirds of Americans have had the primary series of two vaccinations, and the effectiveness of those shots has waned significantly. Only about nine percent have had the updated bivalent booster that offers the strongest protection against known variants.

How much protection do the latest vaccine boosters confer?

A modeling study by the Commonwealth Fund illustrates vividly the importance of greater uptake of the bivalent vaccine booster. It found that if about 54% of Americans were to take it by the end of this year, then by March 31, 2023, some 75,000 deaths would be prevented, about three-quarters of a million hospitalizations averted, and some $44 billion in direct medical costs saved.

The graphic representation of the study’s results is striking. The two graphs show the projected seven-day rolling average of COVID-19 hospitalizations (left graph) and deaths (right graph) in the U.S. under three booster vaccination coverage scenarios: the current rate of bivalent booster uptake (upper curve), a rate equal to the frequency of flu vaccination last year (approximately 54%, middle curve), and 80% coverage of the bivalent booster (lowest curve).

Baseline: vaccination continues at current daily rate
Scenario 1: flu-like vaccination coverage of booster eligible
Scenario 2: 80% vaccination coverage of booster eligible

[Note: In the baseline scenario, vaccination rates are held constant at the average of the daily vaccination rate for August 2022 until the end of March 2023]

The data point to an indisputable conclusion: A full court press to convince skeptical Americans to get the latest COVID booster, and a solid response, could save thousands of lives and billions of dollars.

As shown by the lowest of the curves in both graphs, with 80% uptake of the bivalent booster vaccines (Scenario 2), the U.S. could virtually eliminate hospitalizations (left graph) and deaths (right graph) from COVID by the end of March 2023. Even a relatively modest increase in uptake (the middle curves) would exert a significant effect, but this goal will become more elusive when the federal government stops paying for the shots, which is expected to happen soon. The out-of-pocket cost to uninsured individuals would then be more than $100.

Another benefit of broader vaccination would be that people who get the COVID-19 vaccine but still develop COVID are at significantly lower risk of symptoms of long COVID or the onset of new illnesses, when compared to those who are not vaccinated.

Over the course of the pandemic, the death rate in the U.S. has been the worst among developed, industrialized countries. We have safe and effective tools to turn that around, but they are no good if we don’t use them.

Henry I. Miller, a physician and molecular biologist, was the founding director of the Food and Drug Administration’s Office of Biotechnology. Please follow him on Twitter at @henryimiller

John J. Cohrssen is an attorney who has served in several government posts in both the executive and legislative branches of government.

‘U-shaped happiness curve’: Do people really get more content with life as they age?

On average, happiness declines as we approach middle age, bottoming out in our 40s but then picking back up as we head into retirement, according to a number of studies. This so-called U-shaped curve of happiness is reassuring but, unfortunately, probably not true.

My analysis of data from the European Social Survey shows that, for many people, happiness actually decreases during old age as people face age-related difficulties, such as declining health and family bereavement. The U-shaped pattern was not evident for almost half of the 30 countries I investigated.

So why the difference?

My study corrects a misinterpretation of research methods in previous studies. The U-shaped idea comes from statistical analyses that adjust data to compare people of similar wealth and health in middle and old age. That adjustment is intended to isolate the effect of age from other factors that influence happiness.

Credit: David Bartram

But given that people often become poorer and less healthy during old age, the adjustment can be misleading. When we omit the adjustment, an age-related decline in happiness becomes evident in many countries.
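The methodological point is easy to demonstrate with a toy simulation (all numbers here are invented purely for illustration): if health worsens with age and happiness depends on health rather than on age directly, a regression that “controls for” health reports a roughly flat age trend, while the unadjusted trend, which is what people actually experience, declines.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

age = rng.uniform(30, 90, n)
# Invented data-generating process: health worsens with age,
# and happiness depends on health (not on age directly).
health = 10 - 0.08 * age + rng.normal(0, 1, n)
happiness = 5 + 0.5 * health + rng.normal(0, 1, n)

def ols_slope_on_age(y, covariates):
    """Least-squares coefficient on age, with optional extra covariates."""
    X = np.column_stack([np.ones(n), age] + covariates)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

print("unadjusted age slope:", ols_slope_on_age(happiness, []))       # ~ -0.04
print("adjusted for health:", ols_slope_on_age(happiness, [health]))  # ~  0.00
```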

This decline is steeper in countries with a less effective welfare state. That’s especially true of Turkey, where happiness (measured on a scale from zero to ten) falls on average from 6.4 at retirement age to less than 5.0 among the very old.

For Estonia, Slovakia and the Czech Republic, happiness falls steadily, beginning in people’s early 30s.

For the Netherlands, in contrast, happiness increases from the age of 30 and then holds steady even in old age. In Finland, happiness remains pretty constant across the life course, at above eight on the zero-to-ten scale.

In short, there’s no universal pattern of happiness. Instead, there’s a wide range of patterns across different countries. It shouldn’t be terribly surprising that different social conditions contribute to different outcomes.

Follow the latest news and policy debates on sustainable agriculture, biomedicine, and other ‘disruptive’ innovations. Subscribe to our newsletter.

Nice story

The U-shaped idea is appealing in part because it’s counter-intuitive: sure, life gets harder in old age, but even so, people get happier. Why? People are said to gain wisdom and acceptance with age. We develop an ability to appreciate what we have, rather than ruminating over what we lack. Age blunts the sharp edge of ambition and the frustrations that often follow from it.

The popular wisdom of psychology tells us that “happiness comes from inside”. So perhaps people finally sort out their “insides” in old age, with happiness as the reward.

It’s a nice story. But for many societies, that apparent outcome is an artefact of a statistical adjustment that isn’t appropriate for this topic. Happiness might increase with age as long as people don’t become ill, experience bereavement, or start to lose their friends. That’s what the statistical adjustment gives us: a result that assumes nothing goes wrong in old age.

But many people do face big challenges as they get older, and it’s not surprising if they then don’t feel terrifically happy.

I’m not suggesting that people don’t sometimes sort out their insides over time. That piece of psychology’s popular wisdom is worth embracing, since it concerns what is potentially within our control. But my analysis suggests there might be limits to our ability to compensate this way for the challenges ageing often brings.

Whether happiness rises or falls depends on the balance of these competing forces (big challenges v mental accommodation), and a positive outcome isn’t guaranteed.

To get clarity on the patterns, we need an analysis that reflects what actually happens as people grow old. When we do the analysis this way, the U-shape disappears for many countries – mainly because many people are not, in fact, getting happier as they get older.

David Bartram is an Associate Professor and Director of Research, School of Media, Communication, and Sociology, University of Leicester. Follow David on Twitter @dvbartram

A version of this article was posted at The Conversation and is used here with permission. Check out The Conversation on Twitter @ConversationUK 

‘The Day I Die’: One man’s struggle with Lou Gehrig’s disease and physician-assisted suicide

Covering the walls in Joe’s home office in Vancouver, Washington, were dozens of glossy medals and framed pictures of running meets—a shrine to the hours and days he had spent pushing himself to the crest of physical triumph. [1] Each of them carried vivid memories of carbo-loading dinners, auspicious bib numbers, and sports drinks in all colors of the rainbow. Now they served as blunt reminders of the life he once had, prediagnosis, when the muscles in his body still stretched, lunged, and churned without special effort, or even thought.

Credit: Anita Hannig

One of the photos shows Joe, probably in his late forties, in dark-blue shorts and a bright-yellow T-shirt. His short hair hemmed in by a white headband, he wears aviator sunglasses and a look of concentration on his face as he sprints the last couple of miles toward the finish line of the Portland marathon. The course flattens out here as runners weave their way through the downtown area to a shrill chorus of bystanders. Right behind Joe, a younger man with wiry legs picks up the chase. But Joe stares straight ahead, and his stride looks strong. It’s clear that he won’t go down without a fight.

By the time I met Joe on a Sunday morning in early 2018, the days of his physical feats were long gone. As I walked up to greet him, he lay immobile in the middle of his living room in a dark leather recliner. A brown duvet was pulled up to his neck, warming a body that wouldn’t obey him anymore. Solemn and alert, Joe’s denim-blue eyes peered out at me from behind his round, metal-rimmed glasses. Behind him, the rain drummed on a sliding glass door that opened into a drenched backyard. The rain wasn’t unusual for this time of year, late February in the Pacific Northwest, but it added a brittle layer of gloom to the space.

 

When I had called his partner, Anna, to arrange the visit, she immediately launched into a description of all the roadblocks she and Joe had been facing to qualify him for Washington’s assisted dying law. She was talking fast, like a person frustrated by an unmet desire to be heard. I knew from my research with other families that navigating the bureaucracy of assisted dying laws could be a full-time job. But I was surprised to learn that someone with a clear-cut terminal diagnosis was having such difficulty.

In September 2016, at age 72, Joe was diagnosed with ALS—amyotrophic lateral sclerosis, also known as Lou Gehrig’s disease—a progressive neurological disorder that destroys neurons responsible for muscle movement. In the United States, about 6,000 people are diagnosed with ALS every year, and the disease’s etiology still remains largely a mystery. After the initial onset of symptoms, the average life expectancy ranges from two to five years as the body’s muscle groups shut down one by one.

Joe was polite and soft-spoken. His lopsided smile, encased by a closely trimmed snow-white beard, surfaced only occasionally, when something truly amused him. Half an hour into our conversation, as I asked Joe and Anna to reconstruct the events of the past year and a half, Joe broke into a metallic cough. He motioned his chin toward the glass of water that sat between him and Anna on a side table.

“His mouth has felt exceptionally dry today,” Anna offered by way of an explanation. She picked up the glass and held it for him, waiting for his tongue to catch the straw.

“I think I may have to have a manual cough assist,” Joe whispered, his voice grainy and barely audible. Anna rose to her feet.

“With ALS, you lose your core strength, your diaphragm, which you need for coughing,” Anna said as she peeled the duvet from Joe’s body, baring his pale arms and legs, which had lost nearly all muscle definition. “So we have a machine to clear the mucus in his airway, but Joe has discovered that this little thing we do is actually more effective.”

Anna removed his head and neck rests and pressed a button on his recliner. Joe’s body slid slowly down and out of the chair. When his furry moccasin slippers hit the ground, she hoisted him up—he could still stand, but his upper body was so limp that he almost toppled over—and scooped him up from behind. Though Anna was much smaller than Joe, she didn’t seem to be straining. Joe was thin as a wafer.

Joe hung in Anna’s arms like a rag doll—his head drooping low on his chest, shoulders perched forward, and arms sagging by his sides like two power lines cut down by a storm. He used to be a tall man, but the disease had compressed his spinal cord.

Anna tightened her embrace and folded her hands flat on top of each other underneath his rib cage. “Ready?” she asked.

Joe murmured his response, looking straight down. On the count of three, Anna dug her hands into his abdomen in a quick upward motion while Joe tried to clear his throat. A feeble sound emerged. She pressed in again. Again.

Follow the latest news and policy debates on sustainable agriculture, biomedicine, and other ‘disruptive’ innovations. Subscribe to our newsletter.

“It takes timing, but, you know, we’re dancers. We’ve got timing,” she said, shooting me an apologetic glance. I told her that I didn’t mind, wishing she could forget about me for a second, a witness to their tragic dance.

The fourth time, Joe emitted a forceful, guttural cough. “That was a good one,” Anna said, carefully releasing her embrace and helping Joe sink back into his recliner. He was still looking down, his neck muscles no longer strong enough to keep his head upright.

“You can see this is just a laugh a minute, isn’t it?” Anna said, trying to defuse the lingering intimacy between us. She tucked her straight, silver hair behind her ears and stole a quick glance at Joe. He cracked a smile, fleeting but just in time for her to catch it.

Anna settled Joe back into the chair, propping his head up with two pillows and his neck brace. She knew exactly how to arrange everything. You could tell she had done it a hundred times. The whole ordeal had made Joe thirsty, and he had her lift the glass of water to his lips again so he could bite down on the straw. Joe closed his eyes. For a second, I wondered if he was going to drift off to sleep, but then he opened them again and looked at me, ready for the next question.

In the midsummer of 2016, Joe noticed weakness in his left hand during his weekly pickleball game. Pickleball combines elements of badminton, tennis, and table tennis. Players wield wooden or composite paddles on a small court. Joe was right-handed, but he couldn’t get a good grip anymore when he used his left hand to pick up the ball. Maybe a pinched nerve, he speculated. But when the sensation didn’t go away weeks later, he went to see a neurologist.

Anna was standing in the kitchen when Joe returned from his appointment. She had no reason to feel alarmed. She thought he was going to tell her that they had scheduled an MRI or X-ray for him. But he had other news for her.

“Everybody has to die some time,” Joe said.

“What?” Anna felt the ground turn unsteady beneath her feet.

“The doctor is pretty sure, in fact, he’s certain, that I have ALS.” Joe understood that ALS was fatal, but he didn’t have any idea of how the disease would progress. His neurologist hadn’t gone into much detail with him that day.

Anna clutched the counter and burst into tears. She knew exactly what an ALS diagnosis meant. One of her elementary school friends, a dentist, had died of ALS in his late thirties. She knew that ALS led to paralysis and went from one part of the body to another while the brain remained completely active—“a cruel joke,” she called it.

Tom Samuels, one of the pulmonology specialists Joe started seeing months later, had long studied the unique challenges an ALS diagnosis brings. When I met Samuels at a coffee shop in Portland, he explained that ALS impacts three different groups of muscles.

“ALS affects the extremities, so arms and legs,” he said. “In about two-thirds of patients, that’s the first thing they notice. They may get a little foot drop or weakness in one hand that progresses. About a third of all patients, the first thing that goes is their speech and the ability to swallow. It’s called bulbar onset. Then, about one percent of patients, the first thing they notice is shortness of breath, because the breathing muscles become affected.”

“But everyone eventually gets everything,” he added. “If they live long enough.”

I asked Samuels what patients with ALS eventually die from.

“Ninety-nine percent die from respiratory failure,” he said. “The breathing muscles get so weak that they can’t get carbon dioxide out. So the carbon dioxide builds up. The good news about that is that carbon dioxide acts like morphine on the brain—it basically puts the brain to sleep. It’s usually not a sudden episode; they usually have a buildup, lapse into a coma, and then they die. But occasionally people will choke to death. It’s a miserable way to die.”

Samuels’s explanation made me see Anna’s Heimlich maneuver in a new light. Had Joe been terrified of choking to death the morning I met him? If most patients with ALS could expect a death from respiratory failure, as Samuels suggested they did, then running out the clock on an ALS diagnosis didn’t seem like an especially comforting option.

In a society that trades in military metaphors when talking about illness—when cancer patients become “warriors” and “survivors” and cells foreign “invaders”—the idea of foregoing even the sliver of a chance for more time can feel like premature surrender. After all, the next miracle cure could be just around the corner, the next experimental drug one clinical trial away.

“Abuse of the military metaphor may be inevitable in a capitalist society,” wrote philosopher and writer Susan Sontag. “War-making is one of the few activities that people are not supposed to view ‘realistically,’ that is, with an eye to expense and practical outcome. In all-out war, expenditure is all-out, unprudent—war being defined as an emergency in which no sacrifice is excessive.”

Turning your back on the powerful cultural machinery of life extension can feel like trying to swim upstream. That’s why some assisted dying patients struggle with being perceived as “quitters” when they push back against the ubiquitous logic that more time is always better. The decision to stop “fighting” in order to reclaim whatever life someone is able to have here and now almost amounts to an act of conscientious objection.

For his part, Joe had no intention of running the full gamut of what ALS held in store for him: the air hunger that would only get worse and the prospect of becoming “locked in”—when he would no longer be able to move or speak or swallow but would remain mentally alert. Joe’s initial frustration over not having any viable treatment options soon gave way to a decision. He wouldn’t let this brutal disease get the best of him, only waiting for the other shoe to drop—not if he could help it. He would be the one to decide when and how he was going to die.

He would hasten the end of his life.

Anita Hannig is an anthropologist who studies illness, death, and dying from a cultural perspective. She is an associate professor at Brandeis University, where she teaches classes on medicine, religion, and the end of life. Follow Anita on Twitter @AnitaHannig

A version of this article originally appeared at Sapiens and is posted here with permission. Find SAPIENS on Twitter @SAPIENS

Podcast: NYT attacks another scientist; How we got ‘GMO’ insulin; Why is gene therapy so costly?

The New York Times last week alleged that a high-profile scientist is in cahoots with the meat industry. Is there any truth to the allegation? Genetically engineered insulin was approved 40 years ago. What have we learned about drug regulation since then? Four gene therapies have received FDA approval so far. Will we see more of these novel medicines on the market in the coming years?

Join geneticist Kevin Folta and GLP contributor Cameron English on episode 194 of Science Facts and Fallacies as they break down these latest news stories:

Repeating recent history, the New York Times last week attacked a widely respected scientist for allegedly taking money from corporations to defend their products. “He’s an Outspoken Defender of Meat. Industry Funds His Research, Files Show,” the paper wrote about UC Davis air-quality specialist Dr. Frank Mitloehner. The problem? Nearly everything in the story was completely false.

Insulin produced from genetically engineered microbes hit the market roughly four decades ago. It greatly expanded access to the hormone and earned regulatory approval in a record-breaking five months, instead of the usual 30 months. According to the FDA scientist who oversaw the approval of this novel drug, known commercially as “Humulin,” the story illustrates the power of biotechnology to save lives—and how unnecessary regulation can stifle innovation.

Follow the latest news and policy debates on sustainable agriculture, biomedicine, and other ‘disruptive’ innovations. Subscribe to our newsletter.

Gene therapies for debilitating diseases are poised to change the way we think about medicine forever—at least that’s what we’ve been told for many years. Despite all the hype, just four gene therapies have received regulatory approval in the US. Why have these drugs advanced at a snail’s pace?

Cost is one barrier, says geneticist Ricki Lewis. There are also questions about the efficacy of gene therapies; for instance, are their effects long-lived, and how many patients will respond to said treatments? Can scientists and regulators overcome these challenges? There is hope that the expanding toolbox of genetic technologies will accelerate the development and approval of additional gene therapies.

Kevin M. Folta is a professor, keynote speaker and podcast host. Follow Professor Folta on Twitter @kevinfolta

Cameron J. English is the director of bio-sciences at the American Council on Science and Health. Visit his website and follow ACSH on Twitter @ACSHorg

Is human intelligence an evolutionary dead end?

The German philosopher Friedrich Wilhelm Nietzsche was, by all accounts, a miserable human being. He famously sought meaning through suffering, which he experienced in ample amounts throughout his life. Nietzsche struggled with depression, suicidal ideation, and hallucinations, and when he was 44 — around the height of his philosophical output — he suffered a nervous breakdown. He was committed to a mental hospital and never recovered.

Although Nietzsche himself hated fascism and anti-Semitism, his right-wing sister reframed her brother’s philosophy after his death in 1900 as a rationale for subjugation of people that the fascists saw as weak, contributing to the moral bedrock of the Nazi Party and justification for the Holocaust.


Would Nietzsche have been happier — and would the world overall have been a better place — had the philosopher been born some other species other than human? On its face, it sounds like an absurd question. But in “If Nietzsche Were a Narwhal: What Animal Intelligence Reveals About Human Stupidity,” scientist Justin Gregg convincingly argues that the answer is yes — and not only for Nietzsche, but for all of us. “Human cognition and animal cognition are not all that different, but where human cognition is more complex, it does not always produce a better outcome,” Gregg writes. Animals are doing just fine without it, and, as the book jacket says, “miraculously, their success arrives without the added baggage of destroying themselves and the planet in the process.”

Gregg — who holds a doctorate from Trinity College Dublin’s School of Psychology, teaches at St. Francis Xavier University, and has conducted research on dolphin social cognition — acknowledges that human history is marked by incredible breakthroughs that hinge on our intelligence. Yet, nonhuman animals do not need human-level intelligence to survive and be evolutionarily successful, as Gregg points out, which is why this trait isn’t more prevalent across species.

He builds his often hilarious, sometimes unsettling, case against human superiority across seven chapters. Each one deals with a unique aspect of our psyches — from our capacity to conceive of our own mortality to our ability to communicate about “a limitless array of subject matter” — and provides ample evidence showing that not only are these mental attributes unnecessary for survival, they’re oftentimes more a liability than a gift.

Our species stands out first and foremost, Gregg begins, for our tendency to ask “Why?” “Of all the things that fall under the glittery umbrella of human intelligence, our understanding of cause and effect is the source from which everything else springs,” Gregg writes. “Why” questions arguably spurred innovations such as agriculture (“What causes seeds to germinate?”), fields of study such as astronomy (“Why is that star always in the same place each spring?”) and the advent of religion and philosophy (“Why am I here? And why do I have to die?”).

Follow the latest news and policy debates on sustainable agriculture, biomedicine, and other ‘disruptive’ innovations. Subscribe to our newsletter.

Asking “why,” however, is not necessary for success on either an individual or evolutionary scale, Gregg writes. Other species presumably flourish without it, and many have arrived at similar life hacks as humans, but without seeking a deep understanding of causation. Chimpanzees, birds, and elephants know how to self-medicate with plants, clay, and bark, for example. They do not need to know why these remedies work, Gregg writes, only that they do.

Credit: Brett Ryder

Nonhuman animals do not need human-level intelligence to survive and be evolutionarily successful, as Gregg points out, which is why this trait isn’t more prevalent across species.

For all the good asking “why?” has done us, Gregg argues, it has also had negative repercussions, by justifying biases such as racism (“Why do humans from different parts of the world look different?”) and, as he writes, by leading us to create technologies that can threaten to destroy us — internal combustion engines, for example. The solution to climate change and other existential threats that we’ve created for ourselves, Gregg points out, will come from the same why-based system of thought that brought about these problems in the first place. However, “it is an open question whether a solution will arrive in time, or if our why specialist nature has doomed us all,” he writes.

Our obsession with morals — “not just that we should behave a certain way, but why we should,” as Gregg puts it — has also generated untold amounts of suffering. Norms that dictate how one should and should not behave within one’s social world exist throughout the animal kingdom. Chickens have pecking orders, for example, but they do not ruminate on whether that system is fair or just, Gregg points out. On the other hand, some species do have sensitivity to social inequity. In one famous experiment, scientists offered Lance and Winter, two capuchin monkeys in side-by-side cages, different rewards for completing the same task. Winter received a grape (a preferred treat), and Lance received a slice of cucumber. As Gregg recounts it, when Lance saw Winter repeatedly receive a grape for the task, she violently threw the cucumber back at the researchers and banged on her cage. “This is evidence that Lance felt as if it was unfair that she was given the lesser food reward for the same task,” Gregg writes. “Lance was responding to the violation of a fairness norm.”

But humans take such social norms to an extreme, Gregg argues. We attempt to enforce universal standards for “right” and “wrong” and spin up elaborate justifications, monitoring systems, and punishments to ensure that others follow those made-up rules. A major problem with morals, however, is that they are subjective and can easily lead to justification for one demographic or cultural group’s oppression of another. From 1883 to 1996, Gregg writes, 150,000 First Nations children in Canada were forcibly removed from their homes and sent to residential schools, where they were subjected to abuse, trauma, cultural genocide and even death. The prime minister under which the atrocities began viewed forced assimilation as a “moral imperative, the best solution for bringing Indigenous children in line with modern Western values,” Gregg writes.

Moral grounds have been used to condone the persecution of minority groups; rationalize genocide; and to advocate for eradication of entire cities, including by dropping nuclear bombs: “The history of our species is the story of the moral justification of violent acts resulting in the pain, suffering, and deaths for billions of our fellow humans who fall into the category of ‘other.’” This stands in contrast to every other species, Gregg continues, which lack “the cognitive capacity to systematically kill entire subgroups of their same-species populations resulting from a formal claim to moral authority.”

Humans, therefore, might be succeeding not because of, but in spite of, our moral aptitude, he writes, having taken the social norms that govern and constrain social behavior in most species to self-destructive lengths.

But the most damning chapter of the book — and my favorite — concerns a special brand of cognitive dissonance Gregg calls prognostic myopia, or “the human capacity to think about and alter the future coupled with an inability to actually care all that much about what happens in the future.” This happens on an individual level all the time. Prognostic myopia is at play, for example, when you decide to stay up late drinking with friends, knowing you have to be up early the next morning. The consequences of that decision only fully hit the next morning, when the alarm goes off and the hangover begins. This is because our brains, like those of other animals, are wired to deal primarily with the here and now.

The real problem, however, is that unlike other species, “our decisions can generate technologies that will have harmful impacts on the world for generations to come,” Gregg writes. Spread across billions of people and coupled with modern technologies, our tendency to live in the moment today is condemning future generations and the world at large to increasingly dire prospects with each passing year. According to the Global Challenges Foundation, as of 2016, there was a 9.5 percent chance that humans would go extinct within the next 100 years. And even if we do survive, we’re looking at a potential warming of 2.7 degrees Celsius by 2100, on a scale that “will render most of the planet uninhabitable,” Gregg maintains. Yet we seem to lack the political will to stop this from happening, he adds, because “the further into the future we go, the less we care.”

Credit: The Economist

As Gregg points out, “It is the greatest of paradoxes that we should have an exceptional mind that seems hell-bent on destroying itself.”

Yet evolutionarily speaking, our slide toward extinction isn’t outside the norm. Countless species have come and gone since life began on the planet some 3.7 billion years ago. As Gregg writes, “Our many intellectual accomplishments are currently on track to produce our own extinction, which is exactly how evolution gets rid of adaptations that suck.”

Rachel Nuwer is a science journalist whose writing has appeared in The New York Times, National Geographic, Scientific American, BBC Future, and elsewhere. She is the author of “Poached: Inside the Dark World of Wildlife Trafficking.” 

A version of this article was posted at Undark and is used here with permission. Check out Undark on Twitter @undarkmag

Viewpoint: 10 claims by anti-GMO African campaigners on why crop biotechnology advances should be rejected – and why they are wrong

Over 20 years, Africa’s foray into genetic modification (GM) crop development has faced stiff resistance from anti-GMO lobby groups that have doggedly campaigned against adoption of the technology. 

From making wild claims about alleged disease-causing properties of GMOs to equating genetically modified crops to neo-colonialism, advocacy groups have persistently labeled GMOs as an unwanted and unnecessary imposition on African farming systems. Many of the arguments fronted by anti-GMO campaigners, however, fall short on scientific and rational grounds. Here are 10 such claims and how they bend the truth:

Follow the latest news and policy debates on sustainable agriculture, biomedicine, and other ‘disruptive’ innovations. Subscribe to our newsletter.

1. GMOs are not safe for human consumption

The charge that GMOs pose a risk to human health has been prominently used by anti-GMO brigades in many parts of the globe, and Africa has not been spared.

One of the earliest cases in which lobbyists spread this canard across the continent was in the early 2000s in southern Africa. At that time, that region faced widespread famine that saw some 14 million people suffer extreme food shortages.

In efforts to mitigate the crisis, various governments and organizations pledged humanitarian assistance in the form of food aid. But some non-governmental advocacy groups were vocal in their opposition, and some governments sided with and even some countries were resistant on grounds that some of the food donations contained GMOs. Yet their rejectionist stance was adopted by many politicians and even some countries

Such hard-line stances put millions at risk of starvation. In a statement issued in August 2002, the UN moved to quell concerns with an assurance that the food consignments containing GMOs were safe for human consumption:

Based on national information from a variety of sources and current scientific knowledge, FAO, WHO and WFP hold the view that the consumption of foods containing GMOs now being provided as food aid in southern Africa is not likely to present human health risk. Therefore, these foods may be eaten. The Organizations confirm that to date they are not aware of scientifically documented cases in which the consumption of these foods has had negative human health effects.

The biotechnology rejectionist movement sprang to life again in 2012 with the publication of a study by French scientist Prof. Gilles-Eric Séralini that linked cancer in rats to the consumption of GM foods. It influenced the Kenyan government to ban the importation and production of GM products in the country. Various scientific voices and regulatory bodies roundly discredited the study, leading to its retraction.


But Kenya remained locked in for another decade in its opposition to GM technology. Finally, in October 2022, the Kenyan government lifted its 2012 ban on GMO imports and cultivation. The decision was informed by various expert and technical reports on the adoption of biotechnology, including reports from Kenya’s National Biosafety Authority (NBA).

Globally, Kenya is now aligned with the Food and Agriculture Organization (FAO), the United States’ Food and Drug Administration (FDA), the European Food Safety Authority (EFSA) and the World Health Organization (WHO) in vouching for the safety of GMO foods, noting that they undergo rigorous safety assessments:

GM foods currently available on the international market have passed safety assessments and are not likely to present risks for human health. In addition, no effects on human health have been shown as a result of the consumption of such foods by the general population in the countries where they have been approved.

2. GMOs come with increased pesticide use

Claiming adverse health complications has been a common complaint. In a recent incident, a Kenyan senator criticized the government’s move to lift the decade-long ban on GMO cultivation and imports, saying that GMOs need more herbicides than conventional crops.

Do GM crops (necessarily) increase pesticide use? This is a polarizing question with no easy answer. “The problem with these sweeping statements is that ‘pesticides’ is a broad category that includes herbicides (pesticides used to destroy weeds), insecticides (pesticides used to repel insects), and more,” said Eva Greenthal, a Senior Science Policy Associate at the Washington, DC-based Center for Science in the Public Interest (CSPI). There are other nuances as well:

[D]ifferent GE crops are engineered with different traits and are designed to interact with specific herbicides or insecticides in different ways. Crops engineered with herbicide tolerance allow farmers to spray those specific herbicides to kill the weeds around a plant, but enable the plant to survive. Crops engineered with insect resistance produce their own biological pesticides which are toxic to insects but (ideally) not to humans. The details matter. And so, a case-by-case consideration of each product is necessary.

In fact, planting genetically modified seeds has in some cases been shown to reduce pesticide poisoning among farmers, as many GM crops, especially those that are insect resistant, require far fewer pesticide applications than conventional or organic farmers need to use, according to a report released in September 2021 by the UK government’s Regulatory Horizons Council.

The report cites India as a case in point. A 50-to-70 percent reduction in pesticide applications on insect-resistant GM (Bt) cotton has been recorded, leading to significant health benefits. “It has been estimated that this GM crop helps to avoid several million cases of pesticide poisoning per year (in India),” the report asserts.

Closer to home, the adoption of Bt cotton significantly reduced pesticide application in Burkina Faso. Farmers went from spraying their conventional cotton fields 15 times per season to control bollworm to spraying only twice.

3. African countries lack the capacity to regulate GMOs

For some reason, anti-GMO activists find it fashionable to downplay the capacity of local regulatory authorities to effectively oversee technological innovations and put in place the necessary safeguards to ensure the safety of GMOs. They argue that national regulatory bodies are ill-equipped and lack the infrastructure to execute their mandates.

In Nigeria, activist Nnimmo Bassey claimed a “non-effective regulatory system” among numerous grounds for rejection of GMOs by various civil groups. Scientists vehemently disagree. Nigeria’s Director-general of the National Biosafety Management Agency (NBMA), Dr. Rufus Ebegba, says without hesitation that his country has the capacity to deploy safe biotechnology products for agricultural development and environmental safety. 

Ebegba explained that Nigeria has both the institutional capacity and policy framework to ensure the application of modern technology, especially on agricultural production, adding that with the establishment of the NBDA, Nigeria is equipped with the requisite knowledge to deploy GMOs.

Across the continent in East Africa, BIBA Kenya, an anti-biotechnology lobby group, has filed a petition which includes an item alleging that the country’s National Biosafety Authority lacks the capacity to regulate GMOs.

The African Union Development Agency (AUDA-NEPAD) further assures that all biosafety regulatory bodies in African countries involved in biotechnology have scientific advisory committees comprised of highly qualified scientists trained in relevant areas of biotechnology and biosafety in renowned universities in Africa and overseas. 

“These committees review all biotech applications in their countries and make recommendations for the regulatory bodies to make informed decisions,” states AUDA-NEPAD. It also notes that numerous African countries have laboratories and equipment for safe production of GM products such as Bt seeds. 

4. Bt cotton failed in Burkina Faso

Burkina Faso’s experience with Bt cotton has provided spectacular fodder for building a case against GMOs in the continent.

The country introduced Bt cotton farming in 2008 but abandoned the crop in 2015 due to concerns over the quality and fiber length of the lint, which fetched lower prices. The technical problems were real; developers had chosen to modify a variety not suited to widespread production. 


But the technology itself proved viable: volume soared, farmers’ incomes increased and pesticide pollution was dramatically cut, facts that even anti-GMO activists concede.

The effects of the move to phase out Bt cotton were particularly felt by farmers, who had enjoyed the higher production margins and revenues realized under the GMO cotton cultivation.

“All farmers who have experience with Bt cotton are regretting the shift from Bt to conventional cotton… but they are helpless and hope that the government will listen to their plea,” said Francois Traore, president of the Union of Cotton Producers in Burkina Faso.

Experts note that there are other Bt cotton varieties that have excellent fiber quality that equals or exceeds conventional varieties, and the failure in Burkina Faso arose because the Bt seeds were not backcrossed enough before commercial release. 

“The Bt trait was not incorporated into the very best lines,” notes AUDA-NEPAD. “The National Seed Company and Monsanto [now Bayer] are aware of the issue and are working to fix this.”

Kenya recognized that Burkina Faso’s failed attempt resulted from choices made in that country, and has adopted its own version of commercial Bt cotton with resounding success.

5. GMOs are a form of neo-colonialism and imperial seed control

It is almost certain that in every engagement with anti-GMO activists in Africa, the charge that GMOs infringe on national and regional food sovereignty arises. GMOs, they assert, would impose a stranglehold on local food production systems and deprive local farmers of their long-held and cherished control of seeds. They would become hostage to ‘devious forces’ in the West, stripped of local control and forced to kowtow to demands dictated by corporate producers of GMO seeds. 

“Peasant farmers who have been banking and sharing seeds for centuries will now be forced to buy seeds every season, monocrop, and use expensive and potentially carcinogenic herbicides that come with the GMO package,” claims activist group Review of African Political Economy (ROAPE).

Credit: Flickr/Global Food Justice

That’s just not true. Key GM crops, such as Bt insect resistant varieties, actually cut down on the amount of pesticides used by both conventional and organic farmers. Moreover, countries freely choose to adopt or not adopt GMOs. And even in those African countries that have allowed GM production, farmers are never compelled to grow GM crops. It’s a free choice, and those who prefer to grow conventional varieties have the leeway to make the decision. 

6. GMOs have failed to improve food security in South Africa

GMO rejectionist groups such as Alliance for Food Sovereignty in Africa (AFSA) have juxtaposed South Africa’s food security situation against the country’s status as Africa’s pioneering nation in cultivating GMOs for food.

They cite an assortment of challenges facing the country — from incidences of stunting to obesity — and deploy these crises to disparage GMOs, saying they have failed to provide much-needed food security.

The charge disparages GMOs but misses the target by failing to demonstrate how GMOs are at all responsible for massive, society-wide problems. While the country may be grappling with food insecurity, this cannot be linked with the tiny volume of GMO crops grown in South Africa, nor does it negate the fact that GMOs have consistently produced more than conventional varieties.

The facts speak for themselves, and numbers don’t lie. Let’s take the case of GM maize production. A study titled “Economic and ecosystem impacts of GM maize in South Africa” published in the journal Science Direct documented food security benefits amounting to 4.6 million additional white maize rations annually attributable to GM Bt white maize. South Africa approved Bt maize in 2001–2002, making it the first GM subsistence crop producer in the world. Between 2001 and 2018, GM white maize adoption contributed 83.5 million additional rations of maize. It may be convenient to cite the country’s levels of stunting, but facts are stubborn.

Courtesy: BBC, Getty Images

7. GMOs are a way of dumping failed technology in Africa

It’s a rather confounding line, but it has been zealously proclaimed during several anti-GMO campaigns nevertheless. Activists assert that Africa is being used as an unwitting dumping ground for sub-standard technologies that have been rejected in other places, and, by implication, in the developed world.

Curiously, the anti-GMO crusade hardly if ever tenders any evidence to back these claims of substandard crops, or provides evidence of the technology failing in another country. In one instance, protesters charged that Bt cowpea was “a known failed technology”.

Yet this “failed technology” has been wildly successful in conferring an insect resistance trait that protects cowpea varieties from damage by the devastating Maruca vitrata infestation.

There is extensive documentation that farmers have been able to contain the insect pest Maruca pod borer, which attacks cowpea plants and many legume crops in Africa. The pest damages leaves and flowers and reduces the quantity and quality of grain, leading to severe yield loss. Pesticides to control Maruca are expensive and are not always available, but the Bt technology has proven to work.

8. GMOs do not yield higher than conventional crops

As crazy as it sounds, this unsupported claim is occasionally thrown into the debate by GMO opponents despite overwhelming evidence to the contrary. The implication is that despite the efforts put into developing GM crops, their output more or less equals that of conventional crops, and thus the costs outweigh the overall benefits.

That’s just not true. Examples abound throughout the continent showing that GMOs have in fact increased yields and improved livelihoods across value chains, from the increased cotton yields thanks to Bt cotton in Kenya to the increased rations realized through GM white maize cultivation in South Africa, among other examples. A meta-study released in 2017 analyzing more than 6,000 peer-reviewed studies covering 21 years of data found that GMO corn increased yields up to 25 percent and dramatically decreased dangerous food contaminants. The study, published in Scientific Reports, analyzed field data from 1996, when the first GMO corn was planted, through 2016 in the United States, Europe, South America, Asia, Africa and Australia.

Worldwide distribution of the field studies included in the meta-analysis. Area of GMO corn cultivation by country in 2016 is indicated in the map.

The study also reaffirmed the scientific consensus that genetically modified corn does not pose risks to human health. And as adverse climate conditions increase, GMOs have been put forward as one of the ways to protect a sustainable food supply while conventional methods record lower and more erratic yields. Kenya’s ministerial cabinet noted it approved GMO crop cultivation because such crops are more adaptable to climate change.

9. GMOs will harm Africa’s biodiversity

GMOs have been persistently linked to the risk of biodiversity loss in Africa. Anti-GMO crusaders have intimated that GMOs will surely lead to extinction of the region’s biodiversity, as Kenya’s former vice president Kalonzo Musyoka alleged in a recent statement decrying the lifting of the ban on GMO imports.

Studies have in fact associated biotech crops, which include transgenics, with greater biodiversity sustainability. One such study revealed that globally, GM crops helped conserve biodiversity in the period 1996-2014 by saving 152 million hectares of land.

As noted in a policy brief by AUDA-NEPAD, extensive scientific research has led to systematic protocols to measure the potential risks posed by GM crops to the environment. The AU further assures that measures are put in place to ensure that the use, handling and transfer of GMOs does not pose any threat to biodiversity.

10. GMOs are a danger to the environment

This claim takes many variants, but its most preferred version is the contention that GMOs will lead to the propagation of invasive species. For example, Claire Nasike, a Food For Life Campaigner at Greenpeace Africa in Kenya, has cautioned that GMOs could interfere with the country’s ecological balance, saying that GM crops “are likely to contaminate non-GM crops” through pollination, which could lead to the loss of various indigenous varieties.

It is helpful, however, to put the claimed contamination in context. An AUDA-NEPAD policy brief states that cultivation of GM crops that do not have wild relatives in Africa does not pose environmental threats associated with gene flow to wild species, concluding:

… the wild relatives of maize are not found in Africa; therefore, pollen flow from GM maize to wild relatives is not an issue in Africa.

It is also instructive that biosafety experts have continuously affirmed that Kenya has an adequate biosafety framework that ensures that GMOs are safe for the environment.

Globally, studies have shown the positive impact of GM crop cultivation on the environment, citing variables such as the decreased environmental impact associated with herbicide and insecticide use on these crops, as well as reduced carbon emissions.

Overall, claims alleging environmental damage by GMOs in Africa are largely speculative, as there has been no documented serious environmental harm attributable to transgenics recorded on the continent.

According to the International Service for the Acquisition of Agri-biotech Applications (ISAAA), GM crops are thoroughly evaluated for environmental effects before entering the marketplace, and the assessments are conducted by multiple stakeholders, including the developers of GM crops, regulatory bodies, and academic scientists.

The risk assessment addresses specific questions about unintentional effects, such as the impact on non-target organisms in the environment; whether the modified crop might persist in the environment longer than usual or invade new habitats; and the likelihood and consequences of a gene being transferred unintentionally from the modified crop to other species.

Dr. Joseph Maina is a Senior Lecturer in the Department of Earth and Environmental Sciences at Macquarie University. Joseph’s ultimate goals are to understand and predict the impacts of environmental variability and change on social and ecological systems at local and global scales to support spatial planning & management.

‘Dead first’: Why American men are more likely than Canadians, Australians and Brits to die prematurely

Whether it’s stubbornness, an aversion to appearing weak or vulnerable, or other reasons, men go to the doctor far less than women do. While behavioral and cultural norms may have a lot to do with the care-seeking habits of American men, the fact remains that the United States is the only high-income country that does not ensure all its residents have access to affordable health care. Roughly 16 million U.S. men are without health insurance, and affordability is the reason that people most often cite for why they do not enroll in a health plan. Do income level and financial stress help explain why men do not get needed care and experience worse health outcomes?

Using data from the Commonwealth Fund’s 2020 International Health Policy Survey and the Organisation for Economic Co-operation and Development (OECD), we compare health care accessibility, affordability, and health status for adult men in 11 high-income countries. We also examine measures of income and income-related stress, where the data allow, to understand the role income insecurity might play in American men’s relatively low use of health care.


Highlights

  • Looking across the 11 high-income countries in our study, rates of avoidable deaths, chronic conditions, and mental health needs for U.S. men are among the highest in our analysis.
  • Men in the U.S. have the lowest rate of prostate cancer–related deaths.
  • Men in Canada, the United States, and Sweden are the least likely to have a regular doctor and have among the highest rates of emergency department use for conditions that could have been treated in a doctor’s office.
  • Men in the U.S. and Switzerland skip needed care because of costs and incur medical bills at the highest rates.
  • In the U.S., men with lower income or frequent financial stress are less likely to get preventive care, more likely to have problems affording their care, and more likely to have physical and mental health conditions.

Findings


Men in the U.S. were significantly less likely to rate the health care system “good” or “very good” than men in other countries. Most men in Switzerland, Norway, New Zealand, and Germany gave high ratings to the national health care system.

The Commonwealth Fund survey asked adults in the 11 countries about their income and how often in the past 12 months they had felt worry or stress about having a stable job or source of income. Men in the U.S. who reported above-average income viewed the health care system more favorably than those with average or below-average incomes, though these ratings were still lower than those in peer nations. While these findings show that income level is a factor in how likely someone is to view the health system favorably, other factors, such as the affordability of health care, may also contribute to people’s views. The health system ratings of men in the U.S. who rarely experienced income-related stress were not statistically different from those of men who frequently experienced such stress.

Health status and outcomes


High rates of avoidable deaths, or deaths before age 75 from conditions that can be prevented or treated, often indicate shortcomings in public health and care delivery systems. The United States had the highest rate of deaths from avoidable causes among men, with over 100 more such deaths than the nation with the second-highest rate, the United Kingdom.

Swiss men are less than half as likely as American men to die from an avoidable cause.


People are more likely to suffer from serious health conditions when they do not get regular screenings and other preventive services, or when they skip or delay doctor’s appointments for management and treatment of chronic conditions like cancer, diabetes, and hypertension, or mental health issues like depression and anxiety. Men in the U.S. were the most likely to report having two or more chronic conditions, with those in Australia close behind.

Lifestyle and income play a key role in health outcomes. Research has found that adults with lower income have significantly higher rates of smoking and alcohol use — behaviors that directly contribute to higher rates of obesity, diabetes, and heart disease, among other conditions. In the U.S., lower income was associated with an increased likelihood of having multiple chronic conditions as well as a four times greater likelihood of being in fair or poor health.

Countries where men have lower rates of chronic conditions, such as Norway, also have lower rates of alcohol use and smoking.


Mental health care needs were highest among men in Australia and the U.S. These needs related to a physician’s diagnosis of depression, anxiety, or other mental health conditions; stress, anxiety, or great sadness since the COVID-19 pandemic began; or a desire to talk to a mental health professional within the past 12 months.

Again, there were significant differences linked to income and stress level. In the U.S., men with income insecurity were more likely to have a mental health care need.


More than a third of American men reported having hypertension (high blood pressure) — a treatable and, in most cases, preventable condition. That is the highest rate among the countries surveyed. In the U.S., men with average or below-average income were more likely to say they have hypertension than those with higher income.

Surprisingly, experience of financial stress was not significantly associated with rates of hypertension.


While American men are more likely to lack easy access to regular primary and preventive care, cancer care is one area where U.S. health outcomes compare favorably with those of other high-income countries. In 2019, men in the U.S. had the lowest rate of prostate cancer–related deaths among the 11 countries in our study.

The relatively lower U.S. prostate cancer death rate likely reflects the quality of cancer care in the U.S., which features extensive screening as well as a variety of advanced treatments and technologies. U.S. cancer death rates have dropped significantly over the past three decades, largely because of better screening for the disease.

Access to care


A physician’s office, clinic, or other regular place of care is an important point of health system contact for people — a key to getting timely and appropriate preventive services and treatment. Irregular contact with doctors means forgoing care for potentially life-threatening conditions.

While the majority of men in each country surveyed reported having a doctor or place of care, men in Sweden, the U.S., and Canada were the least likely to report having one. All men in Norway, where enrollment in health insurance is automatic and the health system incentivizes all residents to have a general practitioner, said they have a regular doctor or place of care. Men in the U.S. who reported frequent income-related stress were less likely to have a regular doctor than those who did not report this.


Overuse and improper use of emergency departments have been a concern in the U.S. for many years. These facilities often function at or over capacity, and the care is expensive relative to many other health care settings. Among men who either did not have a regular doctor or did not use their regular doctor for routine care, those in Canada reported the highest use of emergency departments, followed by men in the U.K. and the U.S. American men reporting income insecurity were more likely to visit emergency departments for non-emergency care.

Affordability of care


Men in the U.S. were significantly more likely to report spending $2,000 or more out of pocket on health care compared to men in the 10 other high-income countries, with the exception of Switzerland.

Higher-income men in the U.S. were more likely to have high out-of-pocket costs than lower-income men, which may be a consequence of using more health care services.


The Commonwealth Fund survey asked men about times when the cost of treatment prevented them from getting health care in the past year, including when they had a medical problem but did not visit a doctor; skipped a needed test, treatment, or follow-up visit; did not fill a prescription for medicine; or skipped medication doses. High costs associated with medical care may be a factor in why men skip needed treatment.

The findings show that men in the U.S., particularly those with income insecurity, are significantly more likely to skip or delay needed care because of costs than men in the other countries studied.


Men in the 11 countries were asked whether they had experienced at least one medical bill problem in the past year: serious difficulty paying for care or being unable to pay a medical bill, spending a lot of time on paperwork or disputes related to medical bills, or having their insurer deny payment or pay less than expected for a claim.

Men in the U.S. were the most likely to report having at least one of these medical bill problems, with nearly half reporting at least one problem. By comparison, only 7 percent of men in the U.K. said they had a medical bill problem.

Men in the U.S. who had frequent income-related stress were significantly more likely to report having a medical bill problem compared to those who rarely experienced this kind of stress. There was no significant difference by income level. This may reflect the fact that lower-income men are less likely to use health care and therefore less likely to incur health costs and medical bills, while higher-income men are at risk of incurring medical debt when they seek care because of poor insurance protection.

Conclusion

Our analysis dramatizes the failings of the U.S. health care system with respect to men and complements our recent analysis of health and health care for women of reproductive age. While doing relatively well in prostate cancer care and treatment, the United States compares poorly to most other high-income nations when it comes to receipt of preventive care and affordability of care. And on nearly every health care measure we studied, men in the U.S. with income insecurity fared the worst. As a result, American men, particularly those with lower incomes and financial stress, have the poorest health outcomes.

Expanding access to affordable, comprehensive health coverage would be a first step toward reducing these disparities. Improvements also will require the combined efforts of physicians, health systems, insurers, and communities to promote preventive care and healthy behaviors, including through targeted education and outreach. For their part, American men must become more proactive about their health, and that should include establishing regular connections with a health care provider. After all, “going it alone” is rarely the path to well-being.

Reginald D. Williams II is the vice president of International Health Policy and Practice Innovations at The Commonwealth Fund.

Evan D. Gumas is a research associate in International Health Policy and Practice Innovations at The Commonwealth Fund.

Munira Z. Gunja is a senior researcher in the International Program in Health Policy and Practice Innovations at The Commonwealth Fund.

A version of this article was posted at The Commonwealth Fund and is used here with permission. Check out The Commonwealth Fund on Twitter @commonwealthfnd

Viewpoint: ‘Science doesn’t work through ad hominem attacks’ — UC Davis’ Alison Van Eenennaam challenges NY Times’ unsupported exposé of fellow scientist researching ways to reduce carbon footprint of cattle industry

Look, I get it. The New York Times (NYT) does not like GMOs, industrial agriculture, factory farming or meat consumption. But I question the decision of such an influential media source to feature TWO front-page articles detailing agriculture industry funding of agricultural scientists who do public outreach in these fields, Prof. Kevin Folta in 2015 and Prof. Frank Mitloehner in 2022, with the implication that they are “on the take” and promoting misinformation as a result of this funding. What the NYT failed to show in these stories was that either of these public sector scientists, whose reputations the NYT has forever brought into question, ever made statements that were unsupported by peer-reviewed literature. There is a term for when someone attacks the character, motive, or some other attribute of the person making an argument rather than the argument itself; it is called an “ad hominem” attack.

The most recent article featured one of my departmental colleagues at UC Davis, Professor Frank Mitloehner. The title of the article, “He’s an Outspoken Defender of Meat. Industry Funds His Research, Files Show,” was a little confusing, as it seemed to suggest that the source of Professor Mitloehner’s research funding was what was concerning. In fact, most of the article was about communication and outreach done by the Clarity and Leadership for Environmental Awareness and Research (CLEAR) Center at UC Davis, which receives almost all its funding from industry donations. That is something Professor Mitloehner has been very open about, as detailed in his blog post and on the CLEAR website.

Prof. Mitloehner provides his response to the NYT article and accompanying Greenpeace hit piece here. The irony of Greenpeace, which itself has shamelessly ignored the scientific consensus on the safety of GMOs and has engaged in ‘Tobacco-style’ PR on this topic for 30 years, accusing Prof. Mitloehner of being “a sock puppet” is projection at its finest.

The evidence that the NYT article provides to suggest that these communications are slanted is a statement by Matthew Hayek, an assistant professor in environmental studies at New York University: “Almost everything that I’ve seen from Prof. Mitloehner’s communications has downplayed every impact of livestock,” he said. “His communications are discordant from the scientific consensus, and the evidence that he has brought to bear against that consensus has not been, in my eyes, sufficient to challenge it.”

I am not sure exactly which specific scientific consensus Prof. Mitloehner’s communications are discordant from, but in presentations I have not heard Dr. Mitloehner make a statement that was not supported by peer-reviewed papers. I have heard him clearly state on multiple occasions that livestock are responsible for 14.5% of global emissions, and that cows and other ruminants account for 4% of US greenhouse gases (GHG) in concordance with the scientific consensus.


I am no GHG expert, but what really seems to be at the heart of people’s “beef” with Prof. Mitloehner’s communications is whether methane (CH4), as a short-lived GHG, should be treated differently from CO2, which is a long-lived GHG, in predictions of global warming impacts. This particularly affects ruminants, whose rumen-dwelling methanogenic microbes produce CH4 when breaking down otherwise indigestible cellulose. This is discussed on the CLEAR site.

This is actually NOT settled science. The commonly utilized ‘carbon footprint’ impact assessment makes use of the GWP100 metric, i.e., the global warming potential over a 100-year time horizon, while standardizing the atmospheric effects of all GHGs to CO2-equivalents (CO2-eq). Under GWP100, methane (CH4) is typically treated as 28 times more potent a greenhouse gas than CO2. GWP100 became the standard metric more than 30 years ago when it was selected for use in the Kyoto Protocol, and it has a long history of critiques in relation to characterizing the climate impacts of CH4 (Pierrehumbert, 2014). The authors of the Intergovernmental Panel on Climate Change Fifth Assessment Report themselves state that the GWP100 climate metric should not be considered to have any special significance.

Greenhouse gases by source
Credit: EPA

A newer metric, GWP*, was developed by researchers at the Environmental Change Institute, School of Geography and the Environment, University of Oxford (Allen et al., 2018; Cain et al., 2019). These scientists argue that this metric more aptly represents how CH4 emissions translate into temperature outcomes at various points in time, by treating CH4 as a flow gas rather than a stock gas like CO2. This metric was not developed by the livestock industry. The New York Times article includes a quote saying that the use of that method [GWP*] by an industry “as a way of justifying high current emissions is very inappropriate.”

However, this metric is not necessarily “livestock friendly” under all conditions (Costa et al., 2021). In fact, if livestock populations are increasing, the global warming impacts calculated using GWP* are higher than those calculated using the GWP100 metric. And Professor Mitloehner is not alone in suggesting that “expressing CH4 emissions as ‘CO2-equivalent’ emissions based on the GWP100 could misdirect attention from the need to reduce global net CO2 emissions to zero as quickly as possible” (Reisinger et al., 2021). Some have further argued that avoiding animal-sourced foods based on the GWP100 metric may trade a short-term climate benefit from reducing short-lived CH4 emissions for a longer-term problem of increased CO2 and N2O emissions, making climate stabilization even more difficult.
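To make the difference between the two metrics concrete, here is a minimal, illustrative sketch in Python. It is not the CLEAR Center’s or Oxford’s calculation; it simply uses the 28x GWP100 factor cited above and one published approximation of GWP* (the Cain et al., 2019 formulation), with hypothetical emission figures chosen only to show the direction of the effect.

```python
# Illustrative sketch: GWP100 vs. a GWP*-style accounting for methane (CH4).
# Assumptions: a GWP100 factor of 28 (as cited above) and the approximation
# from Cain et al. (2019): CO2-we(t) ~ GWP100 * (4.0*E(t) - 3.75*E(t-20)).
# Emission numbers below are hypothetical, for illustration only.

GWP100_CH4 = 28.0  # tonnes CO2-eq per tonne CH4 under the conventional metric


def co2eq_gwp100(e_ch4_now: float) -> float:
    """Conventional GWP100 accounting: every tonne of CH4 counts the same."""
    return GWP100_CH4 * e_ch4_now


def co2we_gwp_star(e_ch4_now: float, e_ch4_20yr_ago: float) -> float:
    """GWP*-style accounting: weights the change in the CH4 emission rate."""
    return GWP100_CH4 * (4.0 * e_ch4_now - 3.75 * e_ch4_20yr_ago)


if __name__ == "__main__":
    # A stable herd emitting 1.0 Mt CH4/yr for decades: GWP100 reports
    # 28 Mt CO2-eq/yr, while the GWP* approximation reports 7 Mt CO2-we/yr,
    # reflecting that a steady CH4 flow adds little additional warming.
    print(co2eq_gwp100(1.0), co2we_gwp_star(1.0, 1.0))  # 28.0 7.0

    # A growing herd (1.5 Mt CH4/yr now vs. 1.0 Mt twenty years ago):
    # the GWP* figure (63.0) now exceeds GWP100 (42.0), matching the point
    # above that the metric is not automatically "livestock friendly".
    print(co2eq_gwp100(1.5), co2we_gwp_star(1.5, 1.0))  # 42.0 63.0
```

In this sketch the disagreement is not about the chemistry of methane but about what a “CO2-equivalent” number is meant to represent: the emission itself (GWP100) or the warming contribution of a changing emission rate (GWP*).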

Even the most recent IPCC technical report (2021; doi:10.1017/9781009157896.002) includes discussion of the GWP* metric:

New emissions metric approaches, such as GWP* and Combined GTP (CGTP), relate changes in the emissions rate of short-lived greenhouse gases to equivalent cumulative emissions of CO2 (CO2-e). Global surface temperature response from aggregated emissions of short-lived greenhouse gases over time is determined by multiplying these cumulative CO2-e by TCRE (see Section TS.3.2.1). When GHGs are aggregated using standard metrics such as GWP or GTP, cumulative CO2-e emissions are not necessarily proportional to future global surface temperature outcomes (high confidence).

The best way to do environmental assessments of the livestock sector is not settled science. All too frequently, assessments are stated in simplistic terms, making use of a myopic selection of metrics and overlooking underlying heterogeneity and complexities. If the New York Times has problems with the way that Prof. Mitloehner and the CLEAR Center communicate these topics, then it should provide science-based arguments as to why the approach he is using is incorrect. It should not malign his reputation by implying he is putting out information that is “discordant from the scientific consensus” without elaborating on what exactly he said that was incorrect, or which specific consensus his information is discordant from. Nor should it discredit his work solely because of the industry funding he clearly discloses. Because that is how science works, not through “ad hominem” attacks.

Alison Van Eenennaam is an Extension Specialist in Animal Biotechnology and Genomics, Department of Animal Science, University of California, Davis. Follow her on Twitter @biobeef 

A version of this article was originally posted at the UC Davis BioBeef blog and is reposted here with permission. 

Podcast: Pollution makes you fat? India approves more GMOs; Biological ‘push notifications’

Air pollution harms our health in many ways; does it also encourage obesity? Farmers in India have access to two newly approved GM crops. Are regulators in the country finally beginning to embrace biotechnology? Wearable devices that track your blood sugar, sleep habits and macronutrient intake are being marketed as a solution to obesity. Will they live up to the hype?

Join geneticist Kevin Folta and GLP contributor Cameron English on episode 193 of Science Facts and Fallacies as they break down these latest news stories:

Air pollution is nasty stuff. It’s been linked to all sorts of deadly diseases, and research in recent years has suggested that exposure to air pollutants might amplify the risk of obesity. Do these studies stand up to scrutiny, or is the link between weight gain and smog a media-created health scare?

After two decades of foot dragging, regulators in India have approved two more genetically engineered crops, herbicide-tolerant cotton and mustard. Many farmers have been growing the enhanced cotton illegally for several years now because it helps them efficiently control weeds and therefore preserve their crop yields. Has the Indian government finally discarded its unnecessary fear of “GMOs”?


Will ‘biological push notifications’ from weight loss gadgets keep people on track with their diets?

Type 1 diabetics have worn continuous glucose monitors for many years to track their blood sugar. The same technology is now being deployed to help more consumers lose unwanted excess weight. These devices, which can be worn as rings or even embedded in workout clothing, track respiratory rate, blood oxygen, and other health metrics; one device even analyzes breath to determine which macronutrients the user is burning. But with little research to validate these “wearable technologies,” some experts wonder how effectively they’ll help people lead healthier lives.

Kevin M. Folta is a professor, keynote speaker and podcast host. Follow Professor Folta on Twitter @kevinfolta

Cameron J. English is the director of bio-sciences at the American Council on Science and Health. Visit his website and follow ACSH on Twitter @ACSHorg

The increasingly bushy human family tree and five other paradigm-altering changes in our understanding of human evolution

Scientific study of human evolution has historically reassured us of a comforting order to things, painting humans as cleverer, more intellectual and more caring than our ancestral predecessors.

From archaeological reconstructions of Neanderthals as stooped, hairy and brutish, to “cavemen” movies, our ancient ancestors got a bad press.

Over the last five years discoveries have upended this unbalanced view. In my recent book, Hidden Depths: The Origins of Human Connection, I argue that this matters for how we see ourselves today and so how we imagine our futures, as much as for our understanding of our past.

Six revelations stand out.

1. There are more human species than we ever imagined

The 21 known species of human including Neanderthal
Credit: S.V. Medaris

Species such as Homo longi have only been identified as recently as 2018. There are now 21 known species of human.

In the last few years we have realised that our Homo sapiens ancestors may have met as many as eight of these different types of human, from robust and stocky species including Neanderthals and their close relatives Denisovans, to the short (less than 5ft tall) and small-brained humans such as Homo naledi.

But Homo sapiens weren’t the inevitable evolutionary destination, nor do they fit into any simple linear progression or ladder of progress. Homo naledi‘s brain may have been smaller than that of a chimpanzee, but there is evidence they were culturally complex and mourned their dead.

Neanderthals created symbolic art but they weren’t the same as us. Neanderthals had many different biological adaptations, which may have included hibernation.

2. Hybrid humans are part of our history

Hybrid species of human, once seen by experts as science fiction, may have played a key role in our evolution. Evidence of the importance of hybrids comes from genetics. The trail is not only in the DNA of our own species (which often includes important genes inherited from Neanderthals) but also skeletons of hybrids.

One example is “Denny,” a girl with a Neanderthal mother and Denisovan father. Her bones were found in a cave in Siberia.

Denny, the girl with a Neanderthal mother and Denisovan father
Credit: John Bavaro

3. We got lucky

Our evolutionary past is messier than scientists used to think. Have you ever been troubled with backache? Or stared jealously after your dog as it lolloped across an uneven landscape?

That should have been enough to show you we are far from perfectly adapted. We have known for some time that evolution cobbles together solutions in response to an ecosystem that may already have changed. However, many of the changes in our human evolutionary lineage may be the result of chance.


For example, isolated populations may have a characteristic, such as some aspect of their appearance, that makes little difference to their survival, and yet this form continues to change in descendants. Features of Neanderthals’ faces (such as their pronounced brows) or bodies (including large rib cages) might have resulted simply from genetic drift.

Epigenetics, where genes are only activated in specific environments, complicates things too. Genes might predispose someone to depression or schizophrenia, for example, yet they may only develop the condition if it is triggered by things that happen to them.

4. Our fate is intertwined with nature

We may like to imagine ourselves as masters of the environment. But it is increasingly clear ecological changes moulded us.

The origins of our own species coincided with major shifts in climate; it was at these points in time that we became more distinct from other species. All other species of human seem to have died out as a result of climate change.

Three major human species, Homo erectus, Homo heidelbergensis, and Homo neanderthalensis, died out alongside major shifts in climate such as the Adams event, a temporary breakdown of Earth’s magnetic field 42,000 years ago that coincided with the extinction of the Neanderthals.

5. Kindness is an evolutionary advantage

Research has uncovered new reasons to feel hopeful about future human societies. Scientists used to believe the violent parts of human nature gave us a leg up the evolution ladder.

But evidence has emerged of the caring side of human nature and its contribution to our success. Ancient skeletons show remarkable signs of survival from illness and injuries, which would have been difficult if not impossible without help.

The trail of human compassion extends back one and a half million years. Scientists have traced medical knowledge to at least the time of the Neanderthals.

Altruism has many important survival benefits. It enabled older community members to pass on important knowledge. And medical care kept skilled hunters alive.

6. We’re a sensitive species

Evolution made us more emotionally exposed than we like to imagine. Like domestic dogs, with whom we share many genetic adaptations, we evolved greater tolerance for outsiders and sensitivity to social cues. But this human hypersociability has come with a price: emotional vulnerabilities.

We are more sensitive to how people around us feel and more vulnerable to social influences; we’re more prone to emotional disorders, to loneliness and to depression than our predecessors. Our complex feelings may not always be pleasant to live with, but they are part of key transformations that created large, connected communities. Our emotions are essential to human collaborations.

This is a far less reassuring view of our place in the world than the one we had even five years ago. But seeing ourselves as selfish, rational and entitled to a privileged place in nature hasn’t worked out well. Just read the latest reports about the state of our planet.

If we accept that humans are not a pinnacle of progress, then we cannot just wait for things to turn out right. Our past suggests that our future won’t get better unless we do something about it.

Penny Spikins is a Professor of the Archaeology of Human Origins at the University of York.

A version of this article appeared originally at The Conversation and is posted here with permission. Check out The Conversation on Twitter @ConversationUS
