Viewpoint: Innovation vs. extreme precaution — What should drive science regulation and policy in Europe?

Credit: Thomas Quine via CC BY 2.0
People like me often claim we need science-based policy. Regulations should follow the best available evidence, and European agencies have been established to provide such information to policymakers. But do we then mean that scientific evidence is the only evidence policymakers should consider? Are there other types of evidence that can be equally valuable in the decision-making process? And how should they be weighted?

When two pillars of the Brussels public affairs establishment write a paper on EU better regulation and evidence-based policy for the College of Europe, anyone involved in the European policy arena should pay attention. When their paper challenges many assumptions in the EU policy process, only a fool would ignore it. The provocative theme drew me in, so I took the opportunity to write this review and, I hope, to build on the dialogue their paper has started.

Vicky Marissen and Daniel Guéguen

It is no exaggeration to say that Daniel Guéguen and Vicky Marissen are icons in the Brussels policy arena. Daniel dominated the agricultural debates in the formative years of the European Union's ag-tech policies, heading two key ag trade associations in the 80s and 90s. Vicky has blended academic excellence with over twenty years of lobbying experience, becoming a stalwart of EU policy training courses and event moderation. Years ago, when I taught courses on EU lobbying, I used one of Daniel's 15 books as my course textbook. So I took great interest in reading their research paper, Science-based and evidence-based policy-making in the European Union: coexisting or conflicting concepts?

The paper puts forward a challenge: Is science-based policy synonymous with evidence-based policy, and if not, what does this mean for the European regulatory process? Are the two compatible? We want evidence-based policy, but most in the risk world assume this means that policies should rely on the best available scientific research and data. But what if there are types of evidence of value beyond scientific evidence? How are they, and how should they be, weighted in the EU decision-making process? How can we ensure that EU policies use the best evidence and the best science?

The authors based their paper on research from several case studies, interviews with key EU officials and experts, and over 60 years of their combined experience at the heart of the European policy arena.


Defining science and evidence

The two terms, science and evidence, are themselves fluid, and Guéguen and Marissen start their paper with several definitions. While science can be defined by its method, it also relies on evidence, which can suffer from bias in how it is collected or interpreted. Evidence, then (its quality, its source and how it is gathered), should be our key focus. As seen in the European Commission's development of its Green Deal strategy, the political process exploits these variations in degrees of evidence to expand its interests and influence.

Within this context, the paper considers how regulatory risk assessments are conducted by the European agencies. The agencies are not actually "doing science" but rather assessing scientific research, placing them somewhere between science and governance. Their assessments are done to answer a question (preferably in a short "Yes" or "No" form) rather than to validate a theory or engage in a process of discovery.

Is there room for 'maybe' in interpreting scientific analysis for policy? Credit: Mohamed Hassan via CC

I would go a step further and note the many differing fields of science that feed into these risk assessments. Chemists and biologists take different approaches to determining and valuing toxic exposures: a chemist will prioritise dose levels along Paracelsian lines, while a biologist considers more complex relationships. During the COVID-19 pandemic, it was clear that virologists and epidemiologists were not singing from the same song-sheet on how strict the lockdowns should be. These differences of approach led to different advice on how precautionary our policy responses should be.

Forms of evidence

Guéguen and Marissen work to establish how policy decisions often bring in different forms (types) of evidence. This was a wake-up call about my own bias: my world, centred on risk communications, focuses on determining the best available scientific evidence (based on strong research, well-replicated data and rationally founded paradigms), so I would tend to ignore other types of evidence that my echo-chamber would consider "less rational". Some would call that scientism; I was trained to consider it common sense.

Source: Daniel Guéguen and Vicky Marissen

What forms of evidence are there? The paper quotes the EU's 2002 General Food Law, under which evidence also encompasses "societal, economic, traditional, ethical and environmental factors". The authors, rightly, do not pit scientific evidence against other forms; all must be considered within regulatory impact assessments. But we do need to be aware when our decision-making process may be relying on other, non-scientific elements.

In the mid-2000s, I was the rapporteur for a EURAB policy advice report that looked at different "cognitive frameworks" or forms of knowledge (similar to what Guéguen and Marissen refer to as types or forms of evidence). The EURAB paper discussed how patient groups could bring different experience-based contributions to the medical research process, how farmers could "know" things in a practical manner that agronomists did not, or how midwives gathered knowledge through practice. Two decades later, what I had referred to as forms of knowledge fits easily into what this paper calls forms of evidence.

The authors outline the challenges facing science-based policy advice and admit that perhaps the ambitions are set too high.

… when we speak of ‘science’ in an EU context, the discussion is really about how the results of scientific research are interpreted and applied by the various actors involved in policy-making, and by society at large. A key challenge, therefore, is to ensure that science is used as objectively as possible, rather than dominated by the vagaries of public opinion, ideological conflict, and other subjective considerations. Another challenge is how to reconcile scientific evidence with other forms of evidence, and with the various political, societal and other non-scientific factors that inform the process of governance.

In other words, we need to weigh scientific evidence against other forms of evidence and other influential factors within the decision-making process. This opens up a whole range of questions. What factors would be sufficient to give other types of evidence value in the decision-making process?

We have just gone through a generation-defining experience with the COVID-19 pandemic. It is a good example of how political, societal and other non-scientific factors informed the process of governance. In the early days of the pandemic, when scientific evidence was not yet validated and past experience offered little guidance, public discourse was raw, intense and value-based. Governments, no longer versed in basic risk management skills, quickly nominated scientists to show up at press conferences to share the available data and best advice. These individuals quickly became media stars, respected and reviled in equal measure (depending on which form of evidence was considered a priority).

When COVID-19 infection rates were straining medical infrastructure capacity, other factors (economic, social, moral and political) started to weigh on the decision-making process. Sweden, for example, was criticised for not following the "conventional wisdom" that lockdowns were the only means to control the virus (though Swedish legal restrictions limited such decisions). Mask-wearing in the US became a political decision as basic individual freedoms were pitted against collective values. The views of epidemiologists were ignored by those unwilling, morally, to accept that some vulnerable populations would suffer more. As mentioned above, it did not help that, during this crisis, different types of scientists (like virologists and epidemiologists) offered policymakers different advice based on different forms of evidence. The COVID-19 policy soup was more like a ratatouille as different actors cherry-picked whichever evidence aligned with their values.

A German cafe during a COVID lockdown. Credit: Herzi Pinki via CC

Failure to respect best evidence

The paper considers several case studies, from the European Commission ignoring its research agencies' advice on glyphosate, to the attempts to regulate GMOs, to the political pronouncements underlying the EU's Green Deal strategy.

The glyphosate and GMO cases exposed significant flaws in the EU risk management process. Despite clear expert opinions from EFSA and ECHA, the paper notes how the Commission, the European Parliament and Member State governments undermined the authority of these scientific agencies by allowing equal (if not greater) weight to be given to public opinion throughout the process. One wonders whether these governments were simply pandering to the loud, green protests, and how they determined the weight of public opinion (compared to the voices of farmers and consumers).

And this raises a good question. If a pesticide or a GMO has sufficient scientific evidence to meet a regulatory risk assessment's requirements, but there is a general public aversion towards GMOs and pesticides (and a distrust of the corporations that make them), does that constitute sufficient social, political or ethical evidence to ban the substance or product? What about economic or competition factors? What about the ethical implications of straining food security in poorer, food-import-dependent countries because of Europe's attachment to "naturally-produced" food?

Guéguen and Marissen, in comparing the innovation principle with the precautionary principle, feel that regulators need to do a better job of communicating the benefits of such technologies to their populations. Today I question which political leaders would have the spine and courage for such a task.

The problem, though, goes deeper. Prior to the European Commission's 2001 White Paper on Governance, the "men in white coats" making the "backroom decisions" would not have cared what the public "felt" compared to the best available scientific evidence. But with the 2001 shift toward a more participatory, public-engagement approach, other forms of evidence have been given value in the decision-making process (regardless of the fact that these loud voices came from a small part of "the public"). The paper notes how, over the last few decades, there has been a steady trend away from the risk management approach and toward the precautionary, hazard-based approach.

Science advice systems

The paper surveys the different types of European scientific agencies and advice mechanisms. It is a good overview of where scientific evidence is managed, and it sets up the argument that there is some potentially problematic overlap.

Or you could just ask a trusted expert

One element the paper does not acknowledge, however, is the role of the short-lived position of EU Chief Scientific Adviser, a post dissolved following a coordinated activist NGO campaign. Anne Glover (see cover photo) would remind people that they were entitled to their own opinions on, say, GMOs, but not their own facts (and for this she suffered an unfair, sustained attack). The authors did look at how the SAM (Scientific Advice Mechanism) has performed, with advice on how to expand its role.

But the SAM was a compromise to justify removing the presence of a scientist in the room to advise the president of the European Commission and the cabinet on complex technological matters. As a board called upon in an ad hoc manner (with delays before its deliberations are published), the SAM is hardly a viable alternative to the vital role Anne Glover provided during her posting in Brussels. The scientifically illiterate pronouncements underlying the Green Deal policies clearly indicate that the SAM was a poor compromise with limited effect.

Guéguen and Marissen were critical of how the European Commission conducts its consultations and impact assessments (the tools for drawing other forms of evidence into the policy process). The authors call on the Commission services and the Secretariat-General to better scrutinise how the Commission conducts its impact assessments. Indeed, returning to the Green Deal debacle, there have been numerous consultations on the Farm2Fork strategy, yet throughout the process not one element of the European Commission's aspirational targets has been reconsidered or amended. A chief scientific adviser would have brought that issue to the table.

Technology companies certainly have their lobbying power. Would a science lobby be more objective? Credit: Jack Taylor

Conclusions and recommendations

The paper concludes with several recommendations to improve the role of evidence in the EU policy process.

  • A White Paper on Science and European legislation
    The authors consider this a starting point that could bring some clarity on how to simplify and harmonise the rules. I would not downplay the impact this could have. For the moment there is no clear guidance on how precaution should be applied to complex policy issues (hence it is abused). I cannot emphasise the need for this enough. For years I have been calling for an EU White Paper on Risk Management to articulate the process for the use of precaution.
  • The creation of an administrative code
    With the Commission's siloed structure and its ad hoc approach to different policy issues, there is no clear code of practice for how evidence should be considered. While I agree that there need to be some fixed procedural rules for how something like the precautionary principle or the hazard-based approach should be used, the complexity of many issues across a diversity of Member States would make this a derogational minefield.
  • Reinforcing the role of the Scientific Advice Mechanism (SAM) and the Joint Research Centre (JRC)
    The Commission has sufficient access to good expertise and evidence, but it needs to respect it. The JRC already acts as an arbiter, often being quite critical of the level of scientific evidence behind certain Commission policies (like Farm2Fork or the ban on neonicotinoids). One Commission official admitted to me, on neonics, that they had asked the JRC for a study (and not a report); the advantage of a study is that if the conclusions are unfavourable, the JRC can simply be told to go back and study it some more. For myself, I have lost faith in how public authorities use scientific evidence, and the Commission's Scientific Advice Mechanism is a good example of creating a complex organisation where problematic issues can gather dust rather than the best available evidence being used to make politically hard decisions. The Chief Scientific Adviser to the President of the European Commission was far more effective.
  • Harmonising the functioning of ECHA, EMA and EFSA
    The authors add that the Commission needs to respect, follow and communicate the conclusions of its agencies. There is clearly an overlap of functions, but harmonising the agencies would be quite complicated. Each agency or authority has evolved out of particular events and regulatory processes, and their different approaches reflect that. Maybe that is a clear case for root-and-branch reform of the scientific agencies, but looking back at their histories, it would take a brave leader to open that box. Once again, I feel that reforming the process from within the European institutions will bear little fruit.
  • Better Regulation: improving impact assessments
    This is important, but as Guéguen and Marissen admit, there is enormous work to be done. When I was involved in the REACH regulatory process, impact assessments were coming out on a weekly basis, often using different methodologies to reach different conclusions. A few years later, the Sustainable Use Directive rapporteur ignored the impact assessments altogether. I feel an impact assessment should be a tool to open up policy discourse, not a means of post-policy legitimation.

I agree wholeheartedly with the authors of this paper that there needs to be a pro-science culture in the European Commission but, as they also admit, things seem to be going in the other direction. Since Churchill, the view of policymakers has been that "scientists should be kept on tap, but not on top", so I don't think reforming the agencies and panels already in place will create that culture. Taps only work when you (want to) use them.

Scientific evidence needs to be central to policy decisions, and there needs to be an organisation or a strong public figure prepared to stand up and ensure that facts are not ignored in the process. Can this leadership come from within the institutions? EFSA director Bernhard Url has long been the strongest internal voice for science (recall his "Facebook science" speech), but what happens if the Commission chooses to ignore his advice or forces him to comply? Watching Url struggle to explain in the European Parliament how the corrupted Bee Guidance Document was imposed on EFSA was a painful experience.

There is clearly a hole in the Brussels policy process that needs to be filled. While Guéguen and Marissen's paper was being researched, I submitted an article with my own idea: some sort of science watchdog organisation to ensure that scientific evidence stays at the forefront of policy discussions. I argue that it needs to be an external body that is not afraid to get involved in the messy, noisy process (and I have so far had some interesting feedback). I am presently talking with interested parties and will hold a workshop in the spring to consider the best approach, so I deeply welcomed this paper as a key step forward.

Their paper should be considered an outline for a book. I found myself, in my head, building on each point, arguing that some claims did not go far enough and questioning the direction the European Commission has taken in the last two decades. Guéguen and Marissen have provoked deep thinking (I had to edit out four paragraphs where my thoughts wandered; those paragraphs are now the basis for a further Risk-Monger article). They have done an excellent job of highlighting the serious issues that need to be addressed on science, evidence and EU policymaking, and their paper is a valuable step in a long-needed debate that should have a profound influence on the future European policy process. For that I commend the authors.

David Zaruk has been an EU risk and science communications specialist since 2000, active in EU policy events from REACH and SCALE to the Pesticides Directive, from Science in Society questions to the use of the Precautionary Principle. Follow him on Twitter @zaruk

A version of this article was originally posted at Risk Monger’s website and has been reposted here with permission.
