Noticeably absent from these groups is the voice of the people. This reflects a common intuition we have inherited from the Age of Enlightenment (and its counterpart, the Scientific Revolution): to consign the immense task of developing science policy to the scientists, to locate authority in individuals with certain backgrounds, experience and training. But how does excluding the people serve the stated goals of “transparency” and “trust”? Could it actually run counter to these stated missions?
In this article, I argue that trustworthiness and transparency are indeed necessary goals of governance, but that the proper amount of each is best cultivated through participation, not through the passive reception of information from elite groups or institutions of expertise. While transparency and trustworthiness are admirable goals, they exist in a sort of tension: We want people to actively participate in reshaping government and its systems to serve their needs, while also wanting them to sit back and let experts do the work of ruling for them.
Contrary to what many may fear, participation allows for the proper balancing of engagement with passivity: It lets individuals in to witness the workings of governance while airing grievances that might otherwise fester unchecked.
Following He Jiankui’s announcement in late November 2018 that he had edited the genes of two newborns to make them resistant to HIV, experts and authorities worldwide decried the move, claiming the procedure was far too risky for the benefits it aimed to achieve. They lamented his rogue action, which failed to comport with the perceived consensus of norms guarding against this kind of procedure. Thus, the WHO set up its commission to create an international governance framework, and a group of experts penned a commentary in Nature calling for a moratorium on using CRISPR to edit heritable genes until a more official code of ethical conduct could be created.
The fear is twofold: One is safety; the other is adverse public reaction, such as viewing genetic modification as creating “designer babies.” Hence both groups call for public deliberation in imagining consequences and creating standards, and hence transparency and trust are called for.
In many ways, democracies are founded on mistrust. Mistrust leads us to demand greater control over deciding which laws and systems should affect our lives, particularly in light of (for example) historic abuses by a monarch. We traditionally defend democracies on these rights-protecting grounds. At the same time, almost all critics of democracy, beginning with Plato, decry democracies for failing to achieve this task effectively or peacefully, or for merely creating mob rule.
In Federalist 10, James Madison defends a representative system as a remedy to some of these problems, arguing that representation is meant to “refine and enlarge the public views, by passing them through the medium of a chosen body of citizens, whose wisdom may best discern the true interest of their country…”. This “filtration of talent” endures today: We choose people to create laws for us because, the theory goes, we ourselves are deemed incapable of creating those very laws. From there, we find ourselves bolstering voting rights and contesting gerrymandering as the ultimate protections of democracy.
Transparency is therefore often valued because it allows the people to hold their representatives accountable to these necessary rights protections, to ensure that representatives are in fact doing their job, so that we know whether to keep them in office or vote them out at the next election. Trust is thus forged and lost in moments of transparency, when we find that the values of our representatives or elites do or do not align with ours. To trust is itself a passive condition, for we allow others to act on our behalf without always being in control. Transparency can thus decrease as trust increases: Once trust is present, the representative needs to reveal less and less in order to maintain power.
There are of course many good reasons to eagerly welcome gene editing into the world. In particular, it can help individuals who suffer from single-gene disorders, such as cystic fibrosis, hemophilia and sickle cell disease. It also holds great promise for gene therapies that could cure some cancers, heart disease, and neurodegenerative and muscular degenerative diseases. It is often invoked as a solution to various problems of climate change, such as by providing biofuels, saving species and protecting plants, not the least significant of which would be saving our chocolate supply. So while small groups of individuals fear the “designer baby” problem, most people support using gene editing to solve medical problems, worrying more specifically about the distribution of resources that could exacerbate health inequalities, and about premature use before we “fully understand effects on health.”
In keeping with the “filtration of talent,” the US has also developed a technocratic system to dictate technology regulation, what we now refer to as the “administrative state.” Beginning in the early twentieth century, this expert-ruled system of regulatory agencies has wielded significant power in regulating biotech, and regulations often stem from advisory committees and commissions as well. For example, in 2016, the National Academy of Sciences convened such a body of experts to examine gene drive technology (made possible by CRISPR). In their report, the committee declared a commitment to “public trust,” which relies on “transparent decision making,” and stated their eagerness to offer a sort of “opportunity to participate directly in governance.” It was clear, though, that this yearning for public trust in their findings existed in order to avoid backlashes such as those witnessed over GMOs and vaccine safety.
Similarly, the United Nations released a report emphasizing precaution, though not a moratorium, against gene drives, while also calling for “local communities and indigenous groups potentially affected by such a release [to] be consulted.”
This form of transparency and trustworthiness is meant to create the conditions for conformity, for a sort of passive allowance of government and its regulations to rule and act in our lives. This is necessarily in tension with what we think democracies should do: empower the average person to dictate how law controls their lives.
It also runs against the best arguments and evidence we have for how scientific progress and human values flourish in tandem. After all, the most productive approach to assuaging fears and mistrust is to listen to, respect, and carefully work with the parents involved. Further, public skepticism can actually fuel progress. John Stuart Mill defended the right to free speech not as an intrinsic right, but as a constant process of weeding out bad arguments and perfecting good ones, leading us to progress. Trust in absolute authority without a healthy dose of skepticism and dissent gives way to abuses: Democratic systems fail to check expert rule, and mass system-wide mistrust emerges that harms health and progress.
But this of course rubs a lot of people the wrong way: Could it really be the case that dissidents who hold fringe and somewhat radical views should be enveloped in the conversation? Why lend flat-earthers a microphone to air their unfounded grievances or perspectives? Do we really, after all, want to include climate change deniers at the table of climate change policy?
Choosing the most extreme examples is a losing battle, however, for throughout history we can point to countless moments when experts were wrong, and we can still point today to “experts” who hold “anti-science” views. Consider the HIV/AIDS epidemic: The popular theory initially held that it was a “gay disease,” but through activism and active participation, scientists developed a better-founded understanding of the virus, and we now have phenomenally effective treatments. The point here is not that citizens do better than scientists, but that the combination is ideal.
When it comes to the goals of transparency and trust, they are best realizable through these direct and diverse deliberative participatory forums. People feel heard, respected, and engaged in these moments. As Stephanie Solomon and Julia Abelson argue: “Policy issues that are well suited to public deliberation have one or more of the following characteristics: conflicting public values, high controversy, combined expert and real-world knowledge, and low trust in government. The deliberation process can help members of the public work through the complexity of the issues and build trust in the policy-making process.”
Naomi Scheinerman is a PhD candidate in Yale University’s Department of Political Science. Her work lies at the intersection of democratic theory, ethics, philosophy of science and epistemology, with special interest in re-imagining the authority of the science expert, the lay person and their governing institutions in regulating new and emerging technologies.