Managing value-laden judgements in regulatory science and risk assessment
Abstract
This paper argues that value-laden judgements play an important role in regulatory science and risk assessment. These judgements include choices about what topics to study; what questions to ask about those topics; how best to design studies to answer those questions; how to collect, analyse, and interpret data; and how to frame and communicate findings. Rather than defending a ‘value-free ideal’ for responding to these judgements, the paper calls for a ‘value-management ideal’ based on three principles: (1) value-laden judgements should be handled as transparently as possible; (2) these judgements should be made in ways that reflect social and ethical priorities; and (3) they should be made in a manner that is informed by engagement among interested and affected parties. Based on these principles, the paper suggests several strategies for moving forward to handle value-laden judgements in regulatory science and risk assessment in a responsible manner. First, decision makers should become more comfortable with scientific disagreement, finding ways to respect different positions on value-laden judgements and formulate policies despite inconclusive evidence. Second, those engaged in regulatory science should explore creative ways to clarify important judgements and communicate how they are being handled. Third, institutional processes for setting standards and guidelines for regulatory science and risk assessment should be scrutinised to ensure that they provide fair opportunities for all interested and affected parties to participate in and inform those processes.
1 Introduction
In recent decades, philosophers of science have reflected extensively on the roles that values can and should play in scientific research and risk assessment. The majority of these scholars have argued that it is unreasonable to expect scientists to avoid making value-laden judgements, particularly in policy-relevant areas of science. However, this conclusion raises important questions about how to manage these judgements responsibly. Section 2 highlights the value-laden nature of scientific research and the importance of pursuing creative approaches for handling values in science responsibly. Section 3 proposes three principles for managing value-laden judgements in science: (1) promoting transparency about these judgements; (2) striving to make value-laden judgements in ways that reflect social and ethical priorities; and (3) fostering engagement among interested and affected parties about important judgements. Based on these principles, Section 4 suggests several strategies for moving forward to handle value-laden judgements in regulatory science and risk assessment in a responsible manner.
2 Value-laden judgements in science and risk assessment
When scientists engage in research, they make numerous judgements that are not settled entirely by logic and evidence (Kuhn, 1977; McMullin, 1983; Longino, 1990). These judgements include choices about what topics to study; what questions to ask about those topics; how best to design studies to answer those questions; how to collect, analyse and interpret data; and how to frame and communicate their findings (Douglas, 2016; Elliott, 2017). I will refer to these judgements as ‘value laden’ when they have ethically or socially important consequences. This label is apt because scientists end up promoting particular values (e.g. public health, sustainability, jobs, or economic development) depending on how they make these judgements. For example, when scientists pursue agricultural research on high-yielding seeds rather than research on agroecological techniques, their decisions help advance the interests of large-scale farmers in high-income countries while potentially neglecting the interests of some small-scale farmers in lower income countries who seek less resource-intensive agricultural approaches (Lacey, 1999).
The practices associated with regulatory science and risk assessment tend to be particularly value laden because there are so many uncertainties that need to be addressed and so many choices that need to be made in the absence of decisive evidence (Hartley and Kokotovich, 2018). For example, scientists and risk assessors have to choose what biological endpoints to study and how long to examine them, what animal models to use, how to extrapolate results (e.g. from high doses to low doses, from animals to humans, from less sensitive individuals to more sensitive individuals), how to weigh findings from different studies, how to model exposures and how to characterise the overall level of risk (Silbergeld, 1991; NAS, 1996; NRC, 1996; Elliott, 2014; Kokotovich, 2014). It is clear that these choices can have important social consequences. For example, running a study with a particularly sensitive animal model might result in an assessment that overestimates risk, whereas running a study with a particularly insensitive animal model might result in underestimating risk. Similarly, choosing what safety factor (e.g. 10-fold, 100-fold, 1,000-fold) to use for extrapolating effects from animals to humans and from less sensitive individuals to more sensitive individuals obviously influences whether risk assessments are more supportive of public and environmental health or more supportive of economic development.
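To make the stakes of such a judgement concrete, consider a minimal sketch of the safety-factor arithmetic (the no-observed-adverse-effect level below is a hypothetical number chosen purely for illustration, not a value from any actual assessment):

```python
# Illustrative sketch only: how the choice of safety factor changes the
# reference dose derived from an animal study. The NOAEL here is a
# hypothetical number, not a value from any actual risk assessment.

noael = 10.0  # hypothetical no-observed-adverse-effect level, mg/kg/day

for safety_factor in (10, 100, 1000):
    # A larger factor yields a lower, more health-protective reference dose.
    reference_dose = noael / safety_factor
    print(f"{safety_factor:>4}-fold safety factor -> reference dose "
          f"{reference_dose} mg/kg/day")
```

The arithmetic is trivial, but that is exactly the point: nothing in the toxicological data dictates which factor to divide by, so the choice reflects how much weight is placed on protecting sensitive individuals relative to other priorities.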
While value-laden judgements are especially obvious in the practices of regulatory science and risk assessment, they permeate scientific research more broadly. For example, scientists are forced to make value-laden judgements when deciding how much evidence to demand to draw conclusions (Douglas, 2009; Elliott and Richards, 2017). Whenever scientists make inductive inferences, they run the risk of making either false-positive or false-negative errors; this is sometimes labelled ‘inductive risk’ (Douglas, 2000). When drawing conclusions that are likely to have social consequences, deciding whether to demand more evidence (and therefore running a greater risk of making false-negative errors) or less evidence (and therefore running a greater risk of making false-positive errors) is an important, value-laden judgement (Douglas, 2009).
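The trade-off at the heart of inductive risk can also be illustrated with a toy simulation (the effect size, sample size and thresholds below are arbitrary assumptions chosen for illustration, not a model of any particular regulatory test):

```python
# Toy simulation of inductive risk: demanding more evidence (a stricter
# threshold) lowers the false-positive rate but raises the false-negative
# rate. All parameters are arbitrary and purely illustrative.
import random
import statistics

def effect_detected(sample, threshold):
    """Crude one-sided test: is the sample mean 'far enough' above zero?"""
    n = len(sample)
    z = statistics.mean(sample) / (statistics.stdev(sample) / n ** 0.5)
    return z > threshold

random.seed(0)
trials, n = 2000, 20
for bar in (1.64, 2.33, 3.09):  # roughly 5%, 1% and 0.1% evidential bars
    # False positives: an effect is declared although none exists.
    fp = sum(effect_detected([random.gauss(0.0, 1.0) for _ in range(n)], bar)
             for _ in range(trials))
    # False negatives: a modest real effect (mean 0.5) goes undeclared.
    fn = sum(not effect_detected([random.gauss(0.5, 1.0) for _ in range(n)], bar)
             for _ in range(trials))
    print(f"threshold {bar}: false positives {fp/trials:.1%}, "
          f"false negatives {fn/trials:.1%}")
```

Raising the evidential bar shrinks one error rate only by inflating the other; deciding which balance is acceptable depends on the comparative social costs of the two kinds of error, which is precisely the value-laden judgement at issue.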
For a particularly vivid illustration of the role that values can play in addressing inductive risk, consider the influential testimony given to the US Congress by James Hansen in June 1988. He famously declared, ‘Global warming has reached a level such that we can ascribe with a high degree of confidence a cause and effect relationship between the greenhouse effect and observed warming. It is already happening now’ (Shabecoff, 1988). This was a very important claim because many scientists at that time were not comfortable affirming that the effects of climate change could already be observed. Alan Robock, a researcher at the University of Maryland, declared that ‘what bothers a lot of us is that we have a scientist telling Congress things we are reluctant to say ourselves’ (Kerr, 1989, p. 1043).
In this highly visible case, Hansen and his critics disagreed about how much evidence was needed to justify concluding that observed warming was caused by climate change. On the one hand, Hansen thought that the potential effects of climate change were so significant that it was justifiable to draw a conclusion; he said it was time to ‘stop waffling, and say that the evidence is pretty strong that the greenhouse effect is here’ (Weart, 2014). On the other hand, some of his critics insisted that it was important for researchers to uphold higher standards of evidence to maintain trust in their conclusions. For example, Danny Harvey of the University of Toronto said, ‘Jim Hansen has crawled out on a limb. A continuing warming over the next 10 years might not occur. If the warming didn't happen, policy decisions could be derailed’ (Kerr, 1989, p. 1043). Notably, even if the scientists involved in this dispute had not been consciously weighing the social consequences of their actions, their decisions would still have been value laden in the sense used here: they were making judgements that had significant social consequences but were not fully settled by the available evidence.
Decisions about how to frame, describe, characterise and define phenomena are also frequently value laden. Consider the case of endocrine disruption. From the early stages of research on this phenomenon, there have been conflicts about how to define and characterise it (Elliott, 2009). The authors of an early US National Academy of Sciences report preferred not to use the term ‘endocrine disruptor’ at all and instead referred to ‘hormonally active agents’ because the authors were concerned that ‘the term [endocrine disruptor] is fraught with emotional overtones and was tantamount to a prejudgement of potential outcomes’ (NRC, 1999, p. 21). Even when using the term ‘endocrine disruptor’, prominent agencies have disagreed about how to define it. For example, when the US Environmental Protection Agency (EPA) developed its Endocrine Disruptor Research Programme in the mid-1990s, it defined an endocrine disruptor as ‘any exogenous agent that interferes with the production, release, transport, metabolism, binding action, or elimination of natural hormones in the body…’ (Krimsky, 2000, p. 82, italics added). In contrast, the Organisation for Economic Cooperation and Development (OECD), the European Union and the World Health Organization (WHO) defined an endocrine disruptor as ‘any exogenous substance that causes adverse health effects … consequent to changes in endocrine function’ (Krimsky, 2000, p. 88, italics added).
These definitions are important because evidence can indicate that a substance interferes with the hormonal system without indicating that it causes adverse health effects. Moreover, developing criteria for deciding what counts as interference or as causing adverse effects is even more complicated. Decisions about how to characterise endocrine disruptors for the purposes of identifying and regulating them have given rise to heated disputes (see e.g. Elliott and Resnik, 2014; Solecki et al., 2017). So, the case of endocrine disruption illustrates how decisions about defining and characterising phenomena can be value laden in the sense that they are not settled by evidence yet can have very significant social consequences (Elliott, 2009).
The presence of value-laden judgements throughout scientific research raises important questions about how to address these judgements. Philosophers of science have recently written a good deal about this issue (e.g. Douglas, 2009, 2016; Wilholt, 2009; Kourany, 2010; Elliott, 2017; Elliott and Richards, 2017; de Melo-Martín and Intemann, 2018). Most scholars now reject the value-free ideal, which is the notion that scientists should avoid considering ethical and social values when deciding how to make these judgements (Douglas, 2009; Elliott, 2017). Especially given the pervasiveness of value-laden judgements in some domains, such as regulatory science and risk assessment, it is difficult to maintain that those working in these areas should simply ignore social consequences. To do so would seem irresponsible. For example, Douglas (2009) argues that scientists, like all individuals, have ethical responsibilities to take the foreseeable consequences of their actions into account. So, she concludes that scientists should consider the social consequences of their value-laden judgements when deciding how to make them. Nevertheless, this position raises a number of questions. To what extent should individual scientists make these decisions as opposed to deferring to standards created by the scientific community? Whose values should be considered when making these judgements? How can values be incorporated into making these judgements without sacrificing scientific integrity and objectivity? The next section proposes some basic principles of a ‘value-management ideal’ that can help to answer these questions.
3 A value-management ideal
Whereas the value-free ideal was supposed to protect the integrity of science by preventing ethical and social values from influencing scientists’ decision-making, the value-management ideal sketched here is designed instead to help scientists address values thoughtfully. In my book, A Tapestry of Values: An Introduction to Values in Science (Elliott, 2017), I argue for three principles that can guide scientists in handling values responsibly: transparency, representativeness and engagement. I do not consider these principles to be strictly necessary or sufficient for preserving scientific integrity; instead, they operate more like ‘rules of thumb’ (Elliott, 2018). In some situations, one principle may be more essential, whereas a different principle may be more important in other situations. In general, though, the integration of all three principles is important for handling values in science.
According to the principle of transparency, scientists should be as clear as possible about their ‘data, methods, models, and assumptions so that others can identify the ways in which their work supports or is influenced by particular values’ (Elliott, 2017, p. 14). Ideally, transparency allows others to understand how the results of a scientific analysis could have been different if important judgements were made differently. However, even when such a high level of transparency is not achieved, efforts to achieve some degree of transparency can at least help others to recognise that value-laden judgements have been made. So, the recipients of this information are warned that they could arrive at different conclusions if those judgements were made differently.
The principle of transparency accords well with the recent growth of the open science movement (see e.g. Royal Society, 2012; European Commission, 2014; Nosek et al., 2015; NAS, 2018). This movement encourages a number of practices designed to promote transparency in science: publishing in open-access journals (Else, 2018); making all the data, materials, and computer code associated with scientific studies available (Nosek et al., 2015; NAS, 2018); pre-registering studies so that the planned study design is known (Kupferschmidt, 2018); reporting the progress of studies in real time so that other scientists can provide input (Foster and Deardorff, 2017; NAS, 2018, p. 114); and promoting the systematic publication of all studies, whether their results are positive or negative (Chalmers et al., 2013). Nevertheless, while these practices are valuable for promoting transparency in a general sense, most of them promote transparency about value-laden judgements only in an indirect manner. For example, making all study data openly available does not directly provide information about value-laden judgements; instead, providing access to data makes it possible (at least in principle) for others to reanalyse the data and explore whether important value-laden judgements were made when analysing them originally. So, additional approaches to promoting transparency about value-laden judgements may be needed, such as collaborations between scientists and policymakers or journalists in an effort to make important judgements clearer to decision makers.
Although transparency is extremely important because it enables scientists and other stakeholders to recognise the potential for arriving at different conclusions, it also has limitations. One problem is that perfect transparency is impossible to achieve; it is impractical to think that scientists or risk assessors could disclose all their value-laden judgements. Another problem is that transparency is, at best, a partial solution: it warns that the results of a study might have been influenced by important judgements, but it does not always reveal how the results would have differed had those judgements been made differently. For example, if scientists were to disagree with the way data were analysed or interpreted in a particular study, they could potentially go back and perform a different analysis of the same data. However, if scientists were to disagree with the types of data that were collected in a study, they typically would have no recourse but to perform a new study. For this reason, it is important to perform studies as thoughtfully as possible the first time, making value-laden judgements in ways that serve broad social interests; this is the goal of representativeness.
The principle of representativeness is based on the notion that value-laden judgements should be made in a manner that represents major social and ethical priorities. As I put it in my book: ‘When clear, widely recognized ethical principles are available, they should be used to guide the values that influence science. When ethical principles are less settled, science should be influenced as much as possible by values that represent broad societal priorities’ (Elliott, 2017, pp. 14–15). So, for example, procedures for risk assessment in the USA and the European Union are typically designed to overestimate rather than underestimate the risks associated with industrial chemicals because protecting public health is taken to be a major social and ethical priority.
The difficulty with representativeness is that, in contemporary societies, there are typically diverse views about how best to balance and prioritise ethical principles. For example, while public health is undoubtedly an important value, economic development is also important. So, there are not only benefits but also costs associated with making value-laden judgements in ways that overestimate the toxicity of industrial chemicals. Deciding on the appropriate trade-off between these values is not an easy matter, and reasonable people can disagree about how to strike it. Fortunately, scientists and risk assessors do not always have to make these decisions themselves. Depending on the kind of judgement being made, some of these decisions (e.g. choices about what topics to study, what questions to ask, or how to apply scientific results to policy decisions) may be made largely by risk managers or policymakers. In addition, many choices about how to design and interpret the studies that inform risk assessments are specified by standardised guidelines provided by government agencies or institutions like the OECD. Nevertheless, this merely pushes the problem to a different context; even if many of the value-laden judgements associated with a particular research project are handled by others, disputes about how to handle these judgements still need to be addressed. This challenge highlights the need for engagement so that different stakeholder groups can discuss their priorities and deliberate about how best to make difficult scientific judgements.
Engagement consists of ‘efforts to interact with other people or institutions in order to exchange views, highlight problems, deliberate, and foster positive change’ (Elliott, 2017, p. 138). In my book, I argue that engagement can take a number of forms. It can involve collaborations between scientists and the public, such as the creation of community-based participatory research efforts in which the public can directly influence value-laden judgements in science (Epstein, 1996; Ottinger, 2010; Suryanarayanan et al., 2018). In other cases, it can involve efforts by social scientists to solicit information about public priorities on emerging scientific and technical issues like gene editing or nanotechnology (see e.g. Davies et al., 2009). It can also involve interdisciplinary research collaborations, in which scholars from different disciplinary fields or employment sectors work together to identify implicit assumptions that might otherwise go unnoticed (see e.g. Schienke et al., 2011; Hartley and Kokotovich, 2018). Importantly, it can also involve efforts to develop, implement and critique laws, regulatory requirements, standards or other institutional policies that steer value-laden judgements in science and risk assessment. For example, the OECD sets many of the standards that determine how regulatory studies and risk assessments are performed. Implementing a fair, transparent process for bringing different stakeholders together to set and critique these standards is crucial if value-laden judgements in regulatory science and risk assessment are to be handled responsibly (Wickson and Forsberg, 2015; Elliott, 2016).
Unfortunately, the outcomes of engagement efforts depend a great deal on who is included and how the procedures for engagement are structured (Kourany, 2018). If important stakeholders are excluded, if they lack adequate resources to defend their views effectively, or if their voices are not heard, engagement processes may not yield fair outcomes. In addition, many of the value-laden judgements made by scientists and risk assessors involve nitty-gritty technical details, and it is unrealistic to try to promote engagement about all these decisions (Winsberg, 2012). So, it would be foolhardy to depend solely on engagement to ensure that value-laden judgements are made in a responsible fashion. Some stakeholders are bound to be disappointed with the ways in which important value-laden judgements have been made, which brings us back to the first principle: transparency. Even if some stakeholders insist that important value-laden judgements have been made in a way that fails to represent social priorities or that fails to incorporate appropriate engagement, at least researchers can strive to make judgements explicit enough so that those who disagree with them can recognise the problem and pursue alternative studies or alternative analyses.
4 Strategies for handling value-laden judgements
If the arguments discussed in Section 2 and the principles proposed in Section 3 are compelling, they suggest several strategies for moving forward responsibly to handle values in regulatory science and risk assessment. First, if value-laden judgements are indeed ubiquitous in these areas of science, then scientists, policymakers and members of the public should become more comfortable with scientific disagreement (Wickson and Wynne, 2012; Elliott and Resnik, 2015). Consider, for example, the comments that Bernhard Url, Executive Director of the European Food Safety Authority (EFSA), offered in a piece reflecting on the differing assessments of glyphosate issued by EFSA and the International Agency for Research on Cancer (IARC). Url rightly noted, ‘That the agencies reached different conclusions is not surprising: each considered different bodies of scientific evidence and methodologies’ (Url, 2018).
The recognition that different scientists and organisations can reasonably arrive at differing conclusions as a result of making different value-laden judgements should alleviate some of the suspicion and rancour that frequently arise in these situations. As Url (2018) noted, it can be tempting to attack or dismiss opposing conclusions as the result of financially or ideologically motivated refusals to accept the available evidence. However, recognising that regulatory science and risk assessments are pervaded by value-laden judgements opens the door to interpreting different conclusions as the result of reasonable disagreements about how to handle these judgements. While these different approaches to judgements may indeed be subtly influenced by financial and ideological values (Elliott and Resnik, 2015), they are typically not the result of stubborn refusals to consider the evidence.
Another aspect to becoming more comfortable with scientific disagreement is to develop better strategies for incorporating contested science in policymaking. Policy expert Roger Pielke (2007) has argued that politicians, special interest groups and even scientists themselves are often tempted to treat science as if it were a straightforward, value-free source of information. It would be easier for all of them if they could maintain that science provides univocal answers that force specific policy responses. Taking scientific disagreement seriously means that policymakers, politicians and the public need to reflect on their values to decide how to act in response to ambiguous scientific information (Pielke, 2007; Sarewitz, 2007).
A second strategy for handling values responsibly is to pursue creative approaches for achieving greater clarity about the most important value-laden judgements being made and the ways in which they are handled. If differing approaches to these judgements are often responsible for scientific disagreements, then those disagreements could plausibly be ameliorated by making the judgements easier to scrutinise. As discussed in Section 3, the open science movement is promoting a number of initiatives designed to achieve greater transparency, but these initiatives are not always effective at clarifying important judgements, and they can be difficult to implement in the context of regulatory science and risk assessment. For example, one particularly effective way to help identify important value-laden judgements is to make all study data openly available for other researchers to reanalyse. Unfortunately, much of the science performed for regulatory purposes is funded by industry, and private companies often face strong incentives not to make the data underlying these studies publicly available. Nevertheless, some companies are taking steps to make more of their data available; Bayer's Transparency Initiative is one example of these efforts (https://www.cropscience-transparency.bayer.com/). The Transparency and Openness Promotion (TOP) guidelines provide a model towards which those working in regulatory science and risk assessment can strive (Nosek et al., 2015).
In addition to promoting data transparency, those working on regulatory science and risk assessment can take other steps to identify important value-laden judgements. For example, the Consortium Linking Academic and Regulatory Insights on BPA Toxicity (CLARITY-BPA) represents a creative effort to clarify the important judgements that may be contributing to disagreements about health risks associated with exposure to bisphenol A (BPA) (see e.g. Schug et al., 2013). Many academic researchers have found evidence that BPA could have harmful effects at low doses, whereas most industry-funded studies conducted for regulatory purposes have not generated similar concerns (Myers et al., 2009). By creating a collaboration between the US Food and Drug Administration (US FDA), the US National Toxicology Program (NTP), and a number of academic researchers funded by the National Institute of Environmental Health Sciences (NIEHS), the CLARITY-BPA consortium aimed to generate greater clarity about the underlying reasons for disagreement between academic and industry studies.
Interdisciplinary collaborations between natural scientists, social scientists, scholars from the humanities, and the public with other forms of expertise can also help uncover important value-laden judgements associated with regulatory science (Elliott, 2017). For example, the US National Academy of Sciences recently launched an Environmental Health Matters Initiative (EHMI) to ‘harness and mobilise cross-sector and transdisciplinary knowledge and strategies that take into account a holistic view of the factors at work in complex environmental health challenges and opportunities’ (http://nas-sites.org/envirohealthmatters/about/). By incorporating scholars and practitioners with many different forms of expertise, the initiative strives to identify important value-laden judgements associated with environmental health research (e.g. choices about what questions to ask or what interventions to pursue) that scholars working from individual disciplinary perspectives might not recognise.
When crucial value-laden judgements are known to researchers, they may be able to partner with other groups to communicate these judgements more effectively to policymakers and members of the public. Training programmes like those provided by the Alda Center for Communicating Science (https://www.aldacenter.org/) or the Leopold Leadership Program may help scientists develop strategies for communicating more effectively about their work (Schubert, 2018). A limitation of these programmes is that they tend to be more focused on helping researchers provide clear, engaging stories than on providing information about important value-laden judgements. Nevertheless, information about these judgements can often be added without significantly muddying the main story that scientists seek to communicate (McKaughan and Elliott, 2018). Science journalists may also be particularly well trained to help make important judgements clear for broader swaths of the public (Angler, 2017).
A third strategy for handling values responsibly is to scrutinise the standard-setting processes at organisations like the OECD that generate the guidelines used for regulatory studies and risk assessments. These guidelines encode values in regulatory science because they specify how a wide range of value-laden judgements are to be made (Hartley and Kokotovich, 2018). Ideally, the processes for creating these standards and guidelines should provide an opportunity for fruitful engagement among all interested and affected parties. This engagement can serve at least three purposes: (1) identifying ways in which particular standards and guidelines support some social values over others; (2) reflecting on which social values to prioritise when setting the standards and guidelines; and (3) working through disagreements about which social values to prioritise (Elliott, 2018). Unfortunately, the standard-setting processes employed by organisations like the OECD can sometimes be difficult for civil society organisations to penetrate (Wickson and Forsberg, 2015; Elliott, 2016). The result is that players with significant political and financial resources end up with an advantage in their efforts to influence these processes, thereby generating suboptimal forms of engagement. So, a priority should be to create fair opportunities for all interested and affected parties to participate in and inform these standard-setting processes.
5 Conclusions
This paper has argued that value-laden judgements play an important role in regulatory science and risk assessment. To address these judgements responsibly, the paper proposed three principles: (1) these judgements should be made as transparent as possible; (2) they should be made in ways that reflect social and ethical priorities; and (3) they should be made in a manner that is informed by engagement among interested and affected parties. Building on these principles, the paper suggested several strategies for moving forward to address value-laden judgements in a responsible manner. First, decision makers should become more comfortable with scientific disagreement, finding ways to respect different positions on value-laden judgements and to formulate policy despite inconclusive evidence. Second, those engaged in regulatory science should explore creative ways to clarify the important value judgements being made and the ways in which they are handled. Third, institutional processes for setting standards and guidelines for regulatory science and risk assessment should be scrutinised to ensure that they are as fair as possible, providing opportunities for all interested and affected parties to participate in and inform these processes.
References
Abbreviations
- BPA: bisphenol A
- CLARITY-BPA: Consortium Linking Academic and Regulatory Insights on BPA Toxicity
- EHMI: Environmental Health Matters Initiative
- EPA: US Environmental Protection Agency
- IARC: International Agency for Research on Cancer
- NIEHS: National Institute of Environmental Health Sciences
- NTP: US National Toxicology Program
- OECD: Organisation for Economic Cooperation and Development
- TOP: Transparency and Openness Promotion
- US FDA: US Food and Drug Administration
- WHO: World Health Organization