[TL;DR: If a direct role for values is illegitimate in science, it is also illegitimate in any ethical or practical reasoning about what to do in particular cases, or in any evaluation of the rightness or goodness of actions. The direct/indirect role distinction does not distinguish science from action.]
Those who defend or presume the value-ladenness of science are obligated to provide a response to what I call “the problem of wishful thinking,” viz., the epistemic problem of how to prevent value-laden science from leading us to believe whatever we wish and to conclude that the world is the way we wish it to be, thereby destroying the integrity and reliability of science.
One way of dealing with the problem of wishful thinking has been to restrict the type of values allowed to play a role in science to epistemic values. This is not a move most proponents of value-laden science will accept, as they are precisely concerned with the legitimacy of non-epistemic values in science. And if the “epistemic” values include such familiar values as the scope or simplicity of a theory, the restriction is also insufficient to avoid the problem of wishful thinking: it may lead us to conclude that the world is simple or covered by a relatively small number of laws without any evidence to that effect.[^1]
Another important attempt to deal with the problem of wishful thinking is Heather Douglas’s introduction of the direct/indirect role distinction, together with a prohibition on values playing the direct role in the internal processes of science. Here is how Douglas defines the direct and indirect roles:
> In the first direct role, the values act much the same way as evidence normally does, providing warrant or reasons to accept a claim. In the second, indirect role, the values do not compete with or supplant evidence, but rather determine the importance of the inductive gaps left by the evidence. More evidence usually makes the values less important in this indirect role, as uncertainty reduces. (Douglas 2009, 96)
The direct role is permissible in certain, relatively “external” decisions in science. For example, we may appeal directly to ethical or social values to defend the decision to pursue some research project over others: the decision to research improved treatments for malaria rather than improved treatments for male pattern baldness might be directly justified on the grounds that the former better realizes justice or alleviates more suffering than the latter. Likewise, restrictions on research methods involving human subjects, such as the requirement of informed consent and the prohibition of unnecessary harm, should be directly justified by appeal to values such as respect for persons and non-maleficence.
The direct role, according to Douglas, is impermissible in internal decisions such as how to characterize data and whether or not to accept a hypothesis based on the evidence. Here, values may indirectly influence the standards of evidence, the amount or strength of evidence we require to accept or reject, but cannot tell directly for or against the hypothesis.
So, on Douglas’s account, there is a distinction to be made between practical decision-making that is directly grounded by values, and scientific inference that is directly grounded by evidence and only indirectly warranted by values. Some philosophers have questioned the clarity of this account (e.g., Elliott 2011), or its appropriateness to the epistemic tasks of scientific inference (Mitchell 2004), but that will not be my tack here. I want to start by questioning Douglas’s account of practical reasoning. I believe that the problem of wishful thinking is as much a problem for practical reasoning as for scientific inference, and that the “direct” role for values is as unacceptable in ethical decision-making as it is in scientific inference. If I’m right about this, then Douglas’s account of the structure of values in science needs to be revised, and the indirect/direct role distinction is inadequate for distinguishing between science and action or science and ethical decision-making.
Consider some very simple cases of practical decision-making.
- SUIT: Suppose I am out to buy a suit, and I value both affordability and quality in making such a purchase. It would be wishful thinking to assume that any suit I buy will promote these values. In order to make a decision about which suit to buy, I need to gather evidence about the available suits on which to base my decision. My values tell me what kind of evidence is relevant. But they cannot act as reasons for or against any choice of suit directly.
- HIRING: Suppose I am trying to decide whom to hire among a number of job candidates. On the one hand, I pragmatically value hiring the person with the best skills and qualifications. On the other hand, I have an ethical/political obligation to uphold fairness and foster diversity. Neither the pragmatic nor the ethical values tell directly for or against choosing any candidate. I need to examine evidence about the particular candidates in order to know their qualifications. I also need to know about the theories and results of implicit bias research to know what kinds of evidence to downplay or to keep myself unaware of while making the decision.
- HONESTY: Suppose I am a Kantian about lying – it is never permissible. Still, this value does not dictate on its own which speech-acts I should make or refrain from making in any particular case. I must at least examine what I know or believe to be true. It would be wishful thinking to assume that anything I was inclined to say was honest, absent information about whether or not I believed it to be the case. Perhaps I even need to examine my evidence for p before I can confidently assert that p in order to uphold this value.
- METHODS: Suppose I am on the IRB at my university. In order to responsibly assess the permissibility of a particular research protocol, I cannot rely directly on the principles of respect, beneficence, non-maleficence, and justice to decide. Instead, I must carefully read the research protocol and understand what it in fact proposes to do, and I must anticipate the possible consequences of the protocol, before I can evaluate the protocol and its consequences.
So, values in these cases do not act directly as reasons for or against a decision. I take it that this is in conflict with Douglas’s implied account of practical reason in *Science, Policy, and the Value-Free Ideal* (2009). If there is any realm in which values themselves act as grounds in inferences, it may be in pure normative theorizing, the kind that ethicists do when they’re doing “normative ethical theory” or political philosophers do when they’re doing “ideal theory.” Even there, values can only serve as direct grounds for claims about other values (if they can do even that), not for claims about actions. But these are not the kinds of activities that Douglas points at as the “direct” use of values. Indeed, METHODS is just the sort of case that she uses to explain the direct role.
Values in these cases are acting indirectly, connecting evidence to claims or conclusions (in particular, conclusions about how to act). Is this the same sort of indirect role that Douglas recommends for values in science? We might think so. Just as the value of affordability tells us to look for evidence about prices in SUIT, the relative weight we place on the value of safety tells us to look for a certain kind and weight of evidence when doing a risk assessment for the toxicity of a chemical.
Douglas could revise her account to insist that scientific inference be more indirect than the cases I’ve discussed here. While the absolute value I place on not lying in HONESTY cannot directly tell me what to say in any particular situation, it does tell me what sort of evidence I need (viz., that I believe the proposition) to justify the decision to speak. Douglas could insist that in scientific cases, not only is it illegitimate for values to directly justify, e.g., accepting or rejecting a hypothesis, they also cannot directly tell us whether some evidence supports the hypothesis or is sufficient for acceptance. Rather, the only thing that can legitimately make the connection between evidence and hypothesis is something like a methodological rule, e.g., the rule that the evidence must meet a statistical significance level of p = 0.01 to be sufficient for acceptance. Then the only permissible role for values would be the even more indirect role of supporting such methodological rules. On this picture, the ground for the conclusion (say, accepting the hypothesis) is the data itself. The data warrants the conclusion because it meets the methodological criterion (p = 0.01). That criterion is appropriate in this case because of our values (the weight we give to the consequences of error).
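To make the structure of this “even more indirect” picture concrete, here is a minimal sketch of my own (not Douglas’s formalism; the function names and numbers are made up for illustration). Values enter only in fixing the methodological rule, i.e., the significance threshold, while whether the hypothesis is accepted under that rule depends on the data alone.

```python
import math
import statistics

def choose_significance_level(error_consequences_severe: bool) -> float:
    """Illustrative indirect role for values: the graver the consequences of
    wrongly accepting the hypothesis, the stricter the threshold we demand."""
    return 0.01 if error_consequences_severe else 0.05

def p_value_one_sample(sample, baseline):
    """Two-sided p-value for a one-sample z-test against a baseline mean
    (normal approximation; for illustration only)."""
    z = (statistics.fmean(sample) - baseline) / (statistics.stdev(sample) / math.sqrt(len(sample)))
    return math.erfc(abs(z) / math.sqrt(2))

# Values fix the methodological rule ...
alpha = choose_significance_level(error_consequences_severe=True)  # -> 0.01

# ... but only the evidence, judged against that rule, grounds the conclusion.
measurements = [1.6, 1.4, 1.7, 1.5, 1.8, 1.6, 1.5]  # hypothetical toxicity data
accept_hypothesis = p_value_one_sample(measurements, baseline=1.0) < alpha
print(accept_hypothesis)  # the verdict comes from the data under the rule, not from the values
```

The point of the sketch is just the division of labor: a different value judgment can make the standard more or less demanding, but it never counts as evidence for or against the hypothesis itself.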
This might or might not be a reasonable way to go. The original prohibition on the direct role was justified by the need to resolve the problem of wishful thinking, and I can see why (and have assumed that) this argument is compelling. But I cannot see that the revised, more restrictive version is needed to resolve the problem of wishful thinking, and so I am not sure why the additional prohibition it involves would be warranted.
In “Values in Science beyond Underdetermination and Inductive Risk,” I argued that the main lines of argument in the science and values literature go awry in making bad assumptions about the nature of values and ethical/practical reasoning. I think this point is of a piece with that argument. I’m interested to hear what folks think about it!
[^1]: This is, in my opinion, one of the most brilliant moves in Douglas (2009).