Heather Douglas (2000, 2009) has argued that inductive risk requires scientists to make value judgments in the “internal” processes of scientific reasoning, e.g., characterizing and interpreting data and judging whether the evidence supports a hypothesis, but that value judgments must be limited to an indirect role. There has been some controversy about just what the direct/indirect roles distinction amounts to (Elliott, Steele), but the basic idea is easy enough to understand: something plays a direct role in a decision if it acts as a reason for deciding one way or the other; it plays an indirect role if it instead helps determine second-order questions about the uptake of reasons, e.g., what counts as a reason, or how much weight the reasons must carry before one decides.
To use one of Douglas’s examples, in deciding how to characterize a slice of liver from a rat exposed to a particular chemical, the only permissible reasons for classifying it as cancerous or non-cancerous are visible features of the liver as observed under the microscope. These features should be compared to the established criteria, and judgment calls must be made where the application of the criteria is not straightforward. Values play a role at the further level of guiding those criteria and judgment calls. Insofar as determining criteria and judging borderline cases involve uncertainty, value judgments about the seriousness of misjudging a case should lead us to lean towards either more false positive or more false negative classifications of the livers as cancerous.
By contrast, a simple case of practical reasoning shows both evidence and value judgments playing direct roles in decision-making. Suppose I need to decide where to go to lunch. Various value judgments are directly relevant: I have to weigh the taste, the healthiness, the distance to and wait time at particular restaurants, etc. Facts obviously constrain my decision: the typical quality of the food, various health factors, the distance from my office, the average wait time, etc. It is a commonplace that both kinds of reasons are necessary to make such a decision. (N.B. The two kinds of reasons do not typically compete, either: I may prefer taste to healthiness, and so pick the burger joint, but that doesn’t make the double bacon cheeseburger healthy.)
I hope these examples are adequate to give an intuitive sense of the direct/indirect distinction; we can put aside the technical details (for more, see …). In this paper, I want to argue contra Douglas that value judgments do have legitimate direct roles to play in the internal processes of scientific inquiry. I will describe three such roles:
1. Conceptual Choice
Scientists must frequently make linguistic choices about which concepts to use in their research, including concepts for describing observations and experiments as well as concepts that figure in hypotheses, theories, and models. In many areas of science, especially the human sciences, biology, and biomedical science, these concepts are thick normative concepts, i.e., they have not only descriptive but also evaluative content. Typical examples of thick concepts include cruel, kind, and perverted. In the sciences, common thick concepts include those relating to race, gender, family, wealth, health, disease, and intelligence. A major accomplishment of feminist philosophy has been to make clear the unrecognized normative content of many of our concepts, especially those relating directly to sex, gender, and sexuality, as well as the often unconscious influence of inegalitarian biases in choosing and using those concepts.
If we accept ideas couched in terms of such concepts, then those ideas will prescribe or proscribe behaviors, rather than providing neutral information. Conceptual choice in science thus requires evaluative analysis of the concepts themselves and the consequences of their adoption.
Perhaps most significantly, if C.S. Peirce, John Dewey, and Joseph Rouse are right about the nature of concepts, i.e., that the content of any concept whatsoever, including abstruse scientific concepts, includes a normative component, then this direct role for values will be quite expansive.
2. The Consequences of Truth
In her discussions of inductive risk, Douglas frequently emphasizes the consequences of error as a key reason for accepting the (indirect) role for values in science. Because we make decisions under uncertainty, and thus have to balance the chances of falling into different types of errors (e.g., false positive vs. false negative), we need to evaluate the consequences of those errors, and often social values will be relevant. Again, with decisions about classifying cancerous livers, too many false negatives will lead us to underestimate the carcinogenic capacity of the toxin in question, and thus potentially cause people to be exposed to unsafe levels. Too many false positives, on the other hand, may hinder economic and technological progress. These social consequences must be weighed in determining tolerances.
There is a curious asymmetry in this approach: why should we worry only about the consequences of error, but not the consequences of being correct? Just as much as being wrong, being right can generate a variety of foreseeable social consequences. Why regard scientists as morally responsible in one case, but not in the other?
For example, suppose a social scientist is studying the link between intelligence and race, and gathers evidence that appears to support the hypothesis that blacks have lower native intelligence than whites. It is foreseeable that prominent researchers’ acceptance of such a conclusion could have negative, perhaps drastic consequences, preventing, delaying, or even reversing progress towards egalitarian social justice in a variety of areas. One direct role for values could be to veto the acceptance of hypotheses under such conditions.
And in fact, the history of research on race and IQ shows us that such ideas have often been accepted prematurely, because of overlooked methodological problems, etc. A direct role for values as a veto on accepting conclusions with potentially drastic social consequences would have sent such researchers back to the drawing board until such problems were discovered or the line of research was abandoned (the latter being in some respects the preferable outcome).
3. Scientific Decisions and Practical Reason
According to pragmatists like John Dewey, there is no difference in kind between theoretical and practical reason. Any decision is a decision to act in a certain way, from the banal daily decisions about what to eat for lunch to the abstruse reaches of modern science. The result of any inquiry whatever is a judgment, a decision to act in order to resolve a problem in a particular context. (N.B. One version of the pragmatist theory of truth: “A judgment is true iff the decision to act successfully resolves the problem that spurred the inquiry in context.” This account of truth is an independent point from the claim about judgment, however.)
If all judgment is a species of practical judgment, then just like the decision about where to eat lunch, any judgment requires both a factual basis and evaluative reasons for action. The decision to characterize data in a certain way, or to accept a certain hypothesis on the basis of a certain body of evidence, thus requires in all cases both evidence and values in a direct role in judgment. These values involve (A) the purposes for which we engage in inquiry, and (B) our evaluation of the consequences of our decision.
One option here is to limit the types of values appropriate to such decisions: in the internal judgments of scientific inquiry, only epistemic values are appropriate. Douglas (2009) has, however, undermined the relevance of the epistemic/non-epistemic values distinction to such issues; the type of a value does not determine whether it is relevant or irrelevant, permitted or proscribed. While we may agree that our purposes in doing so-called “basic” science may frequently lean towards the “purely” epistemic, no in-principle limitation seems legitimate.