Three Direct Roles for Values in Science: A Sketch of a Sketch

Heather Douglas (2000, 2009) has argued that inductive risk requires scientists to make value judgments in the “internal” processes of scientific reasoning, e.g., characterizing and interpreting data and judging whether the evidence supports a hypothesis, but that the role for value judgments must be limited to an indirect one. There has been some controversy about just what the direct/indirect roles distinction amounts to (Elliott, Steele), but the basic idea is easy enough to understand: something plays a direct role in a decision if it acts as a reason for deciding one way or the other; it plays an indirect role if it instead helps determine second-order questions about the uptake of reasons, e.g., about what counts as a reason or about the weight reasons must carry before we decide.

To use one of Douglas’s examples, in deciding how to characterize a slice of liver from a rat exposed to a particular chemical, the only permissible reasons for classifying it as cancerous or non-cancerous are the visible features of the liver as observed under the microscope. These features should be compared to the established criteria, and judgment calls must be made where the application of the criteria is not straightforward. Values play a role at the further level of guiding those criteria and judgment calls. Insofar as determining criteria and judging borderline cases involve uncertainty, value judgments about the seriousness of misjudging a case should lead us to lean towards more false positive or more false negative classifications of the livers as cancerous.

By contrast, a simple case of practical reasoning shows both evidence and value judgments playing direct roles in decision-making. Suppose I need to decide where to go to lunch. Various value judgments are directly relevant: I have to weigh the taste, the healthiness, the distance to and wait time at particular restaurants, etc. Facts obviously constrain my decision: the typical quality of the food, various health factors, the distance from my office, the average wait time, etc. It is a commonplace that both kinds of reasons are necessary to make such a decision. (N.B. The two kinds of reasons do not typically compete, either: I may prefer taste to healthiness, and so pick the burger joint, but that doesn’t make the double bacon cheeseburger healthy.)

I hope these examples are adequate to give an intuitive sense of the direct/indirect distinction, and we can put aside the technical details (for more, see …). In this paper, I want to argue, contra Douglas, that value judgments do have legitimate direct roles to play in the internal processes of scientific inquiry. I will describe three such roles:

1. Conceptual Choice

Scientists must frequently make linguistic choices about which concepts to use in their research, including concepts for describing observations and experiments as well as concepts that figure in hypotheses, theories, and models. In many areas of science, especially the human sciences, biology, and biomedical science, these concepts are thick normative concepts, i.e., they have not only descriptive but also evaluative content. Typical examples of thick concepts include cruel, kind, and perverted. In the sciences, common thick concepts include those relating to race, gender, family, wealth, health, disease, and intelligence. A major accomplishment of feminist philosophy has been to make clear the unrecognized normative content of many of our concepts, especially those relating directly to sex, gender, and sexuality, as well as the often unconscious influence of inegalitarian biases in choosing and using those concepts.

If we accept ideas couched in terms of such concepts, then those ideas will prescribe or proscribe behaviors, rather than providing neutral information. Conceptual choice in science thus requires evaluative analysis of the concepts themselves and the consequences of their adoption.

Perhaps most significantly, if C.S. Peirce, John Dewey, and Joseph Rouse are right about the nature of concepts, i.e., that the content of any concept whatsoever, including abstruse scientific concepts, includes a normative component, then this direct role for values will be quite expansive.

2. The Consequences of Truth

In her discussions of inductive risk, Douglas frequently emphasizes the consequences of error as a key reason for accepting the (indirect) role for values in science. Because we make decisions under uncertainty, and thus have to balance the chances of falling into different types of error (e.g., false positive vs. false negative), we need to evaluate the consequences of those errors, and often social values will be relevant. Again, in decisions about classifying livers as cancerous, too many false negatives will lead us to underestimate the carcinogenic capacity of the toxin in question, and thus potentially allow people to be exposed to unsafe levels. Too many false positives, on the other hand, may hinder economic and technological progress. These social consequences must be weighed in determining our tolerance for each type of error.

There is a curious asymmetry in this approach: why should we worry only about the consequences of error, but not the consequences of being correct? Just as much as being wrong, being right can generate a variety of foreseeable social consequences. Why regard scientists as morally responsible in one case, but not in the other?

For example, suppose a social scientist is studying the link between intelligence and race, and gathers evidence which appears to support the hypothesis that blacks have lower native intelligence than whites. It is foreseeable that prominent researchers’ acceptance of such a conclusion could have negative, perhaps drastic consequences, preventing, delaying, or even reversing progress towards egalitarian social justice in a variety of areas. One direct role for values could be to veto the acceptance of hypotheses under such conditions.

And in fact, the history of research on race and IQ shows that such ideas have often been accepted prematurely, due to overlooked methodological problems and the like. A direct role for values as a veto on accepting conclusions with potentially drastic social consequences would have sent such researchers back to the drawing board until those problems were discovered or the line of research was abandoned (the latter being in some respects the preferable outcome).

3. Scientific Decisions and Practical Reason

According to pragmatists like John Dewey, there is no difference in kind between theoretical and practical reason. Any decision is a decision to act in a certain way, from banal daily decisions about what to eat for lunch to the abstruse reaches of modern science. The result of any inquiry whatever is a judgment, a decision to act in order to resolve a problem in a particular context. (N.B. One version of the pragmatist theory of truth: “A judgment is true iff the decision to act successfully resolves the problem that spurred the inquiry in context.” This account of truth is, however, independent of the claim about judgment.)

If all judgment is a species of practical judgment, then just like the decision about where to eat lunch, any judgment requires both a factual basis and evaluative reasons for action. The decision to characterize data in a certain way, or to accept a certain hypothesis on the basis of a certain body of evidence, thus requires in all cases both evidence and values in a direct role in judgment. These values involve (A) the purposes for which we engage in inquiry, and (B) our evaluation of the consequences of our decision.

One option here is to limit the types of values appropriate to such decisions: in the internal judgments of scientific inquiry, only epistemic values would be appropriate. Douglas (2009) has, however, exploded the relevance of the epistemic/non-epistemic distinction to such issues; a value’s type does not determine whether it is relevant or irrelevant, permitted or proscribed. While we may agree that our purposes in doing so-called “basic” science frequently lean towards the “purely” epistemic, no in-principle limitation seems legitimate.

7 thoughts on “Three Direct Roles for Values in Science: A Sketch of a Sketch”

  1. Hi Matt,

    Unsurprisingly, I quite like this as a starting point.

    1. Conceptual Choice
    Kevin Elliott, in /Little Pollution/, periodically discusses the way values inform the choice of language in research. See, for example, pp. 72–76; there are also a bunch of page references under “Value judgments: in developing language and terminology” in his index. However, IIRC, his argument is that language choice relies on, inter alia, background assumptions about things like which causal relationships are more or less likely, and it’s these that are value-laden. I don’t think I’ve seen anyone make the point in terms of thick concepts before. Though maybe John Dupré’s point about “rape” in evo psych and ethology is along these same lines.

    2. Consequences of Truth
    As you know, I pretty much agree, and I know Kevin Elliott’s asked questions about that in print. Maybe it would be nice if Heather would coauthor something with one of us on this point?

    The combination of points 1 and 2 leads to a very interesting question that’s been cropping up in my reading on GMOs and other food issues: Which conceptions of the positive and negative values? Appealing to #2, a proponent of GMOs might argue that, given the positive value of GMOs for feeding the world, we should set a low threshold for health and safety concerns about GMOs, and so standards like substantial equivalence are perfectly adequate. However, in line with #1, substantial equivalence relies on a rather reductive or non-holistic conception of health and safety: the same molecule behaves the same ways however it’s ingested. Opponents of GMOs often seem to work with holistic conceptions of health and safety, and (depending on how sophisticated they are) may point to complexities or context-dependencies in nutrition. (We can also make a similar point about the concept of “adequate nutrition” in “feeding the world,” I think.) And then their rival conceptions of health and safety may lead to different weighings of the positive and negative values.

    And finally all of this gets embedded in the need to make practical decisions about GMOs. 🙂

    • Hi Dan,

      Thank you for this.

      Of course you’re right that Kevin’s book is relevant to #1. After writing this, I had a nagging sense that I needed to revisit Little Pollution (great way of abbreviating the title by the way), and you’ve made the task easier. I’ll try to pull Kevin into the discussion if he has time. Are you saying that his point about choice of language is a species of Longino’s point about background assumptions? If so, then the point about thick concepts is distinct. I’ll also dig out Dupré.

      I’d really like to know whether I’m getting Joseph Rouse right on the normativity, if anyone knows that stuff well.

      On #2: Heather has responded to Kevin and me on the issue of accounting for consequences of error vs. consequences of truth in the following way:

      It is really important in my view that one only think about the consequences of error, not just consequences per se, of making a claim. The reason for this is that consequences of making a claim that have nothing to do with the claim being incorrect are not good reasons, in the sense of epistemic responsibility, for making a claim. So, for example, considering the good consequences of making a claim (regardless of whether the claim is correct or not), such as getting people to change their behavior in ways I think are beneficial, are not good reasons at all to make a claim– it is fundamentally manipulative and deceptive to make claims for that reason, and that deeply undermines the integrity and objectivity of science (both) to do so. So, the only consequences that should matter are the consequences of making an incorrect claim, i.e. the consequences of error, or that result from the fact that an error has been made. Broader sets of consequences are exactly what I am trying to exclude with the direct/indirect distinction. (Shared from email w/ permission.)

      I have to think about this more, but I think I agree with it as an important caveat. Still, I don’t think it eliminates the direct role for values that I envision: values as a veto on making a claim. We have to think about the consequences of making a claim in general not because it will give us a reason to make a claim (would this be “bullshit” in the technical sense of the term?), but because it might give us a reason to withhold making that claim, perhaps indefinitely. (I’m vaguely worried about this because there is very little pragmatic difference between withholding a claim and claiming its negation.)

  2. Something else just to mention: I’ve been working on reconstructing Heather Douglas’s account of direct vs. indirect roles for values in scientific reasoning by way of Toulmin’s logic of arguments. I think there are going to be some really exciting results here, which will be the subject of a later post.

  3. This is fun to see. I’m enthusiastic about all three direct roles for values that you discuss, Matt.

    First, as Dan Hicks kindly noted, I do emphasize the significance of values in the choice of concepts and terms in chapter 2 of my book. You’re correct, Matt, that my justification for incorporating values in such choices was more of a Longino-style argument. In a 2009 paper titled “The Ethical Significance of Language in the Environmental Sciences” (in the journal Ethics, Place and Environment), I elaborate a bit on the argument by showing that language choices can have a variety of societal impacts, and so scientists have responsibilities to consider those impacts when making those choices. Regarding Dupré, I’d take a quick look at his chapter in the Oxford volume that he edited with Wylie and Kincaid; he may refer to “thick concept” sorts of arguments there.

    Regarding the consequences of making true claims, I’ve been trying to give Heather a hard time about this myself (I do so in my 2011 Phil Sci paper, and I’m working on some other papers that elaborate further). When scientists face genuine uncertainty but need to decide what claim to propound for decision-making purposes, I don’t think that it is manipulative or deceptive to be influenced by *all* potential consequences when deciding what claim to propound. But I think that you do need to be careful how you make this point. In the last paragraph of your post, you mention that scientists may have accepted claims prematurely, despite methodological problems. I think that Heather could argue that these worries could be captured by her focus on avoiding negative consequences of error.

    Finally, your comments about Dewey’s merging of theoretical and practical reason resonate with my own interests in how values legitimately influence scientific judgment because scientists employ cognitive attitudes other than belief. I’m not sure if I agree with Dewey that scientists never engage in pure theoretical reasoning (I just don’t know what I think about that), but I think that scientists often employ cognitive attitudes other than belief, and in those cases values are often directly relevant to those attitudes. I’m working on this topic myself, so I’m a bit biased, but I think it’s a very promising avenue to pursue.

    • Hi Kevin!

      Thanks for your response. This discussion has been extremely helpful to me so far.

      On the Dewey point: I guess I would refine what I said. It’s not that Dewey thinks scientists never engage in theoretical reasoning; rather, he denies the dichotomy between theoretical and practical reason, and sees so-called “pure theoretical reasoning” as just a very special, out-of-the-way species of practical reasoning. (This is part of what I want to talk about at Bielefeld, so I’ll have plenty of time to work it through.)

      • I’m looking forward to hearing more of your thoughts on this in Bielefeld. By the way, I should probably have added that I think it might be helpful to separate direct role for values #2 from the other two categories. In a sense, direct role #2 seems like a general rejection of Douglas’s claim that values are acceptable only in the indirect role. Then categories #1 and #3 seem like interesting categories of examples where values do indeed have a direct role to play.

        Perhaps I’m wrong about that, but it’s worth taking a look at the three categories to consider whether they’re truly parallel.

        Thanks for cluing me in to the discussion!

        • Kevin,

          On your earlier point about “methodological problems,” my goal was to use the fallibility of all science to make a general point: we can’t just take into account the consequences of error using our ordinary estimates of error. I see four possible cases:

          1. Cases in which claims are made prematurely, despite methodological problems, and in which a Douglas-style indirect role would prevent us from making the claim.
          2. Cases in which the claims made seem to be solidly supported by strong evidence and the best methodology we have, but are nonetheless false, and making the claim would have devastating consequences. Those consequences seem good reasons not to make / accept the claim, hence a viable direct role.
          3. Cases like 2, but the claim is true. This seems to me a potential case of “better off not knowing.” Kitcher has some interesting things to say about this in ST&D. Again, I think values can be a direct veto here.
          4. Cases in which there is benefit to be had in making/accepting a claim which you would be unlikely to accept on the evidence alone. This worries me for the same reason it worries Heather: the potential to make claims in a way that is manipulative, or to engage in wishful thinking. At the same time, there may be borderline cases where it makes sense for values to nudge the scales, or William James-style “Will to Believe” cases where there is no evidence to prefer one claim over another but a choice is forced and momentous.

          Let me think more about the relationship between the categories. My original thought was that all three both show that the indirect-only claim cannot be generally true and identify specific places where values do have a direct role, namely:

          1. Choosing concepts.
          2. Vetoing claims with major harmful consequences.
          3. Informing the practical component of scientific reasoning.

          I see all three as pervasive aspects of inquiry.
