The Foundations and Scope of the Argument from Inductive Risk: An Exchange with Joyce Havstad

Introductory note: One of the most exciting parts of my work over the last couple of years has been my collaboration with Joyce C. Havstad on the science and politics of climate science. We have a paper forthcoming in Perspectives on Science, a chapter in an edited collection responding to the “Pragmatic-Enlightened Model” of science advising that influenced WG3 of the IPCC, and another journal article under review. I made some edits to one of our papers based on the content of Heather Douglas’s Descartes Lectures and some conversations I had with Heather around the lectures, and it prompted the following exchange of ideas about and interpretations of the Argument from Inductive Risk (AIR).

Joyce C. Havstad: I’d be interested to hear more about your updated understanding of the argument from inductive risk—especially, what the difference between the argument “not applying” and “not being salient” is. I don’t want to dispute those changes to how the scope of the argument is presented in this version of the paper, but I would like to get a better sense of what that difference signifies.

Matthew J. Brown: So, here’s how I used to understand the argument from inductive risk (simplified to the case of hypothesis acceptance):

  1. Scientists make choices about whether to accept or reject hypotheses.
  2. Evidence, logic, and epistemic values leave greater or lesser amounts of uncertainty about a hypothesis.
  3. When that uncertainty is non-negligible, we have to decide how high to set our standards of acceptance.
  4. How high we set our standards of acceptance trades off false-positive and false-negative errors, all else being equal.
  5. Sometimes there are socially/ethically significant consequences of those errors.
  6. Sometimes, those consequences can be anticipated.
  7. When (3), (5), and (6) hold, we must make value judgments about standards of acceptance.

That could probably be a bit more precise, but that’s basically my understanding. On that reading, the AIR applies if uncertainty is non-negligible, if there are socially or ethically significant consequences of error, and if those consequences can be anticipated; it doesn’t apply when any of those conditions fails to hold.

Here’s my new understanding (again, simplified), which I think is a much clearer, stronger view:

  1. Scientists make choices about whether to accept or reject hypotheses.
  2. Evidence, logic, and epistemic values tell us the strength of evidential support for a hypothesis, but there is always an inductive gap between the evidence and the hypothesis.
  3. The decision to accept, infer, assert, or endorse a (non-trivial, ampliative/inductive) hypothesis is an action that requires us to “step across” that gap.
  4. No amount or strength of support necessarily compels us to assert, infer, etc.
  5. Instead, we require some sort of practical reason (i.e., values) concerning sufficiency conditions for asserting, inferring, etc.
  6. Where there are foreseeable consequences of error, these are among the relevant practical reasons.

On this interpretation, the AIR always applies. Determination of what counts as “negligible” error is already a value-laden affair. But when the evidential support for/against a hypothesis is very strong, and there don’t seem to be foreseeable socially-relevant consequences, then the AIR is not very salient. Or perhaps it would be better to say that what become salient are cognitive values, bon sens, and whimsy, rather than social and ethical values.

This is how I interpret Douglas’s latest & greatest presentation of the AIR. What do you think?

JCH: About the old and the new inductive risk arguments: those two arguments seem quite different to me. Most importantly, it seems to me as though they would each require very different things in the way of support.

Although I think that I can see how your prior interpretation of AIR is supported by work already done—especially, for instance, by the case detailed in Douglas’s 2000 paper on inductive risk—I’m not sure I’m aware of work that supports the updated AIR.

Premise (3) in the second argument seems particularly new and interesting, and seems to require further support. I’d also want to know about the intended scope of premises (4) and (5), and to see the support for those scoped claims.

MJB: Here are some chunks from Heather’s Descartes Lectures that I take to support the new interpretation. (Whether this is sufficient to establish the point or coheres with the prior work, I’m not entirely sure, though it coheres nicely with my own intuitions about assertion.)

To upend the value-free ideal, and its presumptions about the aim of purity and autonomy in science, one needs to tackle the ideal qua ideal at the moment of justification. This is the strength of the argument from inductive risk. It points to the inferential gap that can never be filled in an inductive argument, whenever the scientific claim does not follow deductively from the evidence (which in inductive sciences it almost never does). A scientist always needs to decide, precisely at the point of inference crucial to the value-free ideal, whether the available evidence is enough for the claim at issue. This is a gap that can never be filled, but only stepped across. The scientist must decide whether stepping across the gap is acceptable. The scientist can narrow the gap further with probability statements or error bars to hedge the claim, but the gap is never eliminated.

Note that while [epistemic values] are very helpful in assessing the strength of the available evidence, they are mute on whether the available evidence is enough, on whether the evidence is strong enough to warrant acceptance by scientists. Epistemic values do not speak to this question at all. They help to organize and assess how strong the evidence is, but not whether it is strong enough (as, recall, it will never be complete).

Social and ethical values, however, do help with this decision. They help by considering the consequences of getting it wrong, of assessing what happens if it was a mistake to step across the inductive gap—i.e., to accept a claim—or what happens if we fail to step across the inductive gap and we should. In doing so, such values help us assess whether the gap is small enough to take the chance. If making a mistake means only minor harms, we are ready to step across it with some good evidence. If making a mistake means major harms, particularly to vulnerable populations or crucial resources, we should demand more evidence. Social and ethical values weigh these risks and harms, and provide reasons for why the evidence may be sufficient in some cases and not in others.

JCH: Here’s the crux of the issue as I currently see it:

Say I’m looking at a petri dish with, as I count them, 5 nematodes in it. It is true that there is always an inductive gap here: a gap between (a) my looking at the dish and thinking that my eyes and my counting ability give me strong evidence that there are 5 nematodes in it, and (b) my making the decision that the evidence provided by my eyes and my counting ability is sufficient for me to mark down that the dish has 5 nematodes in it.

And we could say that the AIR always applies, even to moments like the one described above, because of the presence of the inductive gap. If we go that route, then the nematode-counting case is probably just one of those cases where making a mistake risks only very minor harms, and so we’re ready to step across the gap with just the evidence of my eyes and my counting ability. On this view, we could say that the nematode-number-marking decision is a value-laden one that requires considering not just epistemic or cognitive but also ethical and social values. But surely this decision will not require nearly the same degree of involvement of non-epistemic values, consideration of risks, engagement with stakeholders, etc. that, say, the EPA’s decision about where to set acceptable levels of dioxin did, or that the IPCC’s decision to offer a set of three particular global temperature increase pathways should. Despite the AIR applying in all three cases (on this interpretation), the cases will not be ethically and socially value-laden in the same ways or to nearly the same extent.

Alternatively, we could maintain something of a distinction between the notion of an omnipresent inductive gap and the idea of inductive risk. If we go this route, then it is true that the nematode-counting case includes, as always, an inductive gap; but it is not necessarily true that the nematode-number-marking decision is an inductively risky one (again, because it is probably just one of those cases where making a mistake risks only very minor harms, in the sense that any decision ever risks very minor harms). On this view, the AIR applies only to a particular set of the decisions involving the inductive gap—for instance, those in which there are notable, foreseeable consequences of error with significant ethical and social implications. And probably also those cases which might have such implications but where the consequences are not as foreseeable (i.e., the so-called “gray areas”). Here (on this interpretation), whether and how the AIR applies tracks whether and how the relevant cases will be significantly ethically and socially value-laden.

Either way, not all cases with an inductive gap are the same with respect to their ethical and social value-ladenness. I think that I care less about being able to say that all decisions are ethically and socially value-laden (in what looks to me like a pretty trivial sense), than I do about being able to identify which decisions are significantly ethically and socially value-laden (in a discriminating and useful sense). This is because I want to be able to identify and address those extremely risky decisions which are currently being made without proper consideration of ethical and social values, but which are in dire need of them—like the EPA and the IPCC cases, but not like the nematode-counting one. To me, it is a strength of your prior interpretation of the AIR that it is able to clearly discriminate amongst cases in this way; the newer interpretation looks to be somewhat weakened along this dimension, though that may be the result of some generalization or vagueness in this [i.e., MJB’s] rough draft of the argument.

Regardless: whether we want to say that the AIR always applies, or that it is merely the inductive gap which is always present, I think that it is clear that not all decisions to cross the inductive gap are the same in terms of value-ladenness. Some are much, much riskier than others; and some require the consideration of ethical and social values to a far greater extent and perhaps even in a different kind of way than others.

What all this means is that I don’t think we can reliably infer, merely from the presence of an inductive gap, that we are in one of these situations rather than another. In other words, it’s not the inductive gap itself that carries the ethical and social entailments which concern me; what I care about are those entailments; so the mere presence of an inductive gap does not, for me, a relevant case make. And (so my thinking goes), we ought not to treat it as though it does.

MJB: Yes, I agree that not all decisions to cross the inductive gap are the same, in terms of value-ladenness. But is the difference between the cases primarily an epistemic question or primarily a values question? In other words, are some decisions less value-laden as such, or are the values just less significant in some cases?

I think on my old interpretation, it is natural to see the question as primarily an epistemic one. Inductive risks are a worry when risks of error are high, which requires uncertainty. Lower uncertainty, lower risk of error, less worry about IR. I think this opens up the AIR to the problems with “the lexical priority of evidence” that I raise in “Values in Science beyond Underdetermination and Inductive Risk.”

On the new interpretation, the difference is primarily an ethical one. Inductive risks are a worry when risks of error are salient, which requires social consequences to be foreseeable and significant. Stronger evidence reduces our worry about error, but only if it is strong enough. In some areas, social/ethical implications may be weak or may not exist, but we still need some kind of values to license making the inference/assertion. Maybe they’re merely pragmatic/aesthetic rather than social/ethical. (Here I’m thinking about Kent Staley’s work on the AIR and the Higgs discovery, which shows that IR is an issue even when social and ethical values really aren’t, except maybe the amount of money spent on the LHC.)

Also, I think that on this view we can see why the direct/indirect roles distinction has merit but needs to be reconfigured and treated as defeasible. (But that’s a promissory note on an argument I’m trying to work out.)

I also think there is strategic value in insisting that the AIR applies everywhere, and that all the decisions in science are potentially value-laden. Scientists are too quick to dismiss potential ethical concerns and to see their work as governed mainly by technical/epistemic issues, and they are not encouraged to work very hard to foresee possible consequences of their decisions. They often don’t even realize they’re making decisions. And while the social/ethical consequences in some cases are quite obvious, there are plenty of cases where they crop up where least expected. So I’d rather have working hard to foresee the possible consequences of seemingly technical decisions be a core part of the job description than treat it as an exceptional case. (This is partly why I’m currently focusing on moral imagination as a central concept for the values in science debate.)

JCH: I think I agree with most everything you say here, especially the part about the AIR being about not just error and uncertainty but also about risk and consequences. However, I also see both those things as being well represented in your prior interpretation; I might even find them less well represented in the new one.

Perhaps the new interpretation does more to highlight the ubiquity of the phenomenon under study. However, when the argument is glossed in that way (as it is, for instance, in your final paragraph), I have a hard time distinguishing the supposed problem of inductive risk from the plain old problem of induction.

BTW, I’ve been pondering the scope of the AIR for quite some time, so I’m very pleased to be going back and forth on this issue with you now. At the very least I’m starting to better understand the nature of and motivation for the ubiquity claim, even if I’m not quite persuaded of it.

4 thoughts on “The Foundations and Scope of the Argument from Inductive Risk: An Exchange with Joyce Havstad”

  1. Super interesting. I wonder if the focus on the ubiquity of the inductive gap, in addition to increasing the power of the insistence that non-epistemic values (ought to/inevitably) play a more pervasive role in science, could also increase focus on more indirect epistemic considerations. I’m thinking here about epistemic diversity. If it’s right that, say, it’s a good idea to have a mix of different views about a hypothesis in a scientific community (to avoid a too-quick consensus, thinking of Zollman’s work here), then in addition to “do my observations support my hypothesis” considerations, considerations about the expected distribution of credence across the relevant community could also start to matter. Perhaps the lesson of the new argument is that indirect considerations (whether epistemic or no, where indirect means something like “considerations beyond how much the observations support the hypothesis given background theory”) matter a lot more than generally admitted. In the cases where social consequences are particularly pressing, the indirect considerations inevitably focus on those consequences, but in circumstances where they’re not, there are still considerations going beyond the direct epistemic context (population distribution, perhaps the productivity of the hypothesis, perhaps even its potential to make a “splash”, or to get funding, etc., etc.) which are emphasized given the ubiquitous epistemic gap…
