The Foundations and Scope of the Argument from Inductive Risk: An Exchange with Joyce Havstad


Introductory note: One of the most exciting parts of my work over the last couple of years has been my collaboration with Joyce C. Havstad on the science and politics of climate science. We have a paper forthcoming in Perspectives on Science, a chapter in an edited collection responding to the “Pragmatic-Enlightened Model” of science advising that influenced WG3 of the IPCC, and another journal article under review. I made some edits to one of our papers based on the content of Heather Douglas’s Descartes Lectures and some conversations I had with Heather around the lectures, and it prompted the following exchange of ideas about and interpretations of the Argument from Inductive Risk (AIR).

Joyce C. Havstad: I’d be interested to hear more about your updated understanding of the argument from inductive risk—especially, what the difference between the argument “not applying” and “not being salient” is. I don’t want to dispute those changes to how the scope of the argument is presented in this version of the paper, but I would like to get a better sense of what that difference signifies.

Matthew J. Brown: So, here’s how I used to understand the argument from inductive risk (simplified to the case of hypothesis acceptance):

  1. Scientists make choices about whether to accept or reject hypotheses.
  2. Evidence, logic, and epistemic values leave greater or lesser amounts of uncertainty about a hypothesis.
  3. When that uncertainty is non-negligible, we have to decide how high to set our standards of acceptance.
  4. How high we set our standards of acceptance trades off false-positive and false-negative errors, all else being equal.
  5. Sometimes there are socially/ethically significant consequences of those errors.
  6. Sometimes, those consequences can be anticipated.
  7. When (3), (5), and (6) hold, we must make value judgments about standards of acceptance.

That could probably be a bit more precise, but that’s basically my understanding. On this view, the AIR applies if uncertainty is non-negligible, if there are socially significant consequences of error, and if those consequences can be anticipated; it doesn’t apply when any of those conditions fail.

Here’s my new understanding (again, simplified), which I think is a much clearer, stronger view:

  1. Scientists make choices about whether to accept or reject hypotheses.
  2. Evidence, logic, and epistemic values tell us the strength of evidential support for a hypothesis, but there is always an inductive gap between the evidence and the hypothesis.
  3. The decision to accept, infer, assert, or endorse a (non-trivial, ampliative/inductive) hypothesis is an action that requires us to “step across” that gap.
  4. No amount or strength of support necessarily compels us to assert, infer, etc.
  5. Instead, we require some sort of practical reason (i.e., values) concerning sufficiency conditions for asserting, inferring, etc.
  6. Where there are foreseeable consequences of error, these are among the relevant practical reasons.

On this interpretation, the AIR always applies. Determination of what counts as “negligible” error is already a value-laden affair. But when the evidential support for/against a hypothesis is very strong, and there don’t seem to be foreseeable socially-relevant consequences, then the AIR is not very salient. Or perhaps it would be better to say that cognitive values, bon sens, and whimsy rather than social and ethical values are salient.

This is how I interpret Douglas’s latest & greatest presentation of the AIR. What do you think?

JCH: About the old and the new inductive risk arguments: those two arguments seem quite different to me. Most importantly, it seems to me as though they would each require very different things in the way of support.

Although I think that I can see how your prior interpretation of AIR is supported by work already done—especially, for instance, by the case detailed in Douglas’s 2000 paper on inductive risk—I’m not sure I’m aware of work that supports the updated AIR.

Premise (3) in the second argument seems particularly new and interesting, and seems to require further support. I’d also want to know about the intended scope of premises (4) and (5), and to see the support for those scoped claims.

MJB: Here are some chunks from Heather’s Descartes Lectures that I take to support the new interpretation. (Whether this is sufficient to establish the point or coheres with the prior work, I’m not entirely sure, though it coheres nicely with my own intuitions about assertion.)

To upend the value-free ideal, and its presumptions about the aim of purity and autonomy in science, one needs to tackle the ideal qua ideal at the moment of justification. This is the strength of the argument from inductive risk. It points to the inferential gap that can never be filled in an inductive argument, whenever the scientific claim does not follow deductively from the evidence (which in inductive sciences it almost never does). A scientist always needs to decide, precisely at the point of inference crucial to the value-free ideal, whether the available evidence is enough for the claim at issue. This is a gap that can never be filled, but only stepped across. The scientist must decide whether stepping across the gap is acceptable. The scientist can narrow the gap further with probability statements or error bars to hedge the claim, but the gap is never eliminated.

Note that while [epistemic values] are very helpful in assessing the strength of the available evidence, they are mute on whether the available evidence is enough, on whether the evidence is strong enough to warrant acceptance by scientists. Epistemic values do not speak to this question at all. They help to organize and assess how strong the evidence is, but not whether it is strong enough (as, recall, it will never be complete).

Social and ethical values, however, do help with this decision. They help by considering the consequences of getting it wrong, of assessing what happens if it was a mistake to step across the inductive gap—i.e., to accept a claim—or what happens if we fail to step across the inductive gap and we should. In doing so, such values help us assess whether the gap is small enough to take the chance. If making a mistake means only minor harms, we are ready to step across it with some good evidence. If making a mistake means major harms, particularly to vulnerable populations or crucial resources, we should demand more evidence. Social and ethical values weigh these risks and harms, and provide reasons for why the evidence may be sufficient in some cases and not in others.

JCH: Here’s the crux of the issue as I currently see it:

Say I’m looking at a petri dish with, as I count them, 5 nematodes in it. It is true that there will always be an inductive gap that exists here: a gap between (a) my looking at the dish and thinking I have some strong evidence using my eyes and my counting ability for thinking there are 5 nematodes in it, and (b) my making the decision that the evidence provided by my eyes and my counting ability is sufficient for me to mark down that the dish has 5 nematodes in it.

And we could say that the AIR always applies, even to moments like the one described above, because of the presence of the inductive gap. If we go that route, then the nematode-counting case is probably just one of those cases where making a mistake risks only very minor harms, and so we’re ready to step across the gap with just the evidence of my eyes and my counting ability. On this view, we could say that the nematode-number-marking decision is a value-laden one that requires considering not just epistemic or cognitive but also ethical and social values. But surely this decision will not require nearly the same degree of involvement of non-epistemic values, consideration of risks, engagement with stakeholders, etc. that, say, the EPA’s decision about where to set acceptable levels of dioxin did, or the IPCC’s decision to offer a set of three particular global temperature increase pathways should. Despite the AIR applying in all three cases (on this interpretation), the cases will not be ethically and socially value-laden in the same ways or to nearly the same extent.

Alternatively, we could maintain something of a distinction between the notion of an omnipresent inductive gap and the idea of inductive risk. If we go this route, then it is true that the nematode-counting case includes, as always, an inductive gap; but it is not necessarily true that the nematode-number-marking decision is an inductively risky one (again, because it is probably just one of those cases where making a mistake risks only very minor harms, in the sense that any decision ever risks very minor harms). On this view, the AIR applies only to a particular set of the decisions involving the inductive gap—for instance, those in which there are notable, foreseeable consequences of error with significant ethical and social implications. And probably also those cases which might have such implications but where the consequences are not as foreseeable (i.e., the so-called “gray areas”). Here (on this interpretation), whether and how the AIR applies tracks whether and how the relevant cases will be significantly ethically and socially value-laden.

Either way, not all cases with an inductive gap are the same with respect to their ethical and social value-ladenness. I think that I care less about being able to say that all decisions are ethically and socially value-laden (in what looks to me like a pretty trivial sense), than I do about being able to identify which decisions are significantly ethically and socially value-laden (in a discriminating and useful sense). This is because I want to be able to identify and address those extremely risky decisions which are currently being made without proper consideration of ethical and social values, but which are in dire need of them—like the EPA and the IPCC cases, but not like the nematode-counting one. To me, it is a strength of your prior interpretation of the AIR that it is able to clearly discriminate amongst cases in this way; the newer interpretation looks to be somewhat weakened along this dimension, though that may be the result of some generalization or vagueness in this [i.e., MJB’s] rough draft of the argument.

Regardless: whether we want to say that the AIR always applies, or that it is merely the inductive gap which is always present, I think that it is clear that not all decisions to cross the inductive gap are the same in terms of value-ladenness. Some are much, much riskier than others; and some require the consideration of ethical and social values to a far greater extent and perhaps even in a different kind of way than others.

What all this means is that I don’t think we can reliably infer, merely from the presence of an inductive gap, that we are in one of these situations rather than another. In other words, it’s not the inductive gap itself which carries the relevant ethical and social entailments which concern me; I care about the relevant social and ethical entailments; so the mere presence of an inductive gap does not for me a relevant case make. And (so my thinking goes), we ought not to treat it like it does.

MJB: Yes, I agree that not all decisions to cross the inductive gap are the same, in terms of value-ladenness. But is the difference between the cases primarily an epistemic question or primarily a values question? In other words, are some decisions less value-laden as such, or are the values just less significant in some cases?

I think on my old interpretation, it is natural to see the question as primarily an epistemic one. Inductive risks are a worry when risks of error are high, which requires uncertainty. Lower uncertainty, lower risk of error, less worry about IR. I think this opens up the AIR to the problems with “the lexical priority of evidence” that I raise in “Values in Science beyond Underdetermination and Inductive Risk.”

On the new interpretation, the difference is primarily an ethical one. Inductive risks are a worry when risks of error are salient, which requires social consequences to be foreseeable and significant. Stronger evidence reduces our worry about error, but only if it is strong enough. In some areas, social/ethical implications may be weak or may not exist, but we still need some kind of values to license making the inference/assertion. Maybe they’re merely pragmatic/aesthetic rather than social/ethical. (Here I’m thinking about Kent Staley’s work on the AIR and the Higgs discovery, which shows that IR is an issue even when social and ethical values really aren’t, except maybe the amount of money spent on the LHC.)

Also, I think that on this view we can see why the direct/indirect roles distinction has merit but needs to be reconfigured and treated as defeasible. (But that’s a promissory note on an argument I’m trying to work out.)

I also think there is strategic value in insisting that the AIR applies everywhere, and that all the decisions in science are potentially value-laden. Scientists are too quick to dismiss potential ethical concerns and to see their work as governed mainly by technical/epistemic issues, and they are not encouraged to work very hard to foresee possible consequences of their decisions. They often don’t even realize they’re making decisions. And while the social/ethical consequences in some cases are quite obvious, there are plenty of cases where they crop up where least expected. So I’d rather have working hard to foresee the possible consequences of seemingly technical decisions be a core part of the job description, rather than thinking of it as an exceptional case. (This is partly why I’m currently focusing on moral imagination as a central concept for the values in science debate.)

JCH: I think I agree with most everything you say here, especially the part about the AIR being about not just error and uncertainty but also about risk and consequences. However, I also see both those things as being well represented in your prior interpretation; I might even find them less well represented in the new one.

Perhaps the new interpretation does more to highlight the ubiquity of the phenomenon under study. However, when the argument is glossed in that way (as it is, for instance, in your final paragraph), I have a hard time distinguishing the supposed problem of inductive risk from the plain old problem of induction.

BTW, I’ve been pondering the scope of the AIR for quite some time now, so I’m very pleased to be going back and forth on this issue with you now. At the very least I’m starting to better understand the nature of and motivation for the ubiquity claim, even if I’m not quite persuaded of it.

Duck Genitals and Feminist Science Studies


Spring 2013 saw another round of misguided right-wing attacks on basic scientific research in the U.S. Congress, a political tactic that purports to demonstrate the wastefulness of the federal government by showing off the price tag (often small in terms of scientific research budgets) for obscure research that can be described in ways that make it sound goofy or idiotic. This time around, it piqued my interest a good bit more, because it brought national media attention to one of my favorite bits of biological research: Patricia Brennan’s work on duck genitalia. (Brennan wrote a wonderful defense of her research for Slate. Even the Union of Concerned Scientists weighed in.)

Why do I love this research so much? The biology is interesting, yes (more on that in a minute), but also, as a philosopher of science with a long-standing interest in feminist science studies, I see it as following the exact structure of some of the classic cases from that literature. That is, Brennan’s work exemplifies the pattern of women entering a field of research dominated by men and revolutionizing and improving the methods and theories in that field. It is thus similar to the earlier cases of primatology as described by Donna Haraway—where scientists hadn’t paid much attention to the behavior of female primates and ended up with theories where their roles were entirely passive—and reproductive cell biology as described by (inter alia) Emily Martin—where the “Prince Charming/Sleeping Beauty” theory of sperm/egg fertilization was a going idea, I kid you not.

To get the basics, let’s start with this “True Facts” video by Ze Frank:


Philosophy, Funding, and Conflict of Interest


A couple of weeks back, Justin Weinberg at the Daily Nous posed a really interesting question. The context was Daniel Dennett’s review of Alfred Mele’s book Free: Why Science Hasn’t Disproved Free Will. Dennett gives a relatively standard account of conflict of interest in science funding, using a hypothetical story of saturated fat research funded by “the Foundation for the Advancement of Bacon.” On standard accounts, we are right to apply a higher level of scrutiny towards research whose funding displays a potential conflict of interest, and this is why, e.g., we have COI reporting requirements in certain journals and for federally funded research.

Dennett then points out that Mele’s work is funded by the John Templeton Foundation, which (simplifying a bit) has as its ultimate agenda the integration of science and religion, and lately has been funding large projects that involve philosophers, scientists, and theologians working together on a shared theme, like Free Will or Character. Mele has received and managed two such grants.

Here’s Justin:

Mele’s project is not the only Templeton-funded philosophy project, nor is Templeton the only source of funds with an agenda. Dennett is claiming that funding from ideological sources casts a shadow on philosophical research in much the same way that funding from industry casts a shadow on scientific research. Is he correct?

Unfortunately, the question was lost as the thread got hijacked by a lot of nonsense, specific details about Templeton and Dennett’s neo-Atheist anti-Templeton agenda, as well as understandable reactions to what Dennett’s statements pragmatically implied about Mele’s character. Most egregious were the many denials that conflict of interest is an issue in science, as if raising it somehow amounted to a fallacious ad hominem argument. For instance, Terrance Tomkow and Kadri Vihvelin claim “the motives of the researcher or his employers are always beside the scientific point.” Dennett answered this point well when he said,

As for Tomkow and Vihvelin’s high-minded insistence that one is obliged “to ignore” the sponsorship of research, I wonder what planet they have been living on recently. Why do they think that researchers have adopted the policy of always declaring the sources of their funding?

Or as Richard Zach said, “It’s as if the past few decades of work on values in science didn’t happen.”

I think Justin’s original question is interesting, though, because it encourages us to think past the specific details of Mele’s book, Dennett’s critique, and the Templeton foundation. Maybe it is because I work at a STEM university, but I often hear talk that the humanities are going to have to move more towards extramural funding. For philosophers, Templeton is where the big money is, but there are also plenty of smaller private foundations, donors funding endowed Chairs (as Zara pointed out), and so on. It’s a timely question. And it is one that invites us to reflect on the similarities and differences between the sciences and philosophy (or the humanities more broadly). I wish more commenters had taken up the call.

I would suggest one analogy and one major disanalogy between science and philosophy in regards to conflict of interest. The analogy is, if I understood him right, what Dennett was getting at: funding applied on a large scale can alter, or even distort, the research agenda of a discipline. And evaluating that will require us to think about what the research agenda ought to look like.

The importance of research agendas in science is the centerpiece of Philip Kitcher’s Science, Truth, and Democracy and Science in a Democratic Society. He describes the ideal research agenda for science or a scientific discipline as well-ordered science (WOS), and he argues persuasively that not only epistemic and internal disciplinary values, but also practical and ethical values are central to determining what counts as WOS. Further, he argues that WOS should be evaluated democratically, in some sense. Because science is a social institution, it is ultimately responsible for serving the public. Kitcher also rightly recognizes the roles of funding sources and individual choices in actually setting research agendas, and argues that individual scientists have a duty to stand up and fight for change when the research agenda in their field is very far from well-ordered.

Likewise, we could ask about what “well-ordered philosophy” would look like. Presumably, many philosophers (like many scientists) would argue that notions of intrinsic intellectual/philosophical merit, strength of argument, and freedom of research should determine the ideal research agenda. I, and I suspect Kitcher as well, would prefer pragmatic, ethical, and political considerations to play a role. Either way, we can ask whether and how funding sources are moving us towards or away from a well-ordered research agenda.

Mele’s work discusses Free Will, argues that contrary to some triumphalist claims, the sciences haven’t settled the question yet, criticizes some of those claims by scientists, and is agnostic about whether free will is compatible with determinism. I’m not sure how those things fit with the ideological agenda of Templeton, though I can understand the feeling that they do, somehow. And insofar as Templeton wants to stay a major player in funding research on Free Will, we could see more of this sort of thing, and less of other approaches. Zooming out to the context that Justin invites us to consider, it is worth wondering what the effects of funded research can be on the research agenda of philosophy, and it is worth deliberating about whether some funding sources should be considered a problematic conflict of interest, Templeton included. (My own view, held tentatively, is that Templeton is alright in this respect but should be closely monitored.) But also note that until one has a sense that funding agencies are having a systematic effect, it doesn’t seem reasonable to criticize individuals in the way that Dennett does (if implicitly).

The disanalogy I would like to mention has to do with the different types of arguments that are made in empirical science and in philosophy. Philosophical arguments are usually scholarly while scientific arguments are generally technical. I mean these in a specific sense inspired by Bruno Latour’s Science in Action (N.B., these terms aren’t the ones Latour uses). To make an argument in philosophy requires nothing more than a library card, writing implements, the ability to adopt the current style of the literature in the field you wish to contribute to, and the ability to craft an argument. Scholarly arguments can be evaluated on their surface—you need only to examine the content of the text itself, and perhaps the cited sources, to understand the argument or produce a counter-argument.

Some elements of scientific texts can be evaluated in this way. But scientific arguments are also technical. In particular, much of the argument hangs on what Latour calls inscriptions—tables, charts, graphs, and figures—produced by instruments. There are hard limits to how far one can interrogate a technical text. One can raise questions about certain inferences and interpretations, and one can examine the equipment and materials that produce the data and the inscriptions, at least, as long as one has an invitation to the relevant laboratory and the patience of one’s host. But past a certain point, making an effective counter-argument requires a counter-laboratory with instruments producing inscriptions that can be used in arguments. To a large extent, the technical nature of modern science is a major source of its power and effectiveness; but a cost is that we have to rely on trust to a greater extent. And conflict of interest is at least a pro tanto reason to withhold that trust, whereas trust is not at issue in philosophical arguments in the same sense.

So while it is incorrect for Jim Griffis to say that “If the ‘science is impeccable, carefully conducted and rigorously argued’ there would be no problem with who paid for the research,” because of the technical nature of science, he is right to say that “for philosophical works, either the argument is cogent or it’s not.”

Full disclosure: I have previously applied for (but not received) Templeton funding.

Wonder Woman’s Lasso of Truth


I’ve just begun reading Jill Lepore’s new book about Wonder Woman and William Moulton Marston. So far, I’m finding it to be really thorough and excellent! I was a little disturbed, though, to discover that on the first page of the preface, Lepore makes a basic mistake about one of the key features of the early Wonder Woman comics:

“She had a magic lasso; anyone she roped had to tell the truth” (xi).

She repeats the point in one of the color plates in the middle of the book, which does show Wonder Woman compelling a thug to tell the truth. The accompanying text that connects this work to lie detectors and Marston’s work on deception is ultimately misleading, however.

What’s the problem here? Isn’t it called “the Lasso of Truth?” The problem, as Brian Cronin pointed out a couple of years ago at Comic Book Resources, is that this is actually anachronistic. Marston called this iconic element of Wonder Woman’s gear the “Magic Lasso” or sometimes “Golden Lasso,” not the lasso of truth. And its power has nothing specific to do with the truth, but rather with compelling obedience.


It’s a tempting connection to make. Marston, after all, invented the lie detector test, or at least, he’s one of its most recognizable developers and proponents. And it’s a connection commonly drawn:

Here’s Geoffrey Bunn, one of the few historians of psychology to write in detail about Marston:

“Anyone caught in the lasso found it impossible to lie. And because Wonder Woman used it to extract confessions and compel obedience, the golden lasso was of course nothing less than a lie detector.” (Bunn 1997, p. 108)

The real story behind Wonder Woman’s magic lasso is much more interesting and much stranger. Marston was an experimental psychologist who developed a theory of emotions. According to his theory, the four basic emotions were Dominance, Compliance, Inducement, and Submission. For Marston, submission was a matter of giving over one’s will to a basically friendly stimulus; not only was it necessarily a pleasant emotion, but it was a necessary component of love and thus of a healthy psyche. What the magic lasso was able to do, it seems, was to place the person bound in an automatic state of submission to the will of the lasso’s wielder, making them happy to do whatever they were asked. Including, occasionally, to tell the truth when they intended to deceive.

However, more often than not, when Wonder Woman wanted to know whether someone was telling the truth, she’d make use of the very tool that Marston invented for that purpose, a lie detector test based on systolic blood pressure measurements.


Above I called the mistake an anachronism, because while Marston never used the term “Lasso of Truth,” present-day comics do refer to it by that name. According to Cronin, this usage began in Wonder Woman volume 2 #2 (1987; Writer: Greg Potter, Artist: George Pérez, Editor: Karen Berger). This is the post-Crisis reboot of Wonder Woman, meaning that it occurred after the Crisis on Infinite Earths mini-series that altered the continuity of the DC Comics universe. Presumably, the creators knew about Marston’s interests in lie detection, and decided to change the powers and name of the lasso accordingly. (On the other hand, the commenters on Cronin’s piece suggest that the usage comes from the Wonder Woman TV show starring Lynda Carter, so perhaps the connection was made by the creators of the show.)

UPDATE: Noah Berlatsky gets it right in his new book, Wonder Woman: Bondage and Feminism in the Marston/Peter Comics, 1941-1948.

Besides her superstrength, superspeed, superendurance, and other physical prowess, [Wonder Woman] also has bracelets that she can use to block bullets, an invisible plane, and a magic lasso that compels obedience to her commands (in later iterations, the lasso’s power is often downgraded so that it forces people to tell the truth rather than forcing them to obey any command).

What is “The Hanged Man”?


“The Hanged Man” was an online Synchronet BBS Home Screen alias or “handle” I adopted somewhere around 1994, when I didn’t even have access to the internet and instead was using local dialup bulletin board systems (BBS’s). I continued to use the name on into the next millennium, when I started a webpage and got an email address. I’ve continued to use it into the present mostly out of inertia. It’s also a fairly memorable handle, and I’ve had the web domain long enough not to want to give it up.


More Dispatches from Pittsburgh

What I’m reading this week: Morality for Humans by Mark Johnson, another broadly Deweyan account of moral deliberation centering moral imagination.
What I’m writing: An overview/plan of my book project and a talk based on it.
Other stuff I’m working on: Learning the ropes as Treasurer of HOPOS (why did I agree to this??); Anjan Chakravartty on scientific realism for our weekly reading group.
What I’m doing for fun: Fun???

I’m keeping very busy here in Pittsburgh, partly because I am not spending enough time here. I just got back from an unplanned trip to Dallas (nothing to worry about), and I’m going back across the Atlantic next week to give a talk.

Joyce Havstad and I have been having an interesting exchange over how to interpret the Argument from Inductive Risk (AIR), based on what Heather Douglas said in her Descartes Lectures. It’s been very helpful for me. Joyce is a delight to collaborate with, even when we’re butting heads on something. I hope to clean that exchange up and post it here on the blog tonight or tomorrow.

Pittsburgh is very hilly, though I’m getting to where I can get around more places without getting winded. I think I’ve pretty much learned the ropes of the public transit system. I’m enjoying being around the Center for Philosophy of Science, though I think I’ll gel with the group more when I don’t have so much traveling to do.

Notes on Fesmire’s John Dewey and Moral Imagination

I enjoyed this book and learned a lot from it. Fesmire reads Dewey together with Martha Nussbaum on Aristotelian practical wisdom and emotion in ethics, and Lakoff and Johnson on embodied metaphor in cognitive semantics, to set forward an account of moral deliberation in which moral imagination plays a central role. (Havelock Ellis, Alasdair MacIntyre, and James T. Farrell all play supporting roles as well.) Typical of much Dewey scholarship, Fesmire’s approach is to think with Dewey about the topic of moral imagination, rather than simply to provide an interpretation of Dewey’s work. The approach has costs and benefits, but for my purposes, it was a useful one.

The brief introduction starts with a silly quote from Havelock Ellis about “The academic philosophers of ethics” and their “slavery to rigid formulas” being “the death of all high moral responsibility.” It proceeds to identify a shift in the center of gravity of ethics, exemplified by the work of Nussbaum, MacIntyre, Nel Noddings, Bernard Williams, Charles Taylor, Owen Flanagan, and Mark Johnson. The shift is away from ultimate moral criteria towards practical wisdom, character, narrative, caring, moral luck, pluralism, and psychological realism. Rules and principles have a role to play in ethics, but not as ultimate criteria or as decision-procedures.

The book is divided into two parts. Part I consists of three chapters and reviews pragmatist philosophy of mind, moral psychology, and epistemology. The focus is on character, habit, belief, reason, and intelligence. A nice summary from Chapter 2:

Classical pragmatism situates reason within the broad context of the whole person in action. It replaces beliefs-as-intellectual-abstractions with beliefs-as-tendencies-to-act, pure reason with practical inquiry, and objectivist rationality with imaginative situational intelligence. (p. 28)

Bain, Peirce, and James play a big role in this first part along with Dewey. Given her focus on ethics and social issues, I wish that Jane Addams had played an equally central role, but she’s rarely given adequate treatment. Nothing in this section will surprise those familiar with classical pragmatism and the way this diverse cast of characters are usually put together into a unified (Dewey-centric) narrative, though Fesmire’s presentation is helpfully clear and concise.

Part II is the account of moral imagination in the context of pragmatist ethics. The first chapter provides the context for pragmatist (Deweyan) ethics more broadly. Some of the key features are: pluralism of ethical principles, factors, or values / value-types; moral deliberation as reconciliation or integration of these often-conflicting factors through inquiry; and the idea that the value of moral rules or principles lies in making salient these independent factors or values. This chapter also introduces the two key ways that imagination plays a role in moral deliberation: empathetic projection as the imaginative adoption of values, perspectives, and attitudes of others, and creatively tapping a situation’s possibilities by imaginatively exploring different aspects of the situation and dramatically rehearsing the possible courses of action they afford. (The latter, Fesmire holds, is Dewey’s main focus.) This kind of imagination is the ability “to see the actual in the light of the possible” (p. 67, quoting Alexander).

Chapter 5 discusses the role of imagination conceived as dramatic rehearsal in moral deliberation. (For Dewey, it is so central that sometimes he just refers to moral deliberation as “dramatic rehearsal.”) Dewey thinks of moral deliberation as a kind of problem-solving inquiry, where the problem arises from conflicts between currently held values in particular situations. For Dewey, deliberation or inquiry requires that rather than just acting in the face of a problem, we step back and withhold immediate action, channeling our conflicting impulses into dramatic rehearsal of possible courses of action. Exploring these possibilities through careful examination of the facts of the situation, bringing prior knowledge to bear, along with dramatic rehearsal, is what intelligent moral deliberation requires; and finally, action is treated as an experimental test of the chosen hypothesis, whose success or failure will modify future conduct. Fesmire then incorporates George Lakoff’s and Mark Johnson’s work in cognitive semantics to argue that the imaginative process depends heavily on metaphor, and that these metaphors are in fact central to our cognitive and linguistic machinery. These metaphors are, of course, embodied, in a way that fits well with Dewey’s emphasis on organism-environment-culture interaction as the scene of human mind.

The last two chapters constitute an extended exploration of the metaphor of “moral deliberation as art.” I was pretty skeptical of the value of this analogy at first, but I was eventually convinced of its utility. One value of the metaphor is that it can help overcome the more dominant metaphor of morality as accounting, according to which well-being is wealth, duty is debt, and moral deeds are transactions. Another valuable feature of the metaphor is that it centers the importance of perceptiveness, creativity, skill, and the response of the Other (the audience) in moral deliberation.

I found myself disappointed in only a few ways. First, there was not enough attention to Dewey’s central distinction between what he calls valuing/evaluation, prizing/appraisal, satisfying/satisfactory, or desires/value judgments. It is there (notably on pp. 96-7), but it doesn’t play a huge role. Second, there was very little discussion of the relationship between science and ethics. Third, there wasn’t much engagement with other contemporary theorists of moral deliberation or practical ethics, besides those (like Nussbaum) who are clearly working in a similar vein of thought to Dewey.

Descartes Lectures – Day 3 (in Tweets)

See my tweets summarizing Day 1 and Day 2

I think I went kind of crazy in the amount of tweeting I did today. But I don’t see how to edit it down for this purpose, so here you go. Again, I’ve included some of the talkback, even though it wasn’t all realtime, and some other live-tweeters.

Heather’s Lecture

Typo! That should be “science literacy” not “science literally.”

Commentaries

The explanation here was complicated, but what she was saying was that it is really possible to have the evidence presented in such a way, without all the little bits, e.g., the way that problems for the account of anthropogenic climate change arose and were responded to.

Afternoon Sessions

Final Panel Discussion

Descartes Lectures – Day 2 (in Tweets)

See my tweets summarizing Day 1 and Day 3

We had another great day of the Descartes Lectures & Conference on “Science, Values, and Democracy” yesterday. Today generated a little more discussion on Twitter, which I inserted, out of chronological order.

Heather’s Lecture

(As I referenced above, we got to the root of this at our discussion at dinner.)

Commentaries

Afternoon Sessions!

My paper was next. Here is the main upshot of my paper:

Ideal of Moral Imagination: Encouraging scientists to recognize decision-points, creatively explore possible choices, empathetically recognize potential stakeholders, and discover morally salient aspects and consequences of the decision via dramatic rehearsal.

After that, I was moderating, and so I didn’t Tweet. But I had to add this:

On to Day 3!

Descartes Lectures – Day 1 (in Tweets)

See my tweets summarizing Day 2 and Day 3

Here’s what happened at Heather Douglas’s Descartes Lectures & the associated conference today. Or at least, what I Tweeted about it.

Preliminary Stuff

Heather’s Lecture: “Science & Values: The Pervasive Entanglement”

Commentary on Douglas’s First Lecture

Didn’t tweet anything else about that, because I was giving the commentary! 😉

Q&A

Afternoon Sessions

It turns out, by the way, that I was wrong about this. “Value-neutral” means something related, but different. There’s no place in Douglas’s view for what Thomas was talking about… but he still wasn’t talking about “value-free expertise”!!

I really like this point.

In part, I think this is because my energy and attention span were really waning.

I won’t post the rest of my Tweets about Alessandra’s talk, because they weren’t very good, due to exhaustion.

Some encouragement

On to Day 2!

Dispatches from Pittsburgh

Greetings from Pittsburgh, PA, somewhere on the border between the neighborhoods of Squirrel Hill and Greenfield. It is nearing the end of my first full day of a roughly 8-month adventure. I’m here for my sabbatical year and on a visiting fellowship at the Center for Philosophy of Science at the University of Pittsburgh, one of the most important institutions in the field still in operation. It’s an honor to be invited to be a visiting fellow. I’m planning to go in tomorrow morning to get acquainted with the Center, fill out paperwork, and properly start my visit.

Since arriving in Pittsburgh, I’ve done a significant amount of walking (and I hope to do a lot more). I have gone grocery shopping and to Target. I’ve figured out the transit system, more or less. I’ve cooked two meals in my rental apartment, which is seeming more homey by the hour.

My plan, while I am here, is to write a book on science & values. It is the area I’ve been working in most since I finished my dissertation, and one where I’ve slowly developed my ideas in bits and pieces in my philosophical articles over the last 7 years. I think I’m finally ready to put it all together, and I think it will take a book to do it. The book will also be informed by the work on ethical decision-making in engineering research and design that I’ve been engaged with for the past several years with my collaborators at UT Dallas.

The book is engaged primarily with the current debates about values in science, but it draws on two other influences. One is the pragmatism of John Dewey, particularly his views on the logic of inquiry, the nature of values, and the role of science in society. The other is the philosophy of science in practice, a tradition that includes (in my view) the early Thomas Kuhn, the later Paul Feyerabend, Norwood Russell Hanson, Nancy Cartwright, John Dupré, and Hasok Chang, and also closely connected with the work of, among others, Peter Galison and Bruno Latour.

The tentative title of the book is “Science and the Moral Imagination.” I’m sure I will post again about the content of the book. The basic ideas behind the project are (1) that the scientific quest for knowledge and the ethical quest for a good life and a just society are deeply interrelated pursuits, ultimately inextricable from one another; (2) that scientific inquiry involves a series of interlocking, contingent, and open choices, which can only be resolved intelligently and responsibly through a process of value judgment; and, (3) that “research ethics” or “responsible conduct of research” should be a process not merely of compliance with prior given principles or edicts, but should involve the creative projection of consequences (in the broadest sense), and evaluation of those consequences. It is this latter (clumsily expressed) point that I hope to capture with the phrase “moral imagination.” To put the point differently, I seek to explicate and defend an ideal for science according to which “seekers of knowledge” ought to “use their creativity to make the world a better place in which to live.”

What I’m reading this week: John Dewey & Moral Imagination: Pragmatism in Ethics by Steven Fesmire and Science, Values, and Democracy (Descartes Lecture Draft) by Heather Douglas.
What I’m writing: My commentary on Heather’s Lecture #1 on “Science and Values,” and my presentation for the Descartes Lectures Conference. (Why did I say I would do both??)
Other stuff I’m working on: Learning my way around Pittsburgh; establishing a routine; improving my diet and exercise; getting into the habit of blogging more.
What I’m doing for fun: Walking; reading The Waste Lands by Stephen King; meeting new people.

A question of authorship

I am trying to finish my paper on William Moulton Marston, and I am having significant difficulty deciding how to credit the scientific writings usually attributed to Marston alone. Here’s how I describe the problem in the paper:

Marston’s work and his personal relationships were deeply intertwined. Elizabeth Holloway held steady work for most of her life, including a long editorial stint at Encyclopedia Britannica, supporting Marston when he was having trouble finding (and keeping) work. She was not only an inspiration and silent collaborator in much of Marston’s work; he also often gave her credit. In Emotions of Normal People he reports on the results of experiments they had designed and performed together (370); elsewhere he reports that she “collaborated very largely” with him on the book (Lepore, 144). She is a credited co-author of the textbook Integrative Psychology. Olive Byrne received a master’s degree in psychology from Columbia, and she pursued but did not complete her PhD there (Lepore 124-5). Emotions of Normal People incorporated not only the research that Byrne had assisted Marston with at Tufts, but her entire master’s thesis on “The Evolution of the Theory and Research on Emotions” (Lepore 124-8). When it comes to authorship, Lepore points out:

[T]here is an extraordinary slipperiness… in how Marston, Holloway, and Byrne credited authorship; their work is so closely tied together and their roles so overlapping that it is difficult to determine who wrote what. This seems not to trouble any of them one bit. (ibid., 127)

Thus, when examining the work of “William Moulton Marston,” it is crucial to keep in mind that said work is likely a collaborative production of (at least) Marston with Holloway or Byrne, if not both. It is tempting, then, to refer to “Marston, Holloway, and Byrne” or “Marston et al.” or “the Marstons” when describing “Marston’s” psychological contributions.

After this point, and throughout the paper, I have to discuss Marston’s record of publications, his psychological theories, his experiments, and so on. Currently, I refer to “Marston” in discussing works which list him as sole author, as well as the ideas cited in those works, and “Marston et al.” only in his one major co-authored publication (co-authored with Elizabeth Holloway Marston and C. Daly King). I’m unhappy with this approach, but also feel that doing one of the other things suggested above would be rather cumbersome.

Perhaps the fact that Marston, Holloway, and Byrne didn’t care much about it means I shouldn’t care much either. But what was expedient in their time is much more blatantly sexist in ours. Obviously, the citations in the bibliography should remain as they are, but the discussions in the text are a different story.

Excerpts from Socrates’ Journal

From recently discovered fragments, sent by Socrates to Plato in his capacity as editor of the right-wing conspiracy journal, The Dialogues:

Socrates’ journal, October 12, 399 BCE: Dog carcass in agora this morning. Chariot tread on burst stomach. The city is afraid of me. I have seen its true face…

Socrates’ journal, October 13: …On Friday night, a poet died in Athens. Somebody knows why. Down there…somebody knows. The dusk reeks of unclear ideas and bad definitions. I believe I shall take my exercise.

October 21: Left Glaucon’s house at 2:35 A.M. He knows nothing about any attempt to discredit Parmenides. He has simply been used. By whom? Spartans seem obvious choice…

November 1: If reading this now, whether I am alive or dead, you will know truth. Whatever the precise nature of this conspiracy, Meletus, Anytus, and Lycon responsible. Have done best to make this legible. Believe it paints a disturbing picture. Appreciate your recent support and hope world survives long enough for this to reach you. But phalanxes are in Piraeus and writing is on wall. For my own part, regret nothing. Have lived life, free from compromise…and step into the shadow now without complaint.