Dispatches from Pittsburgh

Greetings from Pittsburgh, PA, somewhere on the border between the neighborhoods of Squirrel Hill and Greenfield. It is nearing the end of my first full day of a roughly eight-month adventure. I’m here for my sabbatical year and on a visiting fellowship at the Center for Philosophy of Science at the University of Pittsburgh, one of the most important institutions in the field still in operation. It’s an honor to be invited to be a visiting fellow. I’m planning to go in tomorrow morning to get acquainted with the Center, fill out paperwork, and properly start my visit.

Since arriving in Pittsburgh, I’ve done a significant amount of walking (and I hope to do a lot more). I have gone grocery shopping and to Target. I’ve figured out the transit system, more or less. I’ve cooked two meals in my rental apartment, which is seeming more homey by the hour.

My plan, while I am here, is to write a book on science & values. It is the area I’ve been working in most since I finished my dissertation, and one where I’ve slowly developed my ideas in bits and pieces in my philosophical articles over the last 7 years. I think I’m finally ready to put it all together, and I think it will take a book to do it. The book will also be informed by the work on ethical decision-making in engineering research and design that I’ve been engaged with for the past several years with my collaborators at UT Dallas.

The book is engaged primarily with the current debates about values in science, but it draws on two other influences. One is the pragmatism of John Dewey, particularly his views on the logic of inquiry, the nature of values, and the role of science in society. The other is the philosophy of science in practice, a tradition that includes (in my view) the early Thomas Kuhn, the later Paul Feyerabend, Norwood Russell Hanson, Nancy Cartwright, John Dupré, and Hasok Chang, and that is also closely connected with the work of, among others, Peter Galison and Bruno Latour.

The tentative title of the book is “Science and the Moral Imagination.” I’m sure I will post again about the content of the book. The basic ideas behind the project are (1) that the scientific quest for knowledge and the ethical quest for a good life and a just society are deeply interrelated pursuits, ultimately inextricable from one another; (2) that scientific inquiry involves a series of interlocking, contingent, and open choices, which can only be resolved intelligently and responsibly through a process of value judgment; and, (3) that “research ethics” or “responsible conduct of research” should be a process not merely of compliance with prior given principles or edicts, but should involve the creative projection of consequences (in the broadest sense), and evaluation of those consequences. It is this latter (clumsily expressed) point that I hope to capture with the phrase “moral imagination.” To put the point differently, I seek to explicate and defend an ideal for science according to which “seekers of knowledge” ought to “use their creativity to make the world a better place in which to live.”

What I’m reading this week: John Dewey & Moral Imagination: Pragmatism in Ethics by Steven Fesmire and Science, Values, and Democracy (Descartes Lecture Draft) by Heather Douglas.
What I’m writing: My commentary on Heather’s Lecture #1 on “Science and Values,” and my presentation for the Descartes Lectures Conference. (Why did I say I would do both??)
Other stuff I’m working on: Learning my way around Pittsburgh; establishing a routine; improving my diet and exercise; getting into the habit of blogging more.
What I’m doing for fun: Walking; reading The Waste Lands by Stephen King; meeting new people.

A question of authorship

I am trying to finish my paper on William Moulton Marston, and I am having significant difficulty deciding how to credit the scientific writings usually attributed to Marston alone. Here’s how I describe the problem in the paper:

Marston’s work and his personal relationships were deeply intertwined. Elizabeth Holloway held steady work most of her life, including a long editorial stint at Encyclopedia Britannica, supporting Marston when he was having trouble finding (and keeping) work. She was not only an inspiration and silent collaborator in much of Marston’s work; he often gave her credit. In Emotions of Normal People he reports on the results of experiments they had designed and performed together (370); elsewhere he reports that she “collaborated very largely” with him on the book (Lepore, 144). She is a credited co-author of the textbook Integrative Psychology. Olive Byrne received a master’s degree in psychology from Columbia, and she pursued but did not complete her PhD there (Lepore 124-5). Emotions of Normal People incorporated not only the research that Byrne had assisted Marston with at Tufts, but her entire master’s thesis on “The Evolution of the Theory and Research on Emotions” (Lepore 124-8). When it comes to authorship, Lepore points out:

[T]here is an extraordinary slipperiness… in how Marston, Holloway, and Byrne credited authorship; their work is so closely tied together and their roles so overlapping that it is difficult to determine who wrote what. This seems not to trouble any of them one bit. (ibid., 127)

Thus, when examining the work of “William Moulton Marston,” it is crucial to keep in mind that said work is likely a collaborative production of (at least) Marston with Holloway or Byrne, if not both. It is tempting, then, to refer to “Marston, Holloway, and Byrne” or “Marston et al.” or “the Marstons” when describing “Marston’s” psychological contributions.

After this point, and throughout the paper, I have to discuss Marston’s record of publications, his psychological theories, his experiments, and so on. Currently, I refer to “Marston” in discussing works which list him as sole author, as well as the ideas cited in those works, and “Marston et al.” only for his one major co-authored publication (with Elizabeth Holloway Marston and C. Daly King). I’m unhappy with this approach, but I also feel that doing one of the other things suggested above would be rather cumbersome.

Perhaps the fact that Marston, Holloway, and Byrne didn’t care much about it means I shouldn’t care much either. But what was expedient in their time is much more blatantly sexist in ours. Obviously, the citations in the bibliography should remain as they are, but the discussions in the text are a different story.

Duck Genitals and Feminist Science Studies


Spring 2013 saw another round of misguided right-wing attacks on basic scientific research in the U.S. Congress, a political tactic that purports to demonstrate the wastefulness of the federal government by showing off the price tag (often small in terms of scientific research budgets) for obscure research that can be described in ways that make it sound goofy or idiotic. This time around, it piqued my interest a good bit more, because it brought national media attention to one of my favorite bits of biological research: Patricia Brennan’s work on duck genitalia. (Brennan wrote a wonderful defense of her research for Slate. Even the Union of Concerned Scientists weighed in.)

Why do I love this research so much? The biology is interesting, yes (more on that in a minute), but also, as a philosopher of science with a long-standing interest in feminist science studies, I see it as following the exact structure of some of the classic cases from that literature. That is, Brennan’s work exemplifies the pattern of women entering a field of research dominated by men and revolutionizing and improving the methods and theories in that field. It is thus similar to the earlier cases of primatology as described by Donna Haraway—where scientists hadn’t paid much attention to the behavior of female primates and ended up with theories in which their roles were entirely passive—and reproductive cell biology as described by (inter alia) Emily Martin—where the “Prince Charming/Sleeping Beauty” theory of sperm/egg fertilization was a going idea, I kid you not.

To get the basics, let’s start with this “True Facts” video by Ze Frank:


Excerpts from Socrates’ Journal

From recently discovered fragments, sent by Socrates to Plato in his capacity as editor of the right-wing conspiracy journal, The Dialogues:

Socrates’ journal, October 12, 399 BCE: Dog carcass in agora this morning. Chariot tread on burst stomach. The city is afraid of me. I have seen its true face…

Socrates’ journal, October 13: …On Friday night, a poet died in Athens. Somebody knows why. Down there…somebody knows. The dusk reeks of unclear ideas and bad definitions. I believe I shall take my exercise.

October 21: Left Glaucon’s house at 2:35 A.M. He knows nothing about any attempt to discredit Parmenides. He has simply been used. By whom? Spartans seem obvious choice…

November 1: If reading this now, whether I am alive or dead, you will know truth. Whatever the precise nature of this conspiracy, Meletus, Anytus, and Lycon responsible. Have done best to make this legible. Believe it paints a disturbing picture. Appreciate your recent support and hope world survives long enough for this to reach you. But phalanxes are in Piraeus and writing is on wall. For my own part, regret nothing. Have lived life, free from compromise…and step into the shadow now without complaint.

Philosophy, Funding, and Conflict of Interest


A couple of weeks back, Justin Weinberg at the Daily Nous posed a really interesting question. The context was Daniel Dennett’s review of Alfred Mele’s book Free: Why Science Hasn’t Disproved Free Will. Dennett gives a relatively standard story about conflict of interest in science funding using a hypothetical story of saturated fat research funded by “the Foundation for the Advancement of Bacon.” On standard accounts, we are right to apply a higher level of scrutiny towards research whose funding displays a potential conflict of interest, and this is why, e.g., we have COI reporting requirements in certain journals and for federally funded research.

Dennett then points out that Mele’s work is funded by the John Templeton Foundation, which (simplifying a bit) has as its ultimate agenda the integration of science and religion, and which lately has been funding large projects that involve philosophers, scientists, and theologians working together on a shared theme, like Free Will or Character. Mele has received and managed two such grants.

Here’s Justin:

Mele’s project is not the only Templeton-funded philosophy project, nor is Templeton the only source of funds with an agenda. Dennett is claiming that funding from ideological sources casts a shadow on philosophical research in much the same way that funding from industry casts a shadow on scientific research. Is he correct?

Unfortunately, the question was lost as the thread got hijacked by a lot of nonsense, by specific details about Templeton and Dennett’s neo-atheist anti-Templeton agenda, and by some understandable worries about the pragmatic implications of Dennett’s statements for Mele’s character. Most egregious were the many denials that conflict of interest is an issue in science at all, as if concerns about it amounted to a fallacious ad hominem argument. For instance, Terrance Tomkow and Kadri Vihvelin claim “the motives of the researcher or his employers are always beside the scientific point.” Dennett answered this point well when he said,

As for Tomkow and Vihvelin’s high-minded insistence that one is obliged “to ignore” the sponsorship of research, I wonder what planet they have been living on recently. Why do they think that researchers have adopted the policy of always declaring the sources of their funding?

Or as Richard Zach said, “It’s as if the past few decades of work on values in science didn’t happen.”

I think Justin’s original question is interesting, though, because it encourages us to think past the specific details of Mele’s book, Dennett’s critique, and the Templeton foundation. Maybe it is because I work at a STEM university, but I often hear talk that the humanities are going to have to move more towards extramural funding. For philosophers, Templeton is where the big money is, but there are also plenty of smaller private foundations, donors funding endowed Chairs (as Zara pointed out), and so on. It’s a timely question. And it is one that invites us to reflect on the similarities and differences between the sciences and philosophy (or the humanities more broadly). I wish more commenters had taken up the call.

I would suggest one analogy and one major disanalogy between science and philosophy in regards to conflict of interest. The analogy is, if I understood him right, what Dennett was getting at: funding applied on a large scale can alter, or even distort, the research agenda of a discipline. And evaluating that will require us to think about what the research agenda ought to look like.

The importance of research agendas in science is the centerpiece of Philip Kitcher’s Science, Truth, and Democracy and Science in a Democratic Society. He describes the ideal research agenda for science or a scientific discipline as well-ordered science (WOS), and he argues persuasively that not only epistemic and internal disciplinary values, but also practical and ethical values are central to determining what counts as WOS. Further, he argues that WOS should be evaluated democratically, in some sense. Because science is a social institution, it is ultimately responsible for serving the public. Kitcher also rightly recognizes the roles of funding sources and individual choices in actually setting research agendas, and argues that individual scientists have a duty to stand up and fight for change when the research agenda in their field is very far from well-ordered.

Likewise, we could ask about what “well-ordered philosophy” would look like. Presumably, many philosophers (like many scientists) would argue that notions of intrinsic intellectual/philosophical merit, strength of argument, and freedom of research should determine the ideal research agenda. I, and I suspect Kitcher as well, would prefer pragmatic, ethical, and political considerations to play a role. Either way, we can ask whether and how funding sources are moving us towards or away from a well-ordered research agenda.

Mele’s work discusses Free Will, argues that contrary to some triumphalist claims, the sciences haven’t settled the question yet, criticizes some of those claims by scientists, and is agnostic about whether free will is compatible with determinism. I’m not sure how those things fit with the ideological agenda of Templeton, though I can understand the feeling that they do, somehow. And insofar as Templeton wants to stay a major player in funding research on Free Will, we could see more of this sort of thing and less of other approaches. Zooming out to the context that Justin invites us to consider, it is worth wondering what the effects of funded research can be on the research agenda of philosophy, and it is worth deliberating about whether some funding sources should be considered a problematic conflict of interest, Templeton included. (My own view, held tentatively, is that Templeton is alright in this respect but should be closely monitored.) But note also that until one has a sense that funding agencies are having a systematic effect, it doesn’t seem reasonable to criticize individuals in the way that Dennett does (if implicitly).

The disanalogy I would like to mention has to do with the different types of arguments that are made in empirical science and in philosophy. Philosophical arguments are usually scholarly while scientific arguments are generally technical. I mean these in a specific sense inspired by Bruno Latour’s Science in Action (N.B., these terms aren’t the ones Latour uses). To make an argument in philosophy requires nothing more than a library card, writing implements, the ability to adopt the current style of the literature in the field you wish to contribute to, and the ability to craft an argument. Scholarly arguments can be evaluated on their surface—you need only to examine the content of the text itself, and perhaps the cited sources, to understand the argument or produce a counter-argument.

Some elements of scientific texts can be evaluated in this way. But scientific arguments are also technical. In particular, much of the argument hangs on what Latour calls inscriptions—tables, charts, graphs, and figures—produced by instruments. There are hard limits to how far one can interrogate a technical text. One can raise questions about certain inferences and interpretations, and one can examine the equipment and materials that produce the data and the inscriptions, at least, as long as one has an invitation to the relevant laboratory and the patience of one’s host. But past a certain point, making an effective counter-argument requires a counter-laboratory with instruments producing inscriptions that can be used in arguments. To a large extent, the technical nature of modern science is a major source of its power and effectiveness; but a cost is that we have to rely on trust to a greater extent. And conflict of interest is at least a pro tanto reason to withhold that trust, whereas trust is not at issue in philosophical arguments in the same sense.

So while it is incorrect for Jim Griffis to say that “If the ‘science is impeccable, carefully conducted and rigorously argued’ there would be no problem with who paid for the research,” because of the technical nature of science, he is right to say that “for philosophical works, either the argument is cogent or it’s not.”

Full disclosure: I have previously applied for (but not received) Templeton funding.

Wonder Woman’s Lasso of Truth


I’ve just begun reading Jill Lepore’s new book about Wonder Woman and William Moulton Marston. So far, I’m finding it to be really thorough and excellent! [Edit: My final assessment was much more mixed.] I was a little disturbed, though, to discover that on the first page of the preface, Lepore makes a basic mistake about one of the key features of the early Wonder Woman comics:

“She had a magic lasso; anyone she roped had to tell the truth” (xi).

She repeats the point in one of the color plates in the middle of the book, which does show Wonder Woman compelling a thug to tell the truth. The accompanying text that connects this work to lie detectors and Marston’s work on deception is ultimately misleading, however.

What’s the problem here? Isn’t it called “the Lasso of Truth?” The problem, as Brian Cronin pointed out a couple of years ago at Comic Book Resources, is that this is actually anachronistic. Marston called this iconic element of Wonder Woman’s gear the “Magic Lasso” or sometimes “Golden Lasso,” not the lasso of truth. And its power has nothing specific to do with the truth, but rather with compelling obedience.


It’s a tempting connection to draw. Marston, after all, invented the lie detector test, or at least, he’s one of its most recognizable developers and proponents.

Here’s Geoffrey Bunn, one of the few historians of psychology to write in detail about Marston:

“Anyone caught in the lasso found it impossible to lie. And because Wonder Woman used it to extract confessions and compel obedience, the golden lasso was of course nothing less than a lie detector.” (Bunn 1997, p. 108)

The real story behind Wonder Woman’s magic lasso is much more interesting and much stranger. Marston was an experimental psychologist who developed a theory of emotions. According to his theory, the four basic emotions were Dominance, Compliance, Inducement, and Submission. According to Marston, submission was a matter of giving over one’s will to a basically friendly stimulus; not only was it necessarily a pleasant emotion, but it was a necessary component of love and thus of a healthy psyche. What the magic lasso was able to do, it seems, was to place the person bound in an automatic state of submission to the will of the lasso’s wielder, making them happy to do whatever they were asked. Including, occasionally, to tell the truth when they intended to deceive.

However, more often than not, when Wonder Woman wanted to know whether someone was telling the truth, she’d make use of the very tool that Marston invented for that purpose, a lie detector test based on systolic blood pressure measurements.



Above I called the mistake an anachronism because, while Marston never used the term “Lasso of Truth,” present-day comics do refer to it by that name. According to Cronin, this usage began in Wonder Woman volume 2 #2 (1987; Writer: Greg Potter, Artist: George Pérez, Editor: Karen Berger). This is the post-Crisis reboot of Wonder Woman, meaning that it occurred after the Crisis on Infinite Earths mini-series that altered the continuity of the DC Comics universe. Presumably, the creators knew about Marston’s interests in lie detection and decided to change the powers and name of the lasso accordingly. (On the other hand, the commenters on Cronin’s piece suggest that the usage comes from the Wonder Woman TV show starring Lynda Carter, so perhaps the connection was made by the creators of the show.)

UPDATE: Noah Berlatsky gets it right in his new book, Wonder Woman: Bondage and Feminism in the Marston/Peter Comics, 1941-1948.

Besides her superstrength, superspeed, superendurance, and other physical prowess, [Wonder Woman] also has bracelets that she can use to block bullets, an invisible plane, and a magic lasso that compels obedience to her commands (in later iterations, the lasso’s power is often downgraded so that it forces people to tell the truth rather than forcing them to obey any command).

Indirect and Direct Roles for Values in Science and in Ethics

[TL;DR: If a direct role for values is illegitimate in science, it is also illegitimate in any ethical or practical reasoning about what to do in particular cases, or in any evaluation of the rightness or goodness of an action. The direct/indirect role distinction does not distinguish science from action.]

Those who defend or presume the value-ladenness of science are obligated to provide a response to what I call “the problem of wishful thinking,” viz., the epistemic problem of how to prevent value-laden science from leading us to believe whatever we wish, to conclude the world is the way we wish it to be, and thus to destroy the integrity and reliability of science.

One way of dealing with the problem of wishful thinking has been to restrict the type of values allowed to play a role in science to epistemic values. This is not a move most proponents of value-laden science will accept, as they are precisely concerned with the legitimacy of non-epistemic values in science. And if the “epistemic” values include such familiar values as the scope or simplicity of a theory, the restriction is also insufficient to avoid the problem of wishful thinking: it may lead us to conclude that the world is simple or covered by a relatively small number of laws without any evidence to that effect.[^1]

Another important attempt to deal with the problem of wishful thinking is Heather Douglas’s introduction of the direct/indirect role distinction, and the prohibition on the use of values in the direct role in the internal processes of science. Here is how Douglas defines indirect and direct:

In the first direct role, the values act much the same way as evidence normally does, providing warrant or reasons to accept a claim. In the second, indirect role, the values do not compete with or supplant evidence, but rather determine the importance of the inductive gaps left by the evidence. More evidence usually makes the values less important in this indirect role, as uncertainty reduces. (Douglas 2009, 96).

The direct role is permissible in certain, relatively “external” decisions in science. For example, we may appeal directly to ethical or social values to defend the decision to pursue some research project over others; e.g., the decision to research improved treatments for malaria rather than improved treatments for male pattern baldness might be directly justified by the better realization of justice or alleviation of suffering of the former over the latter. Likewise, restrictions on research methods involving human subjects, such as the requirement of informed consent and no unnecessary harm, should be directly justified by appeal to values, such as respect for persons and non-maleficence.

The direct role, according to Douglas, is impermissible in internal decisions such as how to characterize data and whether or not to accept a hypothesis based on the evidence. Here, values may indirectly influence the standards of evidence, the amount or strength of evidence we require to accept or reject, but cannot tell directly for or against the hypothesis.

So, on Douglas’s account, there is a distinction to be made between practical decision-making that is directly grounded by values, and scientific inference that is directly grounded by evidence and only indirectly warranted by values. Some philosophers have questioned the clarity of this account (e.g., Elliott 2011), or its appropriateness to the epistemic tasks of scientific inference (Mitchell 2004), but that will not be my tack here. I want to start by questioning Douglas’s account of practical reasoning. I believe that the problem of wishful thinking is as much a problem for practical reasoning as for scientific inference, and that the “direct” role for values is as unacceptable in ethical decision-making as it is in scientific inference. If I’m right about this, then Douglas’s account of the structure of values in science needs to be revised, and the indirect/direct role distinction is inadequate for distinguishing between science and action or science and ethical decision-making.

Consider some very simple cases of practical decision-making.

  1. SUIT: Suppose I am out to buy a suit, and I value both affordability and quality in making such a purchase. It would be wishful thinking to assume that any suit I buy will promote these values. In order to make a decision about which suit to buy, I need to gather evidence about the available suits on which to base my decision. My values tell me what kind of evidence is relevant. But they cannot act as reasons for or against any choice of suit directly.
  2. HIRING: Suppose I am trying to decide whom to hire among a number of job candidates. On the one hand, I pragmatically value hiring the person with the best skills and qualifications. On the other hand, I have an ethical/political obligation to uphold fairness and foster diversity. Neither the pragmatic nor the ethical values tell directly for or against choosing any candidate. I need evidence about particular candidates to know their qualifications. I also need to know about the theories and results of implicit bias research to know what kinds of evidence to downplay or to keep myself unaware of while making the decision.
  3. HONESTY: Suppose I am a Kantian about lying – it is never permissible. Still, this value does not dictate on its own what speech-acts I should make and refrain from in any particular case. I must at least examine what I know or believe to be true. It would be wishful thinking to assume I was being honest with anything I was inclined to say absent information about whether or not I believed it to be the case. Perhaps I even need to examine my evidence for p before I can assert confidently that p in order to uphold this value.
  4. METHODS: Suppose I am on the IRB at my university. In order to responsibly assess the permissibility of a particular research protocol, I cannot rely directly on the principles of respect, beneficence, non-maleficence, and justice to decide. Instead, I must carefully read the research protocol and understand what it in fact proposes to do, and I must speculate on possible consequences of the protocol, before I can evaluate the protocol and its consequences.

So, values in these cases do not act directly as reasons for or against a decision. I take it that this is in conflict with Douglas’s implied account of practical reason in Science, Policy, and the Value-Free Ideal (2009). If there is any realm in which values themselves act as grounds in inferences, it may be in pure normative theorizing, the kind that ethicists do when they’re doing “normative ethical theory” or political philosophers do when they’re doing “ideal theory.” Values can serve as direct grounds only for claims about other values (if they can do that at all), not for claims about actions. But these are not the kinds of activities that Douglas points at as “direct” uses of values. Indeed, METHODS is just the sort of case that she uses to explain the direct role.

Values in these cases are acting indirectly to connect evidence to claims or conclusions (in particular, about how to act). Is this the same sort of indirect role that she recommends for values in science? We might think so. Just as the value of affordability tells us to look for evidence about prices in SUIT, the relative weight we place on the value of safety tells us to look for a certain kind and weight of evidence when doing a risk assessment for the toxicity of a chemical.

Douglas could revise her account to insist that scientific inference be more indirect than the cases I’ve discussed here. While the absolute value I place on not lying in HONESTY cannot directly tell me what to say in any particular situation, it does tell me what sort of evidence I need (viz., that I believe the proposition) to justify the decision to speak. Douglas could insist that in scientific cases, not only is it illegitimate for values to directly justify, e.g., accepting or rejecting a hypothesis, they also cannot directly tell us whether some evidence supports the hypothesis or is sufficient for acceptance. Rather, the only thing that can legitimately make the connection between evidence and hypothesis is something like a methodological rule, e.g., the rule that the evidence must meet a statistical significance level of p=0.01 to be sufficient for acceptance. Then the only permissible role for values would be the even more indirect role of supporting such methodological rules. The ground for the conclusion is the data itself. The data warrants the conclusion because it meets the methodological criterion (p=0.01). That criterion is appropriate in this case because of our values (the values we place on the consequences of error).
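To make this more stringent structure concrete, here is a minimal sketch in code. This is my own illustration, not anything from Douglas; the function names and cost numbers are hypothetical. The point is just that values appear only in the step that fixes the methodological rule (the significance threshold), while the acceptance decision itself consults only the data and the rule.

```python
# Hypothetical illustration of the "even more indirect" role for values:
# values fix a methodological rule (the significance threshold), and only
# that rule then connects evidence to the acceptance decision.

def choose_alpha(cost_false_positive: float, cost_false_negative: float) -> float:
    """Values enter here, indirectly: weighing the consequences of each
    kind of error selects the threshold, not the conclusion itself."""
    # If false negatives (e.g., missing a real hazard) are judged worse,
    # adopt a more permissive threshold; otherwise demand stronger evidence.
    return 0.05 if cost_false_negative > cost_false_positive else 0.01

def accept_hypothesis(p_value: float, alpha: float) -> bool:
    """Only the data and the rule appear here; the values are already
    'baked into' alpha and play no direct role in the inference."""
    return p_value < alpha

# A risk assessment where missing a real hazard is judged ten times worse
# than a false alarm:
alpha = choose_alpha(cost_false_positive=1.0, cost_false_negative=10.0)
print(alpha)                           # 0.05
print(accept_hypothesis(0.03, alpha))  # True: the evidence meets this standard
print(accept_hypothesis(0.03, 0.01))   # False under a stricter standard
```

The same p-value yields different verdicts under different thresholds, which is exactly how the consequences of error can matter to acceptance without the values ever telling directly for or against the hypothesis.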

This might or might not be a reasonable way to go. The prohibition on the direct role was justified by the need to resolve the problem of wishful thinking, and I can see why (and have assumed that) this argument is compelling. But I cannot see that the revised, more stringent version is needed to resolve the problem of wishful thinking, and so I am not sure why the additional prohibition it involves would be warranted.

In “Values in Science beyond Underdetermination and Inductive Risk,” I argued that the main lines of argument in the science and values literature go awry in making bad assumptions about the nature of values and ethical/practical reasoning. I think this point is of a piece with that argument. I’m interested to hear what folks think about it!

[^1]: This is, in my opinion, one of the most brilliant moves in (Douglas 2009).

New Online Situations

So, I am rearranging my life on the web a bit. I’ve put up a new “professional” homepage at http://matthewjbrown.net/. I’ve also got a new place to post classes at http://classes.matthewjbrown.net/, though only my current courses are up there at present.

I’ve also decided to move my posts about bourbon and whiskey over to The Whiskey Philosopher. I hope to find time to develop that further in the future.

For now, I’ll just let you wonder what http://commandlineonly.org/ is about.

I am absolute shite at WordPress theming so if anyone has any recommendations, please leave them in the comments.

John Dewey on Truth and Values in Science

This post is an abbreviated form of what I have come to think of as the most interesting part of a paper I’m working on for the volume of papers from this summer’s conference at Notre Dame on “Cognitive Attitudes and Values in Science”. For some of the background here, see my 2012 HOPOS paper, “John Dewey’s Logic of Science”.

According to Dewey in Logic: The Theory of Inquiry (1938), inquiry is the attempt to manage situations of agent-environment discordance (what he calls “indeterminate” or “problematic” situations) that interrupt the agent’s practices and activities, in order to restore unity, determinateness, and harmony to the situation and allow the impeded practice and activity to continue. The conclusion of inquiry is called “judgment.” Judgment is not just a statement of what is going on and a hypothesis of what is to be done; it is a decision to act so as to resolve the problematicity and indeterminateness of the situation that occasioned it. In Dewey’s terms, a judgment has “direct existential consequences.”

Judgments of inquiry are thus what Dewey called “judgments of practice” (see especially the final essay in Essays in Experimental Logic, “The Logic of Judgments of Practice”). Practical judgments are about “things to do or be done, judgments of a situation demanding action” (MW 8:14). This is, by the way, Dewey’s best definition of his pragmatism: pragmatism is the hypothesis that all judgments are judgments of practice.

Dewey points out that judgments of practice have peculiar truth conditions:

Their truth or falsity is constituted by the issue. The determination of end-means… is hypothetical until the course of action indicated has been tried. The event or issue of such action is the truth or falsity of the judgment… In this case, at least, verification and truth completely coincide. (LJP, MW 8:14)

If my judgment is “I should buy this suit,” then that judgment was true if doing so worked out: if the consequences of the judgment are satisfying, if they fulfill the needs that prompted buying the suit, if they have no unintended negative consequences, and if I do not feel regret for my decision, then it was right to say that I should buy it. What else could the truth of a judgment of practice involve? And indeed, there is a straightforward way in which the truth of the judgment is due to correspondence—the judgment corresponded with the future consequences it intended.

From a pragmatist point of view, science is a practice, and scientific inquiry, like all inquiry, is an attempt to resolve an indeterminate situation that arises in that practice. The form of the final judgment that concludes an inquiry is what Dewey called a “judgment of practice.” Like all practical judgments, scientific judgments are true or false according to their consequences. This is not the vulgar pragmatism that would measure the truth of a proposition according to whether the consequences of believing it are congenial. Rather, the consequences in question are tied to the consequences intended by the judgment. Since every judgment involves a solution to a particular problem and a transformation of an indeterminate situation, the truth of that judgment is determined by whether the transformation of the situation, carried out, resolves the problem and eliminates the specific indeterminacy in question.

We can thus provide the following definition of truth:

A judgment J that concludes an inquiry I is a decision to act in a certain way in order to resolve a problematic situation S that occasioned I.

J is true in S iff J resolves S, i.e., if it transforms S from an indeterminate to a determinate situation.

According to Dewey, judgment is a species of action, and indeed a species that can have serious consequences, as it tends to transform human practices and the environments in which they take place. Judgment is a decision to act in a situation in order to resolve the problem that occasioned it. It has direct existential consequences. That judgment is true (or false) in that situation insofar as it succeeds (or fails) in resolving that problem. Both judgment and truth are value-laden on this account.

Judgment is value-laden primarily due to our ordinary ethical and social responsibilities. When we decide to act, it is appropriate to hold us accountable to the relevant norms of action. When our actions have consequences that impact our lives, we have an obligation to weigh those consequences when making a decision. Judgments transform our environments and our practices. Within the limits of what can successfully resolve a problematic situation, we are obligated to make choices in accordance with our best value judgments.

Truth is likewise value-laden, for much the same reason. What counts as an adequate solution depends on what we care about. How sensitive we are to the ways our practices impact others, the environment, and so on will change whether we are able to carry on with the practice or whether it becomes indeterminate. Value judgments alter what we may regard as true.

Dewey was concerned to show that the advancement of science does not require an abandonment of social responsibility.

My hypothesis is that the standpoint and method of science do not mean the abandonment of social purpose and welfare as rightfully governing criteria in the formation of beliefs… (“The Problem of Truth” MW 6:57)

Our judgments (or our beliefs, if you prefer), are not mere attempts to mirror a static world beyond us, but are attempts to manage and change the world to render the precarious stable, the problematic straightforward, the doubtful trustworthy. Knowing and doing are intimately connected; the act of knowing modifies the thing known. We can thus only answer the question of what we know by appealing, in part, to what we care about—ethically, politically, and socially.

My NDPR Review of Wright, Explaining Science’s Success

Some of you may have already seen my review of John Wright’s Explaining Science’s Success: Understanding How Scientific Knowledge Works that appeared yesterday at NDPR. I tried to write the kind of review that PD Magnus likes to read:

It isn’t just about the book and what the author says in it. Rather, it offers a critical view of the issue and situates the book in recent discussions. It also treats the book as a bit of philosophy worthy of criticism. This contrasts with the veneer of rhetorical objectivity which bad reviews have.

I don’t know if I really succeeded. Some will surely think my review was overly dismissive. Obviously, I thought the book was Not Very Good. While there are some ideas and arguments in the book that I found interesting, what struck me most about the arguments is that they seemed so irresponsible in the light of the contemporary scene in phil sci.

Anyhow, I’d love to hear what people think of the review, especially the points I made that went beyond Wright’s book itself.