Duck Genitals and Feminist Science Studies


Spring 2013 saw another round of misguided right-wing attacks on basic scientific research in the U.S. Congress, a political tactic that purports to demonstrate the wastefulness of the federal government by showing off the price tag (often small in terms of scientific research budgets) for obscure research that can be described in ways that make it sound goofy or idiotic. This time around, it piqued my interest a good bit more, because it brought national media attention to one of my favorite bits of biological research: Patricia Brennan’s work on duck genitalia. (Brennan wrote a wonderful defense of her research for Slate. Even the Union of Concerned Scientists weighed in.)

Why do I love this research so much? The biology is interesting, yes (more on that in a minute), but also, as a philosopher of science with a long-standing interest in feminist science studies, I see it as following the exact structure of some of the classic cases from that literature. That is, Brennan’s work exemplifies the pattern of women entering a field of research dominated by men and revolutionizing and improving the methods and theories in that field. It is thus similar to the earlier cases of primatology as described by Donna Haraway—where scientists hadn’t paid much attention to the behavior of female primates and ended up with theories where their roles were entirely passive—and reproductive cell biology as described by (inter alia) Emily Martin—where the “Prince Charming/Sleeping Beauty” theory of sperm/egg fertilization was a going idea, I kid you not.

To get the basics, let’s start with this “True Facts” video by Ze Frank:

Continue reading

Excerpts from Socrates’ Journal

From recently discovered fragments, sent by Socrates to Plato in his capacity as editor of the right-wing conspiracy journal, The Dialogues:

Socrates’ journal, October 12, 399 BCE.: Dog carcass in agora this morning. Chariot tread on burst stomach. The city is afraid of me. I have seen its true face…

Socrates’ journal, October 13: …On Friday night, a poet died in Athens. Somebody knows why. Down there…somebody knows. The dusk reeks of unclear ideas and bad definitions. I believe I shall take my exercise.

October 21: Left Glaucon’s house at 2:35 A.M. He knows nothing about any attempt to discredit Parmenides. He has simply been used. By whom? Spartans seem obvious choice…

November 1: If reading this now, whether I am alive or dead, you will know truth. Whatever the precise nature of this conspiracy, Meletus, Anytus, and Lycon responsible. Have done best to make this legible. Believe it paints a disturbing picture. Appreciate your recent support and hope world survives long enough for this to reach you. But phalanxes are in Piraeus and writing is on wall. For my own part, regret nothing. Have lived life, free from compromise…and step into the shadow now without complaint.

Philosophy, Funding, and Conflict of Interest


A couple of weeks back, Justin Weinberg at the Daily Nous posed a really interesting question. The context was Daniel Dennett’s review of Alfred Mele’s book Free: Why Science Hasn’t Disproved Free Will. Dennett gives a relatively standard story about conflict of interest in science funding using a hypothetical story of saturated fat research funded by “the Foundation for the Advancement of Bacon.” On standard accounts, we are right to apply a higher level of scrutiny towards research whose funding displays a potential conflict of interest, and this is why, e.g., we have COI reporting requirements in certain journals and for federally funded research.

Dennett then points out that Mele’s work is funded by the John Templeton Foundation, which (simplifying a bit) has as its ultimate agenda the integration of science and religion, and lately has been funding large projects that involve philosophers, scientists, and theologians working together on a shared theme, like Free Will or Character. Mele has received and managed two such grants.

Here’s Justin:

Mele’s project is not the only Templeton-funded philosophy project, nor is Templeton the only source of funds with an agenda. Dennett is claiming that funding from ideological sources casts a shadow on philosophical research in much the same way that funding from industry casts a shadow on scientific research. Is he correct?

Unfortunately, the question was lost as the thread got hijacked by a lot of nonsense: specific details about Templeton and Dennett’s neo-atheist anti-Templeton agenda, as well as some understandable concern with the pragmatic implications of Dennett’s statements for Mele’s character. Most egregious were the many denials that conflict of interest is an issue in science at all, as if concerns about funding somehow amounted to a fallacious ad hominem argument. For instance, Terrance Tomkow and Kadri Vihvelin claim that “the motives of the researcher or his employers are always beside the scientific point.” Dennett answered this point well when he said,

As for Tomkow and Vihvelin’s high-minded insistence that one is obliged “to ignore” the sponsorship of research, I wonder what planet they have been living on recently. Why do they think that researchers have adopted the policy of always declaring the sources of their funding?

Or as Richard Zach said, “It’s as if the past few decades of work on values in science didn’t happen.”

I think Justin’s original question is interesting, though, because it encourages us to think past the specific details of Mele’s book, Dennett’s critique, and the Templeton foundation. Maybe it is because I work at a STEM university, but I often hear talk that the humanities are going to have to move more towards extramural funding. For philosophers, Templeton is where the big money is, but there are also plenty of smaller private foundations, donors funding endowed Chairs (as Zara pointed out), and so on. It’s a timely question. And it is one that invites us to reflect on the similarities and differences between the sciences and philosophy (or the humanities more broadly). I wish more commenters had taken up the call.

I would suggest one analogy and one major disanalogy between science and philosophy in regards to conflict of interest. The analogy is, if I understood him right, what Dennett was getting at: funding applied on a large scale can alter, or even distort, the research agenda of a discipline. And evaluating that will require us to think about what the research agenda ought to look like.

The importance of research agendas in science is the centerpiece of Philip Kitcher’s Science, Truth, and Democracy and Science in a Democratic Society. He describes the ideal research agenda for science or a scientific discipline as well-ordered science (WOS), and he argues persuasively that not only epistemic and internal disciplinary values, but also practical and ethical values are central to determining what counts as WOS. Further, he argues that WOS should be evaluated democratically, in some sense. Because science is a social institution, it is ultimately responsible for serving the public. Kitcher also rightly recognizes the roles of funding sources and individual choices in actually setting research agendas, and argues that individual scientists have a duty to stand up and fight for change when the research agenda in their field is very far from well-ordered.

Likewise, we could ask about what “well-ordered philosophy” would look like. Presumably, many philosophers (like many scientists) would argue that notions of intrinsic intellectual/philosophical merit, strength of argument, and freedom of research should determine the ideal research agenda. I, and I suspect Kitcher as well, would prefer pragmatic, ethical, and political considerations to play a role. Either way, we can ask whether and how funding sources are moving us towards or away from a well-ordered research agenda.

Mele’s work discusses Free Will, argues that, contrary to some triumphalist claims, the sciences haven’t settled the question yet, criticizes some of those claims by scientists, and is agnostic about whether free will is compatible with determinism. I’m not sure how those things fit with the ideological agenda of Templeton, though I can understand the feeling that they do, somehow. And insofar as Templeton wants to stay a major player in funding research on Free Will, we could see more of this sort of thing and less of other approaches. Zooming out to the context that Justin invites us to consider, it is worth wondering what the effects of funded research can be on the research agenda of philosophy, and it is worth deliberating about whether some funding sources should be considered a problematic conflict of interest, Templeton included. (My own view, held tentatively, is that Templeton is alright in this respect but should be closely monitored.) But also note that until one has a sense that funding agencies are having a systematic effect, it doesn’t seem reasonable to criticize individuals in the way that Dennett does (if implicitly).

The disanalogy I would like to mention has to do with the different types of arguments that are made in empirical science and in philosophy. Philosophical arguments are usually scholarly while scientific arguments are generally technical. I mean these in a specific sense inspired by Bruno Latour’s Science in Action (N.B., these terms aren’t the ones Latour uses). To make an argument in philosophy requires nothing more than a library card, writing implements, the ability to adopt the current style of the literature in the field you wish to contribute to, and the ability to craft an argument. Scholarly arguments can be evaluated on their surface—you need only to examine the content of the text itself, and perhaps the cited sources, to understand the argument or produce a counter-argument.

Some elements of scientific texts can be evaluated in this way. But scientific arguments are also technical. In particular, much of the argument hangs on what Latour calls inscriptions—tables, charts, graphs, and figures—produced by instruments. There are hard limits to how far one can interrogate a technical text. One can raise questions about certain inferences and interpretations, and one can examine the equipment and materials that produce the data and the inscriptions, at least, as long as one has an invitation to the relevant laboratory and the patience of one’s host. But past a certain point, making an effective counter-argument requires a counter-laboratory with instruments producing inscriptions that can be used in arguments. To a large extent, the technical nature of modern science is a major source of its power and effectiveness; but a cost is that we have to rely on trust to a greater extent. And conflict of interest is at least a pro tanto reason to withhold that trust, whereas trust is not at issue in philosophical arguments in the same sense.

So while it is incorrect for Jim Griffis to say that “If the ‘science is impeccable, carefully conducted and rigorously argued’ there would be no problem with who paid for the research,” because of the technical nature of science, he is right to say that “for philosophical works, either the argument is cogent or it’s not.”

Full disclosure: I have previously applied for (but not received) Templeton funding.

Indirect and Direct Roles for Values in Science and in Ethics

[TL;DR: If a direct role for values is illegitimate in science, it is also illegitimate in any ethical or practical reasoning about what to do in particular cases, or any evaluations of the rightness or goodness of a particular action. The direct/indirect role distinction does not distinguish science from action.]

Those who defend or presume the value-ladenness of science are obligated to provide a response to what I call “the problem of wishful thinking,” viz., the epistemic problem of how to prevent value-laden science from leading us to believe whatever we wish, to conclude the world is the way we wish it to be, and thus to destroy the integrity and reliability of science.

One way of dealing with the problem of wishful thinking has been to restrict the type of values allowed to play a role in science to epistemic values. This is not a move most proponents of value-laden science will accept, as they are precisely concerned with the legitimacy of non-epistemic values in science. And if the “epistemic” values include such familiar values as scope or simplicity of a theory, the restriction is also insufficient to avoid the problem of wishful thinking, since it may lead us to conclude that the world is simple or covered by a relatively small number of laws without any evidence to that effect.[^1]

Another important attempt to deal with the problem of wishful thinking is Heather Douglas’s introduction of the direct/indirect role distinction, and the prohibition on the use of values in the direct role in the internal processes of science. Here is how Douglas defines indirect and direct:

In the first direct role, the values act much the same way as evidence normally does, providing warrant or reasons to accept a claim. In the second, indirect role, the values do not compete with or supplant evidence, but rather determine the importance of the inductive gaps left by the evidence. More evidence usually makes the values less important in this indirect role, as uncertainty reduces. (Douglas 2009, 96).

The direct role is permissible in certain, relatively “external” decisions in science. For example, we may appeal directly to ethical or social values to defend the decision to pursue some research project over others, i.e., the decision to research improved treatments for malaria rather than improved treatments for male pattern baldness might be directly justified by the better realization of justice or alleviation of suffering of the former over the latter. Likewise, restrictions on research methods on human subjects, such as the requirement of informed consent and no unnecessary harm, should be directly justified by appeal to values, such as respect for persons and non-maleficence.

The direct role, according to Douglas, is impermissible in internal decisions such as how to characterize data and whether or not to accept a hypothesis based on the evidence. Here, values may indirectly influence the standards of evidence, the amount or strength of evidence we require to accept or reject, but cannot tell directly for or against the hypothesis.

So, on Douglas’s account, there is a distinction to be made between practical decision-making that is directly grounded by values, and scientific inference that is directly grounded by evidence and only indirectly warranted by values. Some philosophers have questioned the clarity of this account (e.g., Elliott 2011), or its appropriateness to the epistemic tasks of scientific inference (Mitchell 2004), but that will not be my tack here. I want to start by questioning Douglas’s account of practical reasoning. I believe that the problem of wishful thinking is as much a problem for practical reasoning as for scientific inference, and that the “direct” role for values is as unacceptable in ethical decision-making as it is in scientific inference. If I’m right about this, then Douglas’s account of the structure of values in science needs to be revised, and the indirect/direct role distinction is inadequate for distinguishing between science and action or science and ethical decision-making.

Consider some very simple cases of practical decision-making.

  1. SUIT: Suppose I am out to buy a suit, and I value both affordability and quality in making such a purchase. It would be wishful thinking to assume that any suit I buy will promote these values. In order to make a decision about which suit to buy, I need to gather evidence about the available suits on which to base my decision. My values tell me what kind of evidence is relevant. But they cannot act as reasons for or against any choice of suit directly.
  2. HIRING: Suppose I am trying to decide who to hire among a number of job candidates. On the one hand, I pragmatically value hiring the person with the best skills and qualifications. On the other hand, I have an ethical/political obligation to uphold fairness and foster diversity. Neither the pragmatic nor the ethical values tell directly for or against choosing any candidate. I need to gather evidence about particular candidates to know their qualifications. I also need to know about the theories and results of implicit bias research to know what kinds of evidence to downplay or to keep myself unaware of while making the decision.
  3. HONESTY: Suppose I am a Kantian about lying – it is never permissible. Still, this value does not dictate on its own what speech-acts I should make and refrain from in any particular case. I must at least examine what I know or believe to be true. It would be wishful thinking to assume I was being honest with anything I was inclined to say absent information about whether or not I believed it to be the case. Perhaps I even need to examine my evidence for p before I can assert confidently that p in order to uphold this value.
  4. METHODS: Suppose I am on the IRB at my university. In order to responsibly assess the permissibility of a particular research protocol, I cannot rely directly on the principles of respect, beneficence, non-maleficence, and justice to decide. Instead, I must carefully read the research protocol and understand what it in fact proposes to do, and I must speculate on possible consequences of the protocol, before I can evaluate the protocol and its consequences.

So, values in these cases do not act directly as reasons for or against a decision. I take it that this is in conflict with Douglas’s implied account of practical reason in Science, Policy, and the Value-Free Ideal (2009). If there is any realm in which values themselves act as grounds in inferences, it may be in pure normative theorizing, the kind that ethicists do when they’re doing “normative ethical theory” or political philosophers do when they’re doing “ideal theory.” Values can only serve as direct grounds for claims about other values (if they can do that), not about actions. But these are not the kinds of activities that Douglas points at as “direct” use of values. Indeed, METHODS is just the sort of case that she uses to explain the direct role.

Values in these cases are acting indirectly to connect evidence to claims or conclusions (in particular, conclusions about how to act). Is this the same sort of indirect role that she recommends for values in science? We might think so. Just as the value of affordability tells us to look for evidence about prices in SUIT, the relative weight we place on the value of safety tells us to look for a certain kind and weight of evidence when doing a risk assessment for the toxicity of a chemical.

Douglas could revise her account to insist that scientific inference be more indirect than the cases I’ve discussed here. While the absolute value I place on not lying in HONESTY cannot directly tell me what to say in any particular situation, it does tell me what sort of evidence I need (viz., that I believe the proposition) to justify the decision to speak. Douglas could insist that in scientific cases, not only is it illegitimate for values to directly justify, e.g., accepting or rejecting a hypothesis; they also cannot directly tell us whether some evidence supports the hypothesis or is sufficient for acceptance. Rather, the only thing that can legitimately make the connection between evidence and hypothesis is something like a methodological rule, e.g., the rule that the evidence must reach a statistical significance level of p < 0.01 to be sufficient for acceptance. Then the only permissible role for values would be the even more indirect role of supporting such methodological rules. The ground for the conclusion is the data itself. That data warrants the conclusion because it meets the methodological criterion (p < 0.01). That criterion is appropriate in this case because of our values (the values we place on the consequences of error).
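This two-step structure can be sketched in code. The sketch below is my own toy illustration, not anything from Douglas: the cost ratio, the threshold rule, and the function names are all invented for the example. The point is only structural: values justify the methodological rule in advance, and then only the data determines acceptance under that rule.

```python
# Toy model of the "even more indirect" role for values: values justify a
# methodological rule (a significance threshold); once the rule is fixed,
# only the data determines whether the hypothesis is accepted.

def threshold_from_values(cost_false_positive: float,
                          cost_false_negative: float) -> float:
    """Value judgment: the worse a false positive is relative to a false
    negative, the stricter the significance threshold we demand."""
    ratio = cost_false_positive / cost_false_negative
    return 0.05 / ratio if ratio > 1 else 0.05

def accept_hypothesis(p_value: float, alpha: float) -> bool:
    """Scientific inference: only the data (via its p-value) and the
    pre-set rule figure here; values do not appear as reasons."""
    return p_value < alpha

# Suppose wrongly accepting that a chemical is safe would be five times
# worse than wrongly rejecting; our values then support a stricter rule:
alpha = threshold_from_values(cost_false_positive=5.0, cost_false_negative=1.0)

print(accept_hypothesis(0.03, alpha))   # False: the evidence falls short of the rule
print(accept_hypothesis(0.004, alpha))  # True: the data itself clears the bar
```

Notice that the values never touch `p_value`; they only ground the choice of `alpha`. That is also why, as Douglas says, more evidence makes the values less important: the stronger the data, the less it matters exactly where values set the bar.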

This might or might not be a reasonable way to go. The prohibition on the direct role was justified by the need to resolve the problem of wishful thinking, and I can see why (and have assumed that) this argument is compelling. But I cannot see that the revised, more indirect version is needed for resolving the problem of wishful thinking as well, and so I am not sure why the additional prohibition it involves would be warranted.

In “Values in Science beyond Underdetermination and Inductive Risk,” I argued that the main lines of argument in the science and values literature go awry in making bad assumptions about the nature of values and ethical/practical reasoning. I think this point is of a piece with that argument. I’m interested to hear what folks think about it!

[^1]: This is, in my opinion, one of the most brilliant moves in (Douglas 2009).

John Dewey on Truth and Values in Science

This post is an abbreviated form of what I have come to think of as the most interesting part of a paper I’m working on for the volume of papers from this summer’s conference at Notre Dame on “Cognitive Attitudes and Values in Science”. For some of the background here, see my 2012 HOPOS paper, “John Dewey’s Logic of Science”.

According to Dewey in Logic: The Theory of Inquiry (1938), inquiry is the attempt to manage situations of agent-environment discordance (what he calls “indeterminate” or “problematic” situations) that interrupt the agents’ practices and activities, and to restore unity, determinateness, and harmony to the situation so that the impeded practice and activity can continue. The conclusion of inquiry is called “judgment.” Judgment is not just a statement of what is going on and a hypothesis of what is to be done; it is a decision to act so as to resolve the problematicity and indeterminateness of the situation that occasioned it. In Dewey’s terms, a judgment has “direct existential consequences.”

Judgments of inquiry are thus what Dewey called “judgments of practice” (see especially the final essay in Essays in Experimental Logic, “The Logic of Judgments of Practice”). Practical judgments are about “things to do or be done, judgments of a situation demanding action” (MW 8:14). This is, by the way, Dewey’s best definition of his pragmatism: pragmatism is the hypothesis that all judgments are judgments of practice.

Dewey points out that judgments of practice have peculiar truth conditions:

Their truth or falsity is constituted by the issue. The determination of end-means… is hypothetical until the course of action indicated has been tried. The event or issue of such action is the truth or falsity of the judgment… In this case, at least, verification and truth completely coincide. (LJP, MW 8:14)

If my judgment is “I should buy this suit,” then that judgment was true if doing so worked out: if the consequences of that judgment are satisfying, if they fulfill the needs that prompted buying the suit, if they do not have unintended negative consequences, and if I do not feel regret for my decision, then it was right to say that I should buy it. What else could the truth of a judgment of practice involve? And indeed, there is a straightforward way in which truth of the judgment is due to correspondence—the judgment corresponded with the future consequences intended by the judgment.

From a pragmatist point of view, science is a practice, and scientific inquiry, like all inquiry, is an attempt to resolve an indeterminate situation that arises in that practice. The form of the final judgment that concludes an inquiry is what Dewey has called a “judgment of practice.” Like all practical judgments, scientific judgments are true or false according to their consequences. This is not the vulgar pragmatism that would measure the truth of a proposition according to whether the consequences of believing it are congenial. Rather, the consequences in question are tied to the consequences intended by the judgment. As all judgments involve a solution to a particular problem and a transformation of an indeterminate situation, then the truth of that judgment is determined by whether the transformation of the situation, carried out, resolves the problem and eliminates the specific indeterminacy in question.

We can thus provide the following definition of truth:

A judgment J that concludes an inquiry I is a decision to act in a certain way in order to resolve a problematic situation S that occasioned I.

J is true in S iff J resolves S, i.e., if it transforms S from an indeterminate to a determinate situation.

According to Dewey, judgment is a species of action, and indeed a species that can have serious consequences, as it tends to transform human practices and the environments in which they take place. Judgment is a decision to act in a situation in order to resolve the problem that occasioned it. It has direct existential consequences. That judgment is true (or false) in that situation insofar as it succeeds (or fails) in resolving that problem. Both judgment and truth are value-laden on this account.

Judgment is value-laden primarily due to our ordinary ethical and social responsibilities. When we decide to act, it is appropriate to hold us accountable to the appropriate norms of action. When our actions have consequences that impact our lives, we have an obligation to weigh those consequences when making a decision. Judgments transform our environments and our practices. Within the limits of what can successfully resolve a problematic situation, we are obligated to make choices in accordance with our best value judgments.

Truth is likewise value-laden, for much the same reason. What counts as an adequate solution depends on what we care about. How we are sensitive to the way our practices impact on others, the environment, etc. will change whether we are able to carry on with the practice or whether it becomes indeterminate. Value judgments alter what we may regard as true.

Dewey was concerned to show that the advancement of science does not require an abandonment of social responsibility.

My hypothesis is that the standpoint and method of science do not mean the abandonment of social purpose and welfare as rightfully governing criteria in the formation of beliefs… (“The Problem of Truth” MW 6:57)

Our judgments (or our beliefs, if you prefer), are not mere attempts to mirror a static world beyond us, but are attempts to manage and change the world to render the precarious stable, the problematic straightforward, the doubtful trustworthy. Knowing and doing are intimately connected; the act of knowing modifies the thing known. We can thus only answer the question of what we know by appealing, in part, to what we care about—ethically, politically, and socially.

My NDPR Review of Wright, Explaining Science’s Success

Some of you may have already seen my review of John Wright’s Explaining Science’s Success: Understanding How Scientific Knowledge Works that appeared yesterday at NDPR. I tried to write the kind of review that PD Magnus likes to read:

It isn’t just about the book and what the author says in it. Rather, it offers a critical view of the issue and situates the book in recent discussions. It also treats the book as a bit of philosophy worthy of criticism. This contrasts with the veneer of rhetorical objectivity which bad reviews have.

I don’t know if I really succeeded. Some will surely think my review was overly dismissive. Obviously, I thought the book was Not Very Good. While there are some ideas and arguments in the book that I found interesting, what struck me most about the arguments is that they seemed so irresponsible in the light of the contemporary scene in phil sci.

Anyhow, I’d love to hear what people think of the review, especially the points I made that went beyond Wright’s book itself.

Dewey’s Definition of “Cognition”?

This week in CCC we’re reading the first part of Jean Lave’s Cognition in Practice (1988). Lave is one of the major figures in the area of so-called “Situated Cognition.” This sounds to my ear a little bit like the more conservative “Embedded Cognition” approaches, which emphasize that environmental situatedness is important for understanding cognition without thinking that features of the situation are constitutive of cognition. It is clear from the get-go that this is not in fact Lave’s view:

It will be argued here… that a more appropriate unit of analysis is the whole person in action, acting with the settings of that activity. This shifts the boundaries of activity well outside the skull and beyond the hypothetical economic actor, to persons engaged with the world…

It is within this framework that the idea of cognition as stretched across mind, body, activity and setting begins to make sense. (p. 17-18, emphasis added)

I am drawn back (no surprise) to John Dewey. John Dewey says, in the preface of his 1938 Logic, that throughout the work he refers to “inquiry” where he had previously referred to “thinking.” Perhaps we could adapt his definition of “inquiry” as a definition of “cognition” for situated cognition theory:

[Cognition] is the directed or controlled transformation of an indeterminate situation into a determinately unified one. (“The Pattern of Inquiry,” Logic, 1938, LW 12).

Could be a start.

Dewey on Standpoint Epistemology


Women have as yet made little contribution to philosophy. But when women who are not mere students of other persons’ philosophy set out to write it, we cannot conceive that it will be the same in viewpoint or tenor as that composed from the standpoint of the different masculine experience of things.

– John Dewey, Philosophy and Democracy (1919)

Three Direct Roles for Values in Science: A Sketch of a Sketch

Heather Douglas (2000, 2009) has argued that inductive risk requires scientists to make value judgments in the “internal” processes of scientific reasoning, e.g., data characterization and interpretation and judging whether the evidence supports a hypothesis, but that the role for value judgments must be limited to an indirect role. There has been some controversy about just what the direct/indirect roles distinction amounts to (Elliott, Steele), but the basic idea is easy enough to understand: something plays a direct role in a decision if it acts as a reason for deciding one way or the other; it plays an indirect role if it instead helps determine second-order questions about the uptake of reasons, e.g., about what counts as a reason or about the necessary weight of reasons before deciding.
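To make the contrast concrete, here is a toy sketch of my own (not Douglas’s formulation; the numbers and function names are invented): in the direct role a value is weighed alongside the evidence as if it were itself a reason, while in the indirect role it only fixes the second-order standard the evidence must meet.

```python
def accept_direct(evidence: float, value_pull: float) -> bool:
    """Direct role (impermissible in internal inference): the value is
    summed in with the evidence, acting as a reason in its own right."""
    return evidence + value_pull > 1.0

def accept_indirect(evidence: float, required: float) -> bool:
    """Indirect role: values fixed `required` beforehand, but only the
    evidence decides whether the bar is cleared."""
    return evidence > required

# Weak evidence (0.4) plus a strong preference for the hypothesis (0.7):
print(accept_direct(0.4, 0.7))    # True: wishful thinking wins

# Under the indirect role, the same weak evidence fails no matter why the
# bar was set where it was, and strong evidence succeeds regardless of
# our preferences:
print(accept_indirect(0.4, 0.8))  # False
print(accept_indirect(0.9, 0.8))  # True
```

The second function also captures Douglas’s point that more evidence makes the values less important: as `evidence` grows, where exactly the values set `required` matters less and less to the outcome.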
Continue reading

Values, Assumptions, and the Science of Consciousness

This is a repost of a post I did on the Center for Values in Medicine, Science, and Technology site, in response to Robert Sawyer’s talk. I’ve posted the video here at the top for those who are interested. 

There were many interesting things brought up by Robert Sawyer in his talk and the various discussions. I’m glad that we had him as a guest at the Center. One topic that caught my eye was his focus on the nascent science of consciousness and the associated ideas of human vs. machine intelligence. I’d like to share some thoughts about the science of consciousness in relation to larger issues of values in science.
Continue reading