Philosophy, Funding, and Conflict of Interest


A couple of weeks back, Justin Weinberg at the Daily Nous posed a really interesting question. The context was Daniel Dennett’s review of Alfred Mele’s book Free: Why Science Hasn’t Disproved Free Will. Dennett gives a relatively standard account of conflict of interest in science funding, using a hypothetical example of saturated fat research funded by “the Foundation for the Advancement of Bacon.” On standard accounts, we are right to apply a higher level of scrutiny to research whose funding displays a potential conflict of interest, and this is why, e.g., we have COI reporting requirements in certain journals and for federally funded research.

Dennett then points out that Mele’s work is funded by the John Templeton Foundation, which (simplifying a bit) has as its ultimate agenda the integration of science and religion, and which lately has been funding large projects that involve philosophers, scientists, and theologians working together on a shared theme, like Free Will or Character. Mele has received and managed two such grants.

Here’s Justin:

Mele’s project is not the only Templeton-funded philosophy project, nor is Templeton the only source of funds with an agenda. Dennett is claiming that funding from ideological sources casts a shadow on philosophical research in much the same way that funding from industry casts a shadow on scientific research. Is he correct?

Unfortunately, the question was lost as the thread got hijacked by a lot of nonsense: specific details about Templeton and Dennett’s neo-Atheist anti-Templeton agenda, as well as some understandable concern about the pragmatic implications of Dennett’s statements for Mele’s character. Most egregious were the many denials that conflict of interest is an issue in science at all, as if raising it somehow amounted to a fallacious ad hominem argument. For instance, Terrance Tomkow and Kadri Vihvelin claim that “the motives of the researcher or his employers are always beside the scientific point.” Dennett answered this point well when he said,

As for Tomkow and Vihvelin’s high-minded insistence that one is obliged “to ignore” the sponsorship of research, I wonder what planet they have been living on recently. Why do they think that researchers have adopted the policy of always declaring the sources of their funding?

Or as Richard Zach said, “It’s as if the past few decades of work on values in science didn’t happen.”

I think Justin’s original question is interesting, though, because it encourages us to think past the specific details of Mele’s book, Dennett’s critique, and the Templeton foundation. Maybe it is because I work at a STEM university, but I often hear talk that the humanities are going to have to move more towards extramural funding. For philosophers, Templeton is where the big money is, but there are also plenty of smaller private foundations, donors funding endowed Chairs (as Zara pointed out), and so on. It’s a timely question. And it is one that invites us to reflect on the similarities and differences between the sciences and philosophy (or the humanities more broadly). I wish more commenters had taken up the call.

I would suggest one analogy and one major disanalogy between science and philosophy in regards to conflict of interest. The analogy is, if I understood him right, what Dennett was getting at: funding applied on a large scale can alter, or even distort, the research agenda of a discipline. And evaluating that will require us to think about what the research agenda ought to look like.

The importance of research agendas in science is the centerpiece of Philip Kitcher’s Science, Truth, and Democracy and Science in a Democratic Society. He describes the ideal research agenda for science or a scientific discipline as well-ordered science (WOS), and he argues persuasively that not only epistemic and internal disciplinary values, but also practical and ethical values are central to determining what counts as WOS. Further, he argues that WOS should be evaluated democratically, in some sense. Because science is a social institution, it is ultimately responsible for serving the public. Kitcher also rightly recognizes the roles of funding sources and individual choices in actually setting research agendas, and argues that individual scientists have a duty to stand up and fight for change when the research agenda in their field is very far from well-ordered.

Likewise, we could ask about what “well-ordered philosophy” would look like. Presumably, many philosophers (like many scientists) would argue that notions of intrinsic intellectual/philosophical merit, strength of argument, and freedom of research should determine the ideal research agenda. I, and I suspect Kitcher as well, would prefer pragmatic, ethical, and political considerations to play a role. Either way, we can ask whether and how funding sources are moving us towards or away from a well-ordered research agenda.

Mele’s work discusses Free Will, argues that, contrary to some triumphalist claims, the sciences haven’t settled the question yet, criticizes some of those claims by scientists, and is agnostic about whether free will is compatible with determinism. I’m not sure how those things fit with the ideological agenda of Templeton, though I can understand the feeling that they somehow do. And insofar as Templeton wants to stay a major player in funding research on Free Will, we could see more of this sort of thing and less of other approaches. Zooming out to the context that Justin invites us to consider, it is worth wondering what the effects of funded research can be on the research agenda of philosophy, and it is worth deliberating about whether some funding sources should be considered a problematic conflict of interest, Templeton included. (My own view, held tentatively, is that Templeton is alright in this respect but should be closely monitored.) But note also that until one has a sense that funding agencies are having a systematic effect, it doesn’t seem reasonable to criticize individuals in the way that Dennett does (if implicitly).

The disanalogy I would like to mention has to do with the different types of arguments that are made in empirical science and in philosophy. Philosophical arguments are usually scholarly while scientific arguments are generally technical. I mean these in a specific sense inspired by Bruno Latour’s Science in Action (N.B., these terms aren’t the ones Latour uses). To make an argument in philosophy requires nothing more than a library card, writing implements, the ability to adopt the current style of the literature in the field you wish to contribute to, and the ability to craft an argument. Scholarly arguments can be evaluated on their surface—you need only to examine the content of the text itself, and perhaps the cited sources, to understand the argument or produce a counter-argument.

Some elements of scientific texts can be evaluated in this way. But scientific arguments are also technical. In particular, much of the argument hangs on what Latour calls inscriptions—tables, charts, graphs, and figures—produced by instruments. There are hard limits to how far one can interrogate a technical text. One can raise questions about certain inferences and interpretations, and one can examine the equipment and materials that produce the data and the inscriptions, at least, as long as one has an invitation to the relevant laboratory and the patience of one’s host. But past a certain point, making an effective counter-argument requires a counter-laboratory with instruments producing inscriptions that can be used in arguments. To a large extent, the technical nature of modern science is a major source of its power and effectiveness; but a cost is that we have to rely on trust to a greater extent. And conflict of interest is at least a pro tanto reason to withhold that trust, whereas trust is not at issue in philosophical arguments in the same sense.

So while, because of the technical nature of science, it is incorrect for Jim Griffis to say that “If the ‘science is impeccable, carefully conducted and rigorously argued’ there would be no problem with who paid for the research,” he is right to say that “for philosophical works, either the argument is cogent or it’s not.”

Full disclosure: I have previously applied for (but not received) Templeton funding.

Wonder Woman’s Lasso of Truth


I’ve just begun reading Jill Lepore’s new book about Wonder Woman and William Moulton Marston. So far, I’m finding it to be really thorough and excellent! I was a little disturbed, though, to discover that on the first page of the preface, Lepore makes a basic mistake about one of the key features of the early Wonder Woman comics:

“She had a magic lasso; anyone she roped had to tell the truth” (xi).

She repeats the point in one of the color plates in the middle of the book, which does show Wonder Woman compelling a thug to tell the truth. The accompanying text that connects this work to lie detectors and Marston’s work on deception is ultimately misleading, however.

What’s the problem here? Isn’t it called “the Lasso of Truth”? The problem, as Brian Cronin pointed out a couple of years ago at Comic Book Resources, is that this is actually an anachronism. Marston called this iconic element of Wonder Woman’s gear the “Magic Lasso” or sometimes the “Golden Lasso,” never the Lasso of Truth. And its power has nothing specifically to do with the truth, but rather with compelling obedience.


It’s a tempting connection to make: Marston, after all, invented the lie detector test, or at least, he’s one of its most recognizable developers and proponents. And it’s a common connection to draw.

Here’s Geoffrey Bunn, one of the few historians of psychology to write in detail about Marston:

“Anyone caught in the lasso found it impossible to lie. And because Wonder Woman used it to extract confessions and compel obedience, the golden lasso was of course nothing less than a lie detector.” (Bunn 1997, p. 108)

The real story behind Wonder Woman’s magic lasso is much more interesting and much stranger. Marston was an experimental psychologist who developed a theory of emotions. According to his theory, the four basic emotions were Dominance, Compliance, Inducement, and Submission. For Marston, submission was a matter of giving over one’s will to a basically friendly stimulus; not only was it a necessarily pleasant emotion, but it was a necessary component of love and thus of a healthy psyche. What the magic lasso was able to do, it seems, was to place the person bound in an automatic state of submission to the will of the lasso’s wielder, making them happy to do whatever the wielder asked. Including, occasionally, telling the truth when they intended to deceive.

However, more often than not, when Wonder Woman wanted to know whether someone was telling the truth, she’d make use of the very tool that Marston invented for that purpose, a lie detector test based on systolic blood pressure measurements.


 

Above I called the mistake an anachronism because, while Marston never used the term “Lasso of Truth,” present-day comics do refer to it by that name. According to Cronin, this usage began in Wonder Woman volume 2 #2 (1987; Writer: Greg Potter, Artist: George Pérez, Editor: Karen Berger). This is the post-Crisis reboot of Wonder Woman, meaning that it occurred after the Crisis on Infinite Earths mini-series that altered the continuity of the DC Comics universe. Presumably, the creators knew about Marston’s interest in lie detection and decided to change the powers and name of the lasso accordingly. (On the other hand, the commenters on Cronin’s piece suggest that the usage comes from the Wonder Woman TV show starring Lynda Carter, so perhaps the connection was first made by the creators of the show.)

Indirect and Direct Roles for Values in Science and in Ethics


[TL;DR: If a direct role for values is illegitimate in science, it is also illegitimate in any ethical or practical reasoning about what to do in particular cases, or in any evaluation of the rightness or goodness of an action. The direct/indirect role distinction does not distinguish science from action.]

Those who defend or presume the value-ladenness of science are obligated to provide a response to what I call “the problem of wishful thinking,” viz., the epistemic problem of how to prevent value-laden science from leading us to believe whatever we wish, to conclude the world is the way we wish it to be, and thus to destroy the integrity and reliability of science.

One way of dealing with the problem of wishful thinking has been to restrict the types of values allowed to play a role in science to epistemic values. This is not a move most proponents of value-laden science will accept, as they are precisely concerned with the legitimacy of non-epistemic values in science. And if the “epistemic” values include such familiar values as the scope or simplicity of a theory, the restriction is also insufficient to avoid the problem of wishful thinking: it may lead us to conclude that the world is simple or covered by a relatively small number of laws without any evidence to that effect.[^1]

Another important attempt to deal with the problem of wishful thinking is Heather Douglas’s introduction of the direct/indirect role distinction, and the prohibition on the use of values in the direct role in the internal processes of science. Here is how Douglas defines indirect and direct:

In the first direct role, the values act much the same way as evidence normally does, providing warrant or reasons to accept a claim. In the second, indirect role, the values do not compete with or supplant evidence, but rather determine the importance of the inductive gaps left by the evidence. More evidence usually makes the values less important in this indirect role, as uncertainty reduces. (Douglas 2009, 96).

The direct role is permissible in certain, relatively “external” decisions in science. For example, we may appeal directly to ethical or social values to defend the decision to pursue some research project over others, e.g., the decision to research improved treatments for malaria rather than improved treatments for male pattern baldness might be directly justified by the better realization of justice or alleviation of suffering of the former over the latter. Likewise, restrictions on research methods on human subjects, such as the requirements of informed consent and no unnecessary harm, should be directly justified by appeal to values, such as respect for persons and non-maleficence.

The direct role, according to Douglas, is impermissible in internal decisions such as how to characterize data and whether or not to accept a hypothesis based on the evidence. Here, values may indirectly influence the standards of evidence, the amount or strength of evidence we require to accept or reject, but cannot tell directly for or against the hypothesis.

So, on Douglas’s account, there is a distinction to be made between practical decision-making that is directly grounded by values, and scientific inference that is directly grounded by evidence and only indirectly warranted by values. Some philosophers have questioned the clarity of this account (e.g., Elliott 2011), or its appropriateness to the epistemic tasks of scientific inference (Mitchell 2004), but that will not be my tack here. I want to start by questioning Douglas’s account of practical reasoning. I believe that the problem of wishful thinking is as much a problem for practical reasoning as for scientific inference, and that the “direct” role for values is as unacceptable in ethical decision-making as it is in scientific inference. If I’m right about this, then Douglas’s account of the structure of values in science needs to be revised, and the indirect/direct role distinction is inadequate for distinguishing between science and action or science and ethical decision-making.

Consider some very simple cases of practical decision-making.

  1. SUIT: Suppose I am out to buy a suit, and I value both affordability and quality in making such a purchase. It would be wishful thinking to assume that any suit I buy will promote these values. In order to make a decision about which suit to buy, I need to gather evidence about the available suits on which to base my decision. My values tell me what kind of evidence is relevant. But they cannot act as reasons for or against any choice of suit directly. (See the sketch just after this list.)
  2. HIRING: Suppose I am trying to decide who to hire among a number of job candidates. On the one hand, I pragmatically value hiring the person with the best skills and qualifications. On the other hand, I have an ethical/political obligation to uphold fairness and foster diversity. Neither the pragmatic nor the ethical values tell directly for or against choosing any candidate. I need to gather evidence about the particular candidates to learn their qualifications. I also need to know about the theories and results of implicit bias research to know what kinds of evidence to downplay or to keep myself unaware of while making the decision.
  3. HONESTY: Suppose I am a Kantian about lying – it is never permissible. Still, this value does not dictate on its own which speech-acts I should make or refrain from making in any particular case. I must at least examine what I know or believe to be true. It would be wishful thinking to assume I was being honest in anything I was inclined to say absent information about whether or not I believed it to be the case. Perhaps I even need to examine my evidence for p before I can confidently assert that p in order to uphold this value.
  4. METHODS: Suppose I am on the IRB at my university. In order to responsibly assess the permissibility of a particular research protocol, I cannot rely directly on the principles of respect, beneficence, non-maleficence, and justice to decide. Instead, I must carefully read the protocol and understand what it in fact proposes to do, and I must think through its possible consequences, before I can evaluate it against those principles.
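To make the structure of these cases vivid, here is a minimal sketch of SUIT in Python. All of the data, weights, and function names are invented for illustration: the point is only that the values fix which features count as evidence and how heavily they weigh, while the choice itself is grounded in the gathered evidence, never in the values directly.

```python
# A minimal sketch of SUIT; the suits, prices, and weights are invented.
# Values (affordability, quality) determine what evidence is relevant and
# how much it matters; only the evidence grounds the actual choice.

suits = [  # hypothetical evidence gathered about the available suits
    {"name": "charcoal wool", "price": 400, "quality": 8},
    {"name": "navy blend",    "price": 250, "quality": 6},
    {"name": "bargain rack",  "price": 120, "quality": 3},
]

# My values, expressed as weights on kinds of evidence.
WEIGHTS = {"affordability": 0.5, "quality": 0.5}

def score(suit: dict) -> float:
    """Score a suit on the evidence; values enter only as weights."""
    affordability = 1.0 - suit["price"] / 500.0  # cheaper scores higher
    quality = suit["quality"] / 10.0
    return (WEIGHTS["affordability"] * affordability
            + WEIGHTS["quality"] * quality)

best = max(suits, key=score)
print(best["name"])  # "navy blend", on these invented numbers
```

Notice that deleting the evidence (the `suits` list) leaves the values with nothing to select; the wishful-thinking shortcut of picking a suit straight from the values is not even expressible here.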

So, values in these cases do not act directly as reasons for or against a decision. I take it that this is in conflict with Douglas’s implied account of practical reason in Science, Policy, and the Value-Free Ideal (2009). If there is any realm in which values themselves act as grounds in inferences, it may be in pure normative theorizing, the kind that ethicists do when they’re doing “normative ethical theory” or political philosophers do when they’re doing “ideal theory.” Values can only serve as direct grounds for claims about other values (if they can do that), not about actions. But these are not the kinds of activities that Douglas points at as “direct” uses of values. Indeed, METHODS is just the sort of case that she uses to explain the direct role.

Values in these cases are instead acting indirectly, connecting evidence to claims or conclusions (in particular, conclusions about how to act). Is this the same sort of indirect role that she recommends for values in science? We might think so. Just as the value of affordability tells us to look for evidence about prices in SUIT, the relative weight we place on the value of safety tells us to look for a certain kind and weight of evidence when doing a risk assessment of a chemical’s toxicity.

Douglas could revise her account to insist that scientific inference be more indirect than the cases I’ve discussed here. While the absolute value I place on not lying in HONESTY cannot directly tell me what to say in any particular situation, it does tell me what sort of evidence I need (viz., that I believe the proposition) to justify the decision to speak. Douglas could insist that in scientific cases, not only is it illegitimate for values to directly justify, e.g., accepting or rejecting a hypothesis; they also cannot directly tell us whether some evidence supports the hypothesis or is sufficient for acceptance. Rather, the only thing that can legitimately make the connection between evidence and hypothesis is something like a methodological rule, e.g., the rule that the evidence must meet a statistical significance level of p=0.01 to be sufficient for acceptance. Then the only permissible role for values would be the even more indirect role of supporting such methodological rules. The ground for the conclusion is the data itself. The data warrants the conclusion because it meets the methodological criterion (p=0.01). That criterion is appropriate in this case because of our values (the weight we place on the consequences of error).
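Here is a minimal sketch of this doubly indirect structure, again in Python with invented numbers (the cost figures and thresholds are mine, not drawn from Douglas): values set the costs of error, the costs set the methodological rule, and the rule plus the data alone determine acceptance.

```python
# A sketch of the "even more indirect" role; costs and threshold values
# are invented for illustration.

def threshold_from_values(cost_false_positive: float,
                          cost_false_negative: float) -> float:
    """Values act here, once removed: weighting the consequences of
    error fixes the methodological rule, not the verdict itself."""
    # Demand stronger evidence when false positives are the worse error.
    return 0.01 if cost_false_positive > cost_false_negative else 0.05

def accept_hypothesis(p_value: float, alpha: float) -> bool:
    """The inference consults only the data (via the p-value) and the
    rule (alpha); values never bear on the hypothesis directly."""
    return p_value < alpha

# Toxicity case: suppose wrongly declaring a chemical safe (a false
# negative) is judged far worse than wrongly declaring it toxic.
alpha = threshold_from_values(cost_false_positive=1.0,
                              cost_false_negative=10.0)
print(accept_hypothesis(p_value=0.03, alpha=alpha))  # True, at alpha=0.05
```

On this picture the wishful-thinking worry is blocked twice over: values cannot supply the verdict, and they cannot even certify the evidence as sufficient; they can only recommend the rule by which sufficiency is judged.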

This might or might not be a reasonable way to go. The original prohibition on the direct role was justified by the need to resolve the problem of wishful thinking, and I can see why that argument is compelling (indeed, I have assumed as much here). But I cannot see that this revised, more demanding version is needed to resolve the problem of wishful thinking, and so I am not sure what would warrant the additional prohibition it involves.

In “Values in Science beyond Underdetermination and Inductive Risk,” I argued that the main lines of argument in the science and values literature go awry in making bad assumptions about the nature of values and ethical/practical reasoning. I think this point is of a piece with that argument. I’m interested to hear what folks think about it!

[^1]: This is, in my opinion, one of the most brilliant moves in Douglas (2009).

New Online Situations


So, I am rearranging my life on the web a bit. I’ve put up a new “professional” homepage at http://matthewjbrown.net/. I’ve also got a new place to post classes at http://classes.matthewjbrown.net/, though only my current courses are up there at present.

I’ve also decided to move my posts about bourbon and whiskey over to The Whiskey Philosopher. I hope to find time to develop that further in the future.

For now, I’ll just let you wonder what http://commandlineonly.org/ is about.

I am absolute shite at WordPress theming so if anyone has any recommendations, please leave them in the comments.

John Dewey on Truth and Values in Science


This post is an abbreviated form of what I have come to think of as the most interesting part of a paper I’m working on for the volume of papers from this summer’s conference at Notre Dame on “Cognitive Attitudes and Values in Science”. For some of the background here, see my 2012 HOPOS paper, “John Dewey’s Logic of Science”.

According to Dewey in Logic: The Theory of Inquiry (1938), inquiry is the attempt to manage situations of agent-environment discordance (what he calls “indeterminate” or “problematic” situations) that interrupt the agent’s practices and activities, and to restore unity, determinateness, and harmony to the situation so that the impeded practice and activity can continue. The conclusion of inquiry is called “judgment.” Judgment is not just a statement of what is going on and a hypothesis about what is to be done; it is a decision to act so as to resolve the problematicity and indeterminateness of the situation that occasioned it. In Dewey’s terms, a judgment has “direct existential consequences.”

Judgments of inquiry are thus what Dewey called “judgments of practice” (see especially the final essay in Essays in Experimental Logic, “The Logic of Judgments of Practice”). Practical judgments are about “things to do or be done, judgments of a situation demanding action” (MW 8:14). This is, by the way, Dewey’s best definition of his pragmatism: pragmatism is the hypothesis that all judgments are judgments of practice.

Dewey points out that judgments of practice have peculiar truth conditions:

Their truth or falsity is constituted by the issue. The determination of end-means… is hypothetical until the course of action indicated has been tried. The event or issue of such action is the truth or falsity of the judgment… In this case, at least, verification and truth completely coincide. (LJP, MW 8:14)

If my judgment is “I should buy this suit,” then that judgment was true if doing so worked out: if the consequences of that judgment are satisfying, if they fulfill the needs that prompted buying the suit, if they do not include unintended negative consequences, and if I do not feel regret for my decision, then it was right to say that I should buy it. What else could the truth of a judgment of practice involve? And indeed, there is a straightforward way in which the truth of the judgment is due to correspondence—the judgment corresponded with the future consequences intended by the judgment.

From a pragmatist point of view, science is a practice, and scientific inquiry, like all inquiry, is an attempt to resolve an indeterminate situation that arises in that practice. The form of the final judgment that concludes an inquiry is what Dewey has called a “judgment of practice.” Like all practical judgments, scientific judgments are true or false according to their consequences. This is not the vulgar pragmatism that would measure the truth of a proposition according to whether the consequences of believing it are congenial. Rather, the consequences in question are tied to the consequences intended by the judgment. As all judgments involve a solution to a particular problem and a transformation of an indeterminate situation, then the truth of that judgment is determined by whether the transformation of the situation, carried out, resolves the problem and eliminates the specific indeterminacy in question.

We can thus provide the following definition of truth:

A judgment J that concludes an inquiry I is a decision to act in a certain way in order to resolve a problematic situation S that occasioned I.

J is true in S iff J resolves S, i.e., if it transforms S from an indeterminate to a determinate situation.
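In schematic form (my notation, not Dewey’s), with Resolves(J, S) holding just in case acting on J transforms S from an indeterminate into a determinate situation:

```latex
% My formalization, not Dewey's own notation.
\[
  \mathrm{True}(J, S) \iff \mathrm{Resolves}(J, S)
\]
```

Note that truth here is relativized to the situation S; on this account there is no situation-independent truth predicate for judgments of practice.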

According to Dewey, judgment is a species of action, and indeed a species that can have serious consequences, as it tends to transform human practices and the environments in which they take place. Judgment is a decision to act in a situation in order to resolve the problem that occasioned it. It has direct existential consequences. That judgment is true (or false) in that situation insofar as it succeeds (or fails) in resolving that problem. Both judgment and truth are value-laden on this account.

Judgment is value-laden primarily because of our ordinary ethical and social responsibilities. When we decide to act, it is appropriate to hold us accountable to the relevant norms of action. When our actions have consequences that impact our lives, we have an obligation to weigh those consequences when making a decision. Judgments transform our environments and our practices. Within the limits of what can successfully resolve a problematic situation, we are obligated to make choices in accordance with our best value judgments.

Truth is likewise value-laden, for much the same reason. What counts as an adequate solution depends on what we care about. How sensitive we are to the ways our practices impact others, the environment, and so on will change whether we are able to carry on with a practice or whether the situation becomes indeterminate. Value judgments alter what we may regard as true.

Dewey was concerned to show that the advancement of science does not require an abandonment of social responsibility.

My hypothesis is that the standpoint and method of science do not mean the abandonment of social purpose and welfare as rightfully governing criteria in the formation of beliefs… (“The Problem of Truth” MW 6:57)

Our judgments (or our beliefs, if you prefer), are not mere attempts to mirror a static world beyond us, but are attempts to manage and change the world to render the precarious stable, the problematic straightforward, the doubtful trustworthy. Knowing and doing are intimately connected; the act of knowing modifies the thing known. We can thus only answer the question of what we know by appealing, in part, to what we care about—ethically, politically, and socially.

What is “The Hanged Man”?


“The Hanged Man” was an online alias or “handle” I adopted somewhere around 1994, when I didn’t even have access to the internet and was instead using local dialup bulletin board systems (BBSes). I continued to use the name on into the next millennium, when I started a webpage and got an email address. I’ve continued to use it into the present mostly out of inertia. It’s also a fairly memorable handle, and I’ve had the web domain long enough not to want to give it up.


My NDPR Review of Wright, Explaining Science’s Success

Some of you may have already seen my review of John Wright’s Explaining Science’s Success: Understanding How Scientific Knowledge Works that appeared yesterday at NDPR. I tried to write the kind of review that PD Magnus likes to read:

It isn’t just about the book and what the author says in it. Rather, it offers a critical view of the issue and situates the book in recent discussions. It also treats the book as a bit of philosophy worthy of criticism. This contrasts with the veneer of rhetorical objectivity which bad reviews have.

I don’t know if I really succeeded. Some will surely think my review was overly dismissive. Obviously, I thought the book was Not Very Good. While there are some ideas and arguments in the book that I found interesting, what struck me most about the arguments is that they seemed so irresponsible in the light of the contemporary scene in phil sci.

Anyhow, I’d love to hear what people think of the review, especially the points I made that went beyond Wright’s book itself.

Bowman Brothers Pioneer Spirit

According to the label: Copper Still, Triple Distillation, Virginia Straight Bourbon Whiskey, Small Batch, 90°

Price: $30.

In honor of the recently deceased Truman Cox of the A. Smith Bowman distillery, I picked up a bottle of this today, the lowest level of their small batch bourbons. According to Chuck Cowdery, “The whiskey is distilled at Buffalo Trace in Frankfort. The new make is sent to Virginia where it is distilled a third time and entered into barrels. Aging and bottling is done in Virginia,” with that third distillation happening in the copper pot still mentioned on the label. There is no age statement, nor any mention of when it was bottled, though the number “12221” printed on the bottle suggests it may have been bottled 2012-22-1. It does have a cute fake tax stamp on it.

Light tawny honey color. Beautiful sweet nose, fruity and floral, honey and apples. Tastes less sweet than the nose would lead you to believe, with a crisp and fresh taste, maybe white grapes and honeydew melon, along with some dried apricot. If there is any problem with this one, it is a slight bitter, astringent note on the finish, which is accompanied by a nice, darker fruit flavor (raisins or dried plums or Beaujolais nouveau).

Overall, an interesting, nice change of pace from what I’m used to in the ryes and bourbons I’ve been drinking lately. I tend to like a lot of rye spice and wood influence, and there’s very little of that here. I go for the sweeter stuff less often, though I do occasionally like a really nice wheater (I love Old Weller Antique 107°), and I do like Angel’s Envy, which is definitely on the sweet side. This isn’t really like any of those. Not sure I’ve had a bourbon I would describe as crisp before. Let’s call it a B+.

What this really does is make me want to try the John J. or the Abraham Bowman.