Values, Assumptions, and the Science of Consciousness
This is a repost of a piece I wrote for the Center for Values in Medicine, Science, and Technology site, in response to Robert Sawyer’s talk. I’ve posted the video at the top for those who are interested.
Robert Sawyer brought up many interesting things in his talk and the various discussions, and I’m glad that we had him as a guest at the Center. One topic that caught my eye was his focus on the nascent science of consciousness and the associated ideas of human vs. machine intelligence. I’d like to share some thoughts about the science of consciousness in relation to larger issues of values in science.
From my perspective on the intersection of values with medicine, science, and technology, one very interesting question about different approaches to consciousness is the way that the starting assumptions of each approach reflect different value-perspectives. This is especially pressing in an area like consciousness studies, where philosophical considerations loom so large and there is so little unambiguous data or uncontroversial interpretation of the facts to constrain theorizing. Critical engagement with the values implicit in such assumptions can be a powerful tool for assessing current approaches and suggesting alternatives, as has been shown by, among others, feminist scientists and feminist philosophers of science like Ruth Doell and Helen Longino.
I’m “thinking out loud” on some of this, so please bear with me, and I’d love to hear your thoughts.
To even hope and try for a science of consciousness is to prefer explanation, understanding, human progress in the present, etc. to mystery, faith, the ineffable, salvation in the hereafter, etc. This may seem to be a trivial move, but I think it isn’t. For example, in various writings, Stephen Jay Gould defended the idea of non-overlapping magisteria, which set issues of ultimate meaning and moral value orthogonal to the proper realm of science. Gould went so far as to say that the idea of the soul was not a scientific hypothesis subject to proof or refutation. If this doesn’t strictly imply a stance on the possibility of a science of consciousness, it certainly suggests one.
More interesting, perhaps, is the way that specific proposals for a science of consciousness still implicate issues of value. Take the premise of Sawyer’s WWW books: that the internet might become an intelligent, conscious entity. Now, this is a consciousness far different from our own: it has no body, no ordinary physical needs or activities of the sort that the human brain spends most of its time dealing with. While of course it has inputs and outputs, those aren’t tied to embodied perception or motor activity. And the only other actual forms of intelligence we encounter (if any!) are creatures more like ourselves, in fact even more tied to their embodiment: apes, monkeys, dolphins, etc. On the other hand, what Sawyer is suggesting, in effect, is that any sufficiently complex information-processing system with the right features could become conscious.
Now, if an entity like the internet can become conscious, this presents a challenge to biological, evolutionary accounts of consciousness, on which the best guess is that the conscious mind came about because it confers some adaptive advantage on living creatures. This need not be understood as the claim that all our mental activity is aimed at survival (William James, writing from an evolutionary perspective, demolished such a view as early as 1878), but on such views it is crucial that the reason we have a conscious mind is the survival advantage it gives us, and that this shapes both what kind of thing consciousness is and how it is structured, tying both to our embodied, living activities.
Beyond this simple disagreement, perhaps, is a difference of values. Saying that consciousness is neither embodied nor a property of activity creates a separation of mind and body, theory and practice, thought and activity, and it insists that mind, theory, and thought are the kinds of things that matter to an intelligent, conscious entity. John Dewey (another early American psychologist) frequently argued that such a preference reflected class divisions going back to ancient Greece, where the slaves labored and the wealthy contemplated. He held that a more inclusive look at the breadth of human experience showed that intelligence, mind, consciousness, etc. were all practices aimed at the getting of needful things, the improvement of life, the direction of activity. Even the most apparently contemplative activity, when functioning properly, ought to be seen as rendering some practice more intelligent, as guiding some matter of embodied need or satisfaction.
It seems to me that certain views of consciousness which tie it to the brain and ignore the body, or focus on the individual to the exclusion of the social element in experience, might be open to similar critiques, but I’ll leave that as an exercise for the reader.