This is a repost of a post I did on the Center for Values in Medicine, Science, and Technology site, in response to Robert Sawyer’s talk. I’ve posted the video here at the top for those who are interested.
Robert Sawyer brought up many interesting things in his talk and the various discussions that followed, and I’m glad that we had him as a guest at the Center. One topic that caught my eye was his focus on the nascent science of consciousness and the associated ideas of human vs. machine intelligence. I’d like to share some thoughts about the science of consciousness in relation to larger issues of values in science.
From my perspective on the intersection of values with medicine, science, and technology, one very interesting question about different approaches to consciousness is the way that the starting assumptions in each approach reflect different value-perspectives. This is especially pressing in an area like consciousness studies, where philosophical considerations loom so large, and there is so little unambiguous data or uncontroversial interpretation of the facts to constrain theorizing. Critical engagement with the values implicit in such assumptions can be a powerful tool in assessing current approaches and suggesting alternatives, as has been demonstrated by, among others, feminist scientists and feminist philosophers of science like Ruth Doell and Helen Longino.
I’m “thinking out loud” on some of this, so please bear with me, and I’d love to hear your thoughts.
To even hope and try for a science of consciousness is to prefer explanation, understanding, human progress in the present, etc. to mystery, faith, the ineffable, salvation in the hereafter, etc. This may seem a trivial move, but I think it isn’t. For example, in various writings, Stephen Jay Gould defended the idea of non-overlapping magisteria, which places issues of ultimate meaning and moral value outside the proper realm of science. Gould went so far as to say that the idea of the soul was not a scientific hypothesis open to proof or refutation. If this doesn’t strictly imply a stance on the possibility of a science of consciousness, it certainly suggests one.
More interesting, perhaps, is the way that specific proposals for a science of consciousness still implicate issues of value. Take the premise of Sawyer’s WWW books, that the internet might become an intelligent, conscious entity. Now, this is a consciousness far different from our own: it has no body, no ordinary physical needs or activities of the sort that the human brain spends most of its time dealing with. While of course it has inputs and outputs, those aren’t tied to embodied perception or motor activity. And the only other actual forms of intelligence we encounter (if any!) belong to creatures more like ourselves, in fact even more tied to their embodiment: apes, monkeys, dolphins, etc. On the other hand, what Sawyer is suggesting, in effect, is that any sufficiently complex information-processing system with the right features could become conscious.
Now, if an entity like the internet can become conscious, this presents a challenge to biologically and evolutionarily based accounts of consciousness, on which the best guess is that the conscious mind came about because it confers some adaptive advantage on living creatures. This need not be understood as the claim that all our mental activity is aimed at survival (a view demolished by the evolutionary psychologist William James as early as 1878), but it is crucial on such views that the reason we have a conscious mind is the survival advantage it gives us, and that this shapes both what kind of thing consciousness is and how it is structured, tying both to our embodied, living activities.
Beyond this simple disagreement, perhaps, is a difference of values. Saying that consciousness is neither embodied nor a property of activity creates a separation of mind and body, theory and practice, thought and activity, and insists that mind, theory, and thought are the kinds of things that matter to an intelligent, conscious entity. John Dewey (another early American psychologist) frequently argued that such a preference reflected class divisions going back to ancient Greece, where the slaves labored and the wealthy contemplated. Dewey argued that a more inclusive look at the breadth of human experience showed that intelligence, mind, consciousness, etc. were all practices aimed at the getting of needful things, the improvement of life, the direction of activity. Even the most apparently contemplative activity, when functioning properly, ought to be seen as rendering some practice more intelligent, as guiding some matter of embodied need or satisfaction.
It seems to me that certain views of consciousness which tie it to the brain and ignore the body, or focus on the individual to the exclusion of the social element in experience, might be open to similar critiques, but I’ll leave that as an exercise for the reader.
I wonder to what extent speculations about artificial minds spontaneously “evolving” self-consciousness depend on the assumption that evolution is directional, or works by “moving up” some sort of chain of life: first you’re an algal mat, then an insect, then a reptile, then a shrew, a monkey, and finally a human, complete with self-consciousness. My only real data here are the couple dozen Star Trek episodes where computers became self-conscious, and similar instances in other science fiction. And science fiction writers generally seem to think about evolution in exactly this way. The singularity folks seem to infer superconsciousness from really, really fast computers with lots of storage space, which would seem to rely on such an assumption — though I have to admit I don’t pay much attention to these folks.
I’m not sure about Sawyer, but what you say is absolutely right about Kurzweil – he fallaciously imputes to evolution a direction of improvement, and he sees technology as just a continuation of that evolution by other means.
Hi.
Many years ago I got an EE degree from UTD.
I found your blog after reading your paper on the relativity interpretation of quantum theory. Nice paper!
This is the first post I’ve read because the title jumped out at me (after the whiskey posts, that is!). I think the focus on embodiment here makes sense. As you may know, embodiment is a well-developed theme in the phenomenological tradition, exemplified by Husserl’s work on proprioception and kinesthesia, or Merleau-Ponty’s work on perception. Several people are currently doing some very interesting work trying to synthesize this approach with recent developments in cognitive science. In case you are not aware of this work, I thought you might like to hear about it. The people I have in mind are Alva Noe (http://socrates.berkeley.edu/~noe/), Shaun Gallagher (http://www.memphis.edu/philosophy/bios/gallagher.php), Dan Zahavi (http://cfs.ku.dk/staff/?id=34520&f=1&vis=medarbejder), and Evan Thompson (http://evanthompson.me/). I’m just a software developer, but I think their work is valuable.
I also like the way you connect embodiment to a priority of practice over theory, though I am not familiar with Dewey’s writings in that area.
Finally, I am especially glad to have the bit from James on Spencer. Very good stuff there.
Anyway, nice post!
Thank you for the comment! I’m not as familiar with the phenomenological tradition as I probably should be, though I’ve definitely read some work by Noe and Thompson (and I’m generally aware of the thinking of the others). I’d put them under the general heading of “enactivism,” which is an interesting but often hard-to-understand set of views.