Friday, May 7, 2010

A reply to Sam Harris

[A friend of mine enthusiastically linked to this Sam Harris piece on Facebook. I found it dreadful and philosophically incoherent, even on its own terms. I thought I'd post what I wrote about it in my email to my friend. I've tried to keep the post contained; my only point is to show that Harris' argument breaks down even if one accepts all his assumptions about morality.]


Harris writes:

I was not suggesting that science can give us an evolutionary or neurobiological account of what people do in the name of "morality." Nor was I merely saying that science can help us get what we want out of life. Both of these would have been quite banal claims to make (unless one happens to doubt the truth of evolution or the mind's dependency on the brain). Rather I was suggesting that science can, in principle, help us understand what we should do and should want -- and, perforce, what other people should do and want in order to live the best lives possible. My claim is that there are right and wrong answers to moral questions, just as there are right and wrong answers to questions of physics, and such answers may one day fall within reach of the maturing sciences of the mind. As the response to my TED talk indicates, it is taboo for a scientist to think such things, much less say them in public.

Sam Harris wants not just a descriptive scientific theory of morals but a prescriptive one: a Science of Morals that helps people make moral decisions, and not just any decisions. It enables them to choose the right alternative, where "right" is defined in some objective way. In other words, he wants to discover (or invent, or at the very least believes in the possibility of) an algorithm that helps people make the correct decision.

Now what would this algorithm be like? Are moral decisions subjective or objective?

Here Harris brings in John Searle. Searle says that both words, "subjective" and "objective", have two senses: an epistemological one and an ontological one. So according to him, a scientific theory of consciousness (which doesn't exist yet) would have to be epistemologically objective (meaning that the theory uses "inputs" accessible to any interested person), while the content of consciousness remains ontologically subjective (meaning only I have access to my states of consciousness).

So a scientific theory of morals would be like a scientific theory of consciousness: it would be epistemologically objective, meaning any interested person would reach the same conclusion if they applied it. But is the decision produced by this moral theory an ontologically objective one? Or is it ontologically subjective, like conscious states?

Harris says that the scientific theory of morals should take into account states of consciousness and should promote those that indicate the "well-being" of a person. Apparently, advances in neuroscience and a better scientific understanding of consciousness will help us understand "well-being" neurobiologically, i.e. scientifically, i.e. objectively.

Therefore, we can have not just a descriptive scientific theory of morality but a prescriptive theory too. It will work like this: the theory will allow a person to make decisions that maximize his or her well-being. And since future developments in neuroscience will allow us to define "well-being" objectively, in terms of a person's conscious states, our prescriptive theory of morals will be objective as well.

Well, not so fast.

If well-being is a conscious state, is it ontologically subjective or objective? If I know my Searle correctly, he holds that ALL states of consciousness are ontologically subjective. So if an algorithm that helps me make moral decisions depends on well-being, and well-being is ontologically subjective, how is a prescriptive theory of morals going to be ontologically objective?

Harris would probably say that this is wrong: a theory of consciousness will result in a definition of well-being that is ontologically objective. He gives this analogy in support:
As I said in my talk, the concept of "well-being," like the concept of "health," is truly open for revision and discovery.
So "well-being" is like health. I can prove someone is ill, even if he continues to deny it, by shoving a thermometer into his arm-pit and pointing out to him that his body temperature is higher than normal. A scientific theory of consciousness will allow us to produce thermometer-like instruments that can measure a person's well-being objectively, and thereby open the way to an ontologically objective prescriptive scientific theory of morals.

Whew. But this is all too abstract. Let's take specific cases and see how well this theory holds up.

Suppose I have a moral dilemma: should I take a job that lets me help people (which I want to do)? Or should I take a job that lets me do research on cognition (something that's not really going to help people, but which I would love to do)? Clearly I cannot do both. How will Harris' proposed theory help me? Presumably, it will calculate my well-being under each of these alternatives and then point me to the one that maximizes it.
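To make the shape of this proposal concrete, here is a minimal sketch, in Python and purely for illustration, of what such a decision algorithm would have to look like. The function predict_well_being is entirely hypothetical (nothing like it exists); on Harris' view, some future neuroscience would supply it.

```python
# A sketch of the hypothetical moral algorithm: score each alternative
# by the objective well-being it would produce for the person, then
# recommend the alternative with the highest score.

def predict_well_being(person, choice):
    """Hypothetical placeholder: map a person and a choice to an
    objective well-being score. No such function exists today."""
    raise NotImplementedError("awaiting the mature science of the mind")

def morally_right_choice(person, choices):
    # The prescriptive step: pick the well-being-maximizing alternative.
    return max(choices, key=lambda c: predict_well_being(person, c))

# My dilemma, fed to the algorithm:
#   morally_right_choice("me", ["help people", "research cognition"])
```

Notice that everything contentious is hidden inside predict_well_being: it has to forecast my future conscious states and score them on an objective scale.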

But consider, dear reader: whatever I decide, I will miss something, yes? And that something will affect my state of well-being, no? So if I decide to do research in cognition, what makes this a better moral choice? But there's more. To help me make a choice at all, the algorithm has to calculate my future well-being. How does it do that? If I choose a career as a researcher in cognition on the algorithm's recommendation and end up hating it (because I don't get along with my colleagues), isn't the algorithm wrong?

But you will probably say that I am missing the point.

Harris doesn't want a theory that helps us make these day-to-day personal decisions.

He wants a theory that can yield results like these: (1) Not wearing a veil is objectively better than wearing a veil because it promotes a better state of well-being. (2) Liberal secular democracies are objectively better than theocracies because the "well-being" promoted by the former is greater than that promoted by the latter. (3) The "well-being" of religious people is objectively less than that of atheists or non-religious people. You get the drift.

I wish with all my heart that a theory that proves these things objectively will one day come into existence. But it is precisely here that the weaknesses in Harris' piece are revealed.

First of all, why should a prescriptive theory of morals apply only to these situations and not to my day-to-day personal decisions? After all, if well-being can be defined in an ontologically objective way, then the theory should apply to ALL decisions that affect my well-being, no?

But second, how is this prescriptive theory going to account for a woman who chooses to wear a burqa outside and insists that she feels better with it? Harris would presumably drag her, kicking and screaming, to his well-being center, wire up her brain, and demonstrate triumphantly that she really isn't well, despite her assertions to the contrary.

Harris would be far better off trying to define his "well-being" concept in terms of concrete things, e.g. by using statistics that demonstrate the better health, greater mobility, longer lives, etc. of women who choose not to wear a veil compared to those who do. But that's not "objective" enough for him (and it isn't, but I would be fine with that; it is far more coherent). People like him are always looking for "deep" reasons to demonstrate conclusions that are dear to them. And because Harris is a neuroscientist, he thinks some future scientific explanation of consciousness (derived, of course, from neuroscience) will help him with that. Well, best of luck with that!

[By the way, I see many parallels between this and the Marxist notion of "false consciousness."]
