It is not utilitarianism that is cold, but rather human moral intuition (at least when unguided by reason). Why do I say that? Consider the following:

When we learn that a friend has suffered a broken bone, we feel some pity for them, and an accompanying urge to offer our support or help.

When we hear that a homeless man in our neighborhood died of hypothermia in the depths of winter, we presumably feel even greater pity and a stronger desire to help.

When we are informed that our own loved one has died or experienced tragic suffering, we have an extreme emotional response, one that even produces physiological symptoms (anxiety, crying, a “lump in the throat”, etc.).

When we hear that, across the world, nearly 200 people were killed in a chemical explosion (many of them, surely, burned alive), we think about the scale of the disaster and feel terrible, but we fail to muster anywhere near the emotional response we would have for the loss of our own loved one. Even though we know, in our minds, that such a disaster is objectively about 200 times worse than the loss of a single person, we cannot possibly generate a correspondingly larger emotional response in our hearts.

Utilitarianism is merely the idea that we should be listening to the voice of reason rather than the voice of intuition.

Why should we expect that our intuition, or “human sensibilities” to use the term from the question, is an accurate guide in moral matters like this? Our intuitions and sensibilities were shaped by evolution. Think about where we are coming from: our moral sensibilities were never designed (read: evolved) to comprehend the loss of more than one or two individuals, and they were never designed to weigh the moral significance of individuals who aren’t genetically (and, in our ancestral environment, therefore geographically) close to us. Natural selection shaped our psychology so that we feel as horrible as possible when people related to us die, yet we know that we should feel far worse when 200 people die on the other side of the world. To expect our moral sensibilities to be accurate when processing the ethics of what happens to those unrelated to us and far away from us is to commit the naturalistic fallacy (conflating “natural” with “good”).

Our built-in sensibilities aren’t capable (presumably for evolutionary reasons) of accurately registering suffering on a scale larger than a few people. Our emotional response to the suffering of others reaches its maximum quickly:

[Figure: felt emotional response plotted against the number of people suffering; the response climbs for the first few individuals and then plateaus.]

Utilitarianism acknowledges this, and says that we should use reason to correct for our inability to intuitively feel the scale of suffering beyond a certain point. When we are aware that 200 people are in need of our help, we should compel ourselves to take 200 times the action to alleviate that need: the moral response scales with the moral need. Of course, we can only feel a small fraction of the suffering we can be aware of. Utilitarianism says that suffering matters just as much whether we can feel it or not, and that we should be morally compelled to alleviate it in proportion to its scale.
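To make the gap concrete, here is a minimal sketch in Python (my own illustration, not from the original answer). The function names and the logarithmic curve are assumptions chosen only to mimic the plateau in the figure above; the only claim carried over from the text is that felt response saturates while the morally required response scales linearly.

```python
import math

# Hypothetical toy model: a saturating "felt" response versus the
# proportional response utilitarianism prescribes. The log shape is an
# assumption; any bounded, quickly flattening curve would serve.

def intuitive_response(n_people: int) -> float:
    """Felt emotional response: rises for the first few sufferers, then flattens."""
    return math.log1p(n_people)

def utilitarian_response(n_people: int) -> float:
    """Prescribed moral response: directly proportional to the number in need."""
    return float(n_people)

for n in (1, 2, 10, 200):
    print(f"{n:4d} people: felt response ~ {intuitive_response(n):4.2f}, "
          f"prescribed response = {utilitarian_response(n):5.1f}")
```

On this toy model, the felt response to 200 deaths is only about eight times the felt response to one, while the prescribed response is 200 times as large; the gap between the two curves is exactly the deficit that reason must make up.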

The same inadequacy in moral intuition appears when we compare suffering that is genetically or socially close to us with suffering that is genetically or socially distant:

[Figure: felt emotional response plotted against genetic or social distance from the sufferer; the response declines as distance grows.]

This is where utilitarianism is often criticized as cold and calculating, because it teaches that one’s family isn’t any more important than a stranger on the other side of the world. Yet it doesn’t teach that we should care about our families less, only that we should care about others more: more, that is, than we otherwise would if we followed intuition or tradition. What is cold about being as compassionate to strangers as you are to those close to you? This is what true compassion looks like. After all, everyone is family to someone, and no one gets to choose which family and which environment they are born into. How, in ethics, can we justify helping our own friends and family exclusively? That is a form of selfishness, and it is (almost) unanimously agreed that selfishness is the opposite of morality.

As for utilitarianism being calculating, that should be a compliment, not a criticism. We have seen that human intuition simply fails to produce the proper moral reaction to certain situations: namely, things that happen to many people, and things that happen to people we don’t know. The reasons for this, I suspect, are evolutionary. What we are thus obligated to do is correct this deficiency, using reason to work out what is right, while acknowledging that our own emotions are designed to approximate morality only on a small scale, within families and tribes. Real morality must concern itself with the entire universe, not just what or who is closest to us.

Of course, many people who adhere to utilitarianism in principle are unable to follow it perfectly in practice. We don’t respond appropriately to all the injustices across the world that we should care about. We don’t treat everyone equally, even if we are consciously trying to.

I don’t think anyone has ever been a perfect utilitarian (and it would be impossible to know whether they achieved this). But what matters is that we try to do our best.

This page is adapted from the author’s answer to a Quora question. The original question can be seen here.

 
