Vulcans Part 1: Do Vulcans Care?
I just got back from an excellent conference on ethics of consciousness (thanks to Uriah Kriegel and Rice University for organizing). At the conference, there was a lot of talk about the hottest new thing in philosophical thought experiments: philosophical Vulcans. These imaginary creatures are the creations of David Chalmers, and he describes them as follows:
Let’s say that a Vulcan is a conscious creature who experiences no happiness, suffering, pleasure, pain, or any other positive or negative affective states. The Vulcans on Star Trek aren’t quite as extreme as this: they experience lust every seven years and experience at least mild pleasures and pains in between. To avoid confusion with Star Trek we could call our version philosophical Vulcans, by analogy to philosophical zombies. [...]
Vulcans’ lives may be literally joyless, without the pursuit of pleasure or happiness to motivate them. They won’t eat at fine restaurants to enjoy the food. But they may nevertheless have serious intellectual and moral goals. They may want to advance science, for example, and to help those around them. They might even want to build a family or make money. They experience no pleasure when anticipating or achieving these goals, but they value and pursue the goals all the same (2022, p.327).
So philosophical Vulcans, unlike philosophical zombies, do have consciousness. They have experiences of color, sound, texture, etc. They just don’t feel any pleasure or displeasure. (I prefer “displeasure” to “pain” since there are lots of unpleasant experiences, like nausea and itchiness, that are not painful.) Vulcans are constitutively incapable of having pleasant and unpleasant experiences — not even a tiny little bit, not ever.
Why does this matter? Because, Chalmers thinks, philosophical Vulcans can teach us something about the ethical significance of pleasure and pain: namely that they’re not nearly as important as many philosophers think. For example: utilitarians, and especially followers of Jeremy Bentham, have thought that pleasure and pain are all that matter from the moral perspective. This is one way of reaching the conclusion that, because philosophical Vulcans don’t feel pain or pleasure, they don’t matter morally in the same way that people or animals do. I agree with this general conclusion, but Chalmers thinks that it is all wrong:
Suppose you’re faced with a situation in which you can kill a Vulcan in order to save an hour on the way to work. It would obviously be morally wrong to kill the Vulcan. In fact, it would be monstrous. It doesn’t matter that the Vulcan has no happiness or suffering in its future. It’s a conscious creature with a rich conscious life. It cannot be morally dismissed in the way that we might dismiss a zombie or a rock. (2022, p.327-328)
I think Chalmers is mistaken. I think we should think about Vulcans in pretty much exactly the same way that we think about zombies and rocks. This isn’t to say that we should destroy them for frivolous reasons. Even zombies and some rocks shouldn’t be destroyed for frivolous reasons. Zombies are complex and interesting creatures, and we shouldn’t go destroying complex and interesting things for frivolous reasons. Similarly for rocks — some rock formations are interesting and beautiful, and we shouldn’t destroy them for no reason.
But zombies and rocks do not have interests. They do not care about anything, in any morally relevant sense. Nothing can be good or bad for them; nothing can be done to harm or help them. So they don’t matter morally in the same way that people and animals matter morally. And Vulcans are the same. They are complex and interesting, but they don’t matter morally in the way that people and animals matter morally. So, if you have a choice between hitting a Vulcan or a squirrel — for example — with your car, and if your only concern is for the injury or death of the creature you hit, then you should definitely hit the Vulcan.
Clearly, then, I don’t think much of Vulcans! Indeed, I think that my attitudes towards Vulcans are of the sort that Chalmers calls “monstrous”. I certainly don’t want to have monstrous moral beliefs, so that’s all the more reason to plead my case here.
Here is the basic idea. Chalmers writes that Vulcans “value and pursue goals,” and I agree that it’s possible to do this in the absence of pleasant or unpleasant experiences. Furthermore, when we human beings value and pursue goals, our goal-directed thought and action is a source of substantial reasons. If you value and pursue athletic achievement, then I have substantial reason to refrain from interfering with your pursuit of athletic achievement. But when Vulcans value and pursue goals, they do so in a radically different way than we do. And the differences are such that their goal-directed thoughts and actions are not a source of substantial reasons.
The way to see this is to separate out the two parts: the “valuing” and the “pursuing”. I contend that, in the absence of affective experience, neither valuing nor pursuing goals is morally significant. In other words, they are not sources of substantial moral reasons. Valuing, in the absence of affective experience, consists merely in detached moral judgments, and detached moral judgments are not sources of substantial reasons. Pursuing goals, in the absence of affective experience, consists merely in bare dispositions, and bare dispositions are not sources of substantial moral reasons, either. Nor do detached moral judgments and bare dispositions generate substantial reasons when they are combined together. So, when Vulcans “value and pursue goals”, there is nothing of moral significance at stake. (As an aside, I don’t think any of this is immediately obvious, but I am going to try to show that it’s plausible on reflection.)
Start with the point about detached moral judgments. To make the issues as clear and concrete as possible, I want to think through a specific case. So here it is:
Dogwatchers: Three women — Asha, Baara, and Clara — have promised to take care of their neighbor’s dog while he is away. Their only responsibility is to feed the dog twice per day. To make things even easier, they can feed the dog with the press of a button, and the button flashes red when it’s feeding time. The dog is visibly miserable and upset when he’s not fed on time.
Asha is responsible for feeding the dog on the first day. But when the button starts flashing, she does not press it. She hears the dog whimper and whine, and she thinks the following thoughts: “I promised to press the button, and it would cost me nothing to press it. I can also tell that the dog is very unhappy, and he’d be very happy if I pressed the button. All in all, I’m definitely obligated to press the button.” But still, she doesn’t press it; she is not even motivated to press it. She is not at all distressed by the dog’s distress, nor does she feel at all guilty about the prospect of breaking a promise. How creepy! Asha is a lot like the character of the amoralist, used in meta-ethics to make points about moral motivation. But for our purposes, the relevant question is different. Our question is whether Asha’s moral judgment is a source of reasons to press the button.
I think the answer is definitely “no”. Of course, we do have good reason to press the button for the dog’s sake, and for the sake of the dog’s owner. We’d be helping the dog, and his owner, by pressing the button. But we wouldn’t be doing Asha any favors. It’s all the same to her whether the dog gets fed or not. Her detached moral judgment does not generate any reasons to press the button.
So much for detached moral judgments. Now consider the case of a bare disposition. After Asha fails miserably on the first day, she is replaced by Baara on the second day. And thankfully, when Baara sees the button flashing, she immediately walks over and presses it. But this is because Baara has a very strange sort of disposition: whenever she sees a button flashing, she simply finds herself walking over to press it. This is simply a brute disposition of Baara’s, a product of some bizarre functional state in her brain. She derives no satisfaction from pressing buttons in this way, nor is she in the least bit distressed when someone or something interferes with her button-pressing. Her disposition isn’t backed by any kind of felt motivation at all. Our question is whether this bare disposition is a source of reasons to press the button.
Again, I think the answer is definitely “no”. Baara is a lot like Warren Quinn’s radio man, which Quinn uses to argue that brute-functional “desires” do not rationalize action. The point here is similar: from the fact that Baara has a bare disposition to press flashing buttons, it does not follow that it really matters to her whether flashing buttons get pressed. The same is true if she has a bare disposition to feed dogs, or to act as her neighbor instructs. Of course, if we were to prevent any of these dispositions from manifesting (by destroying the button, for example) then we’d be setting back the interests of the dog and his owner. But we wouldn’t be setting back Baara’s interests. It’s all the same to her whether the dog gets fed or not. Her bare disposition does not generate any reasons to press the button.
What if we combine the bare disposition with detached moral judgments? This is just what happens on the third day, when Clara shows up to feed the dog. Like Asha, Clara judges that she is obligated to feed the dog. And like Baara, Clara has a bare disposition to press flashing buttons (or feed dogs, or obey neighbors). But like Asha, Clara does not feel the least bit distressed by the dog’s distress, nor does she feel the least bit guilty about breaking her promise to her neighbor. And like Baara, Clara is not the least bit gratified when her disposition manifests, nor is she the least bit distressed when something prevents it from manifesting. So Clara combines Asha’s detached moral judgment with Baara’s bare disposition. Is this combination a source of substantial reasons to press the button?
Again, I think the answer is “no”. To be sure, there can be cases in which individually insignificant events combine to make something that is morally significant. If you and I consent to a contract, then our mutual consent is morally significant, whereas your consent or mine alone would be morally insignificant. But the present case isn’t like that. Detached moral judgments and bare dispositions are not essential parts of a morally significant concern or interest, in the way that individual acts of consent are essential parts of a mutually-binding contract. Rather, Asha’s judgment and Baara’s disposition are two kinds of evidence for morally-significant concern or interest. They are evidence because normally, when we judge that we are obligated to do something, or when we are disposed to do it, we have a morally significant interest in its getting done. But Asha and Baara are not normal people — their judgments and dispositions do not correspond with their genuine concerns — and so the evidence is misleading in their case. All that’s happening in Clara’s case is that these two kinds of misleading evidence are being heaped together. This simply makes the evidence all the more misleading. It doesn’t change the fundamental moral insignificance of the relevant thoughts and actions.
Here is another way of making the point. We can make sense of the fact that it’s all the same to Asha whether the dog gets fed, despite her moral judgment. And we can make sense of the fact that it’s all the same to Baara whether the dog gets fed, despite her bare disposition. As long as we hold these facts in mind, we can make perfect sense of the fact that it’s all the same to Clara whether the dog gets fed, despite her moral judgment and her bare disposition. So, while we should agree with Clara’s judgment, and we should not interfere with her disposition, this has everything to do with the dog and nothing to do with Clara.
Vulcans, I think, are just like Clara. They have moral beliefs, and they have dispositions to behave in accordance with those beliefs. This combination of features is ordinarily indicative of morally significant concerns, concerns which generate reasons for action and reasons against interference. But Vulcans lack these sorts of morally significant concerns, for the same reason that Clara lacks them. Detached moral judgments and bare dispositions simply do not generate morally significant concerns. That is my preliminary case for thinking that Vulcans have different — and lesser — moral status in comparison with people and animals.
There is a lot more to say. For one thing, friends of Vulcans might argue that Vulcans are capable of more than disinterested moral judgments and bare dispositions, even in the absence of affective experience. For another thing, friends of Vulcans might appeal to other features of their psychology to argue that they have genuine concerns. And finally, they might claim — as Chalmers does — that Vulcans would have human-like moral status even if they did not care about anything.
I think all of these objections can be met. I believe there is a strong case for thinking that affective experience is necessary for having genuine concerns that are a source of substantial moral reasons. And I believe there is a strong case for thinking that these sorts of genuine concerns are what separates humans and animals from zombies and rocks (so far as moral status is concerned). What is needed is a strong positive argument for the centrality of affective experience for moral status. But, whoops, I've already written more than 2000 words. I’m going to call this “Part One” and save the rest for another post.
Have you considered the possibility that perhaps there's something incoherent about the idea of conscious experience existing without a pleasure/displeasure valence?
And even if that concept isn't incoherent, I feel pretty sure that a conscious experiencer without any sense of pleasure/displeasure would be nothing more than a passenger in their body, lacking anything like true agency.
1) When it comes to normal people, I don't think of valuing as the same as or even entailing having moral judgments, and I wasn't sure why I should believe that it would be different for Vulcans
2) In the button-pressing examples, you talked about whether the agents' judgments/dispositions "generate reasons to press the button," but it wasn't clear whose reasons you were talking about: the agent's own, or a third party's. Third party seemed more likely, but that still seemed weird, because it didn't seem like a third party's pressing the button would satisfy the agent's obligation.
3) It was funny that the examples were about an agent's moral obligations because it seems questionable whether morally loaded desires are…
Locke takes a position that seems to be at odds with both you and Chalmers, namely that desire is incompatible with P-Vulcanism, because it is inherently a state of unease (Essay 2.20.29 and forward). IIRC, in an earlier draft of the Essay, he had a prospective-pleasure model of desire/the will, but revamped it because he was worried that you can't explain why merely thinking about an absent good that would be pleasurable would set you into motion if you were content at present. So, desire has to be a state of *present unease* at the absence of its object, not the mere prospect of pleasure at its attainment. P-Vulcans, then, aren't really having volitions/willings/desirings.
Some parts of what you write…
Daniel,
Now you've done it. Don't piss off the nerds who will defend Spock to the death. The depths of the Star Trek sub-culture will make Chalmers rue his thought experiment and spread it all the same.
I find your position strong and would like to add a caveat for the instrumental worth that a philosophical Vulcan (pV) could have in raising overall utility. At the very least, extending moral value to pV expands our moral compass to consider the greater good. I cannot claim that I would never sacrifice a few squirrels to keep a pV functioning long enough to help nurture a dog or enable a human to integrate a function. It's not a straightforward zero-sum equation.
pVs…