Parfit, Esoteric Morality, and Consequentialism
This is a short follow-up to my last post on esoteric morality. In that post, I discussed the relevance of the view that morality can be esoteric to decision-making under risk. I argued that because the right attitudes toward risk are unlikely to be determined by any moral theory, and because most moral decision-making takes place under risk, the view is implausible. Here, I shall briefly consider whether this view makes sense at all.
My thinking has been triggered by rereading Derek Parfit’s Reasons and Persons (R&P), which discusses the notion of esoteric morality in the context of the possible “self-effacing” nature of consequentialism.[1] In Section 17 of R&P, Parfit indeed contemplates the possibility that “theory C” – referring to a broad range of consequentialist theories – might be self-effacing in the sense that it might be better according to C that nobody has the dispositions, intentional attitudes (beliefs, desires), and emotions leading us to act according to C. Parfit gives the example of theft. From an (act) consequentialist perspective, the best state of affairs may be one where we accept some level of theft because, e.g., this is the only way for the most desperate to improve their conditions without significantly harming the richest.[2] On this version of C, avoiding theft is only instrumentally, not intrinsically, good, and therefore what we should do is contingent on the circumstances. It might conceivably be true, however, that knowing this would lead to more theft in the population than is best. The second-best outcome would then be for everyone to forget C and abide by what Sidgwick calls “common-sense morality”, a morality that tends to universally condemn theft. If this is true, C would be self-effacing in Parfit’s sense.
Parfit has an interesting discussion about whether the fact that a moral theory is self-effacing implies that it is false or not the best available theory. He argues that it depends on our view about the nature of morality, i.e., whether it consists of independent truths unaffected by our views about them, or whether it is a “social product” relying on a publicity condition – this echoes my discussion in the aforementioned post. Parfit suggests, however, that the most likely case is one where the best state of affairs as evaluated by C itself is for a small fraction of the population to know and act in accordance with C, and for the rest of the population to follow common-sense morality. In this scenario, C is “partly self-effacing, partly esoteric”. The reason this scenario is more likely is that causing ourselves, in one way or another, not to believe and act according to C would make it impossible to fall back on C should a change of circumstances make acting on C lead to the best outcome. The best would thus be for C to be partly esoteric, known and followed by a happy few.
Parfit does not discuss the possibility that C is esoteric any further. In my preceding post, I suggested that the size of the subgroup that knows and acts according to C is settled by a threshold such that, if more people than the threshold indicates know and act according to C, the outcome will be worse than if we remain below the threshold. But there seems to be some inconsistency. More precisely, let's say that according to C the best outcome O* results if and only if a fraction T of the population chooses A, and the rest chooses B (by acting based on common-sense morality, for instance). If everyone knows C, then what we have is a coordination problem, but nothing indicates that C must be esoteric. People just know that what they ought to do depends on what others are doing. In this case, the best theory might be a form of “cooperative utilitarianism” (CU). The implementation of CU might be difficult or even impossible because coordination and collective action problems are hard to solve. Using Parfit’s terminology, CU may even be “indirectly self-defeating”: if everyone acts based on CU, coordination problems may lead to worse outcomes for each person than if people act based on another moral theory. But this does not establish that CU should be esoteric – quite the contrary.[3]
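To make the coordination point concrete, here is a deliberately toy model (all numbers are my own illustrative assumptions, not drawn from Parfit): aggregate value, as evaluated by C, peaks when exactly a fraction T of the population acts on C and declines once T is exceeded. Under full publicity, everyone can compute this, so the situation is a coordination problem rather than a case for secrecy.

```python
# Toy model of the threshold claim (all numbers hypothetical).
T = 0.3  # hypothetical threshold: the optimal fraction acting on C

def outcome_value(f):
    """Aggregate value (as evaluated by C) when a fraction f acts on C."""
    if f <= T:
        return f / T                  # value rises up to the threshold...
    return max(0.0, 1 - (f - T))      # ...and falls once T is crossed

# With full publicity, everyone can compute the best profile: it is the
# one where exactly a fraction T acts on C -- a coordination problem,
# not an argument for keeping C secret.
best_f = max((i / 100 for i in range(101)), key=outcome_value)
```

The shape of `outcome_value` is arbitrary; the point is only that when the payoff structure is common knowledge, the difficulty is getting the right number of people to choose A, which is what CU addresses.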
If we want to make C esoteric, what we need is for it to prescribe that everyone act in some specific way, e.g., refrain from stealing. Because of this, the threshold T is likely to be crossed, leading to a worse outcome as evaluated by C than if not everyone acts on C. A principle of secrecy S would then prescribe that C should be esoteric. But it seems that S must be part of C, because the very justification of S is based on C! A possibility is now for C to be self-undermining. This will happen if, once S figures as a component of C, the threshold is lowered to T² < T. This is plausible because secrecy encourages hypocrisy, manipulation, deception, and a range of other dispositions that can have adverse effects as evaluated by C itself. This may entail a principle of second-order secrecy S², which must then be considered part of C. The regress reaches an end once we have Tⁿ⁺¹ = Tⁿ. This does not imply that C must be fully self-effacing or the wrong theory. However, if we view morality also as a social phenomenon that raises coordination and cooperation issues, then a consequentialist has strong reasons to commit to CU rather than to C and its secrecy principles.
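The regress can be sketched in the same toy spirit. Assume, purely for illustration, that each added secrecy principle lowers the threshold by a diminishing amount, so the sequence T, T², T³, … decreases toward a fixed point where Tⁿ⁺¹ = Tⁿ (the halving rule and the floor value below are my own hypothetical choices, not part of the argument):

```python
T = 0.3          # initial threshold, hypothetical
FLOOR = 0.1      # hypothetical lower bound the thresholds approach

def next_threshold(t):
    """Hypothetical rule: adding a further secrecy principle S^n shaves
    off half the remaining distance to the floor, so thresholds fall
    but with diminishing effect."""
    return FLOOR + (t - FLOOR) / 2

thresholds = [T]
while True:
    t_next = next_threshold(thresholds[-1])
    if abs(t_next - thresholds[-1]) < 1e-9:  # T^(n+1) == T^n: regress ends
        break
    thresholds.append(t_next)
```

Any lowering rule with diminishing effect yields the same moral: the regress terminates, but at a threshold strictly below the one C started with, which is the sense in which the secrecy principles undermine C.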
[1] Derek Parfit, Reasons and Persons (Oxford University Press: 1987 [1984]), especially pp. 41-3.
[2] It is easy to apply the same reasoning to other cases, for instance breaking a promise or tolerating some level of corruption.
[3] Parfit (R&P, Section 13) has a discussion of “collective consequentialism”, but the latter is not the same as CU.