Esoteric Morality Versus Public Reason: The Case of Risk-Taking
I have been thinking about writing a post on the topic of “esoteric morality” for at least two weeks now. An esoteric morality can be conceived as a set of moral principles that are secretly known and applied only by a small subset of the population. This goes against the widespread idea – found in Rawls’s writings but also endorsed by many others – that morality depends on one form or another of public justification addressed to everyone. The FTX bankruptcy story that is everywhere in the news has been the trigger that finally led me to write this post, as it illustrates some of the problems underlying the claim that defining and doing the good and the right may be the business of some enlightened few. In this post, I present the idea of esoteric morality and discuss one specific issue that arises in the context of decision-making under risk.
Let’s start by making the concept of esoteric morality more precise. The general idea is expounded in Henry Sidgwick’s The Methods of Ethics, especially in a well-known paragraph:[1]
“Thus, on Utilitarian principles, it may be right to do and privately recommend, under certain circumstances, what it would not be right to advocate openly; it may be right to teach openly to one set of persons what it would be wrong to teach to others; it may be conceivably right to do, if it can be done with comparative secrecy, what it would be wrong to do in the face of the world; and even if perfect secrecy can be reasonably expected, what it would be wrong to recommend by private advice or example... Thus, the Utilitarian conclusion, carefully stated, would seem to be this; that the opinion that secrecy may render an action right which would not otherwise be so should itself be kept comparatively secret; and similarly, it seems expedient that the doctrine that esoteric morality is expedient should itself be kept esoteric.”
Let us ignore the self-defeating character of the (public) claim that “the doctrine that esoteric morality is expedient should itself be kept esoteric”. Sidgwick goes as far as to suggest that in some cases, one should keep a recommendation based on Utilitarian principles to oneself, “even if perfect secrecy can be reasonably expected”. In their book on Sidgwick’s ethics, Katarzyna de Lazari-Radek and Peter Singer illustrate this claim with the example of a surgeon who is about to perform brain surgery on a patient she knows she can save for sure, but who also has the possibility of harvesting this patient’s organs to save four others who would otherwise die for sure.[2] They argue that in this kind of case, the utilitarian recommendation that the good thing to do is to kill one patient to save four is indeed correct, conditional on the act remaining perfectly secret. Letting the act become public would undermine the trust that patients must have in their surgeons and would overall do more harm than good. Of course, the Utilitarian principles that permit such an act, and the principle that the act’s permissibility depends on its remaining secret, must themselves be kept secret.
This example and the case of absolute secrecy are of course extreme. But they reflect the general idea behind the concept of esoteric morality. Knowledge of some class of acts and of the principles justifying them may have adverse consequences according to the very standards of the theory that grounds these principles. The theory therefore states that these acts are good if and only if knowledge of them and of the underlying principles is restricted to a subset of the population such that, overall, these acts maximize the good. In extreme cases, the subset reduces to one person, i.e., the person who is acting. Most of the time, however, the subset is larger and includes individuals with particular “moral competencies” thanks to which they accept and abide by the principles, up to the point where extending the subset further would have self-defeating implications.[3]
There are natural affinities between consequentialist theories and the idea of esoteric morality, though that does not mean that every consequentialist theory must accept it. This natural link is due to the fact that in most versions of consequentialism, what is right and wrong is contingent on facts. This contingency includes who knows what in given circumstances. Because so-called “deontological” theories tend to reject this contingency between the right and facts, they de facto reject the idea of esoteric morality as meaningless, or as simply contrary to fundamental principles such as, for instance, the respect for persons. If an act is right, it is in virtue of properties that do not depend on facts. Note however that because purely deontological theories may be hard to defend, most moral accounts are in practice liable to confront the issue of esoteric morality. That is why many theories impose a publicity requirement, as is the case for all theories relying on the idea of public reason. According to theories of public reason, moral justification is inherently public in a well-specified sense. A moral code does not determine the right and the good if it cannot be justified to all members of the population to which it applies, at least under some ideal conditions.
While most persons’ well-considered intuitions will lend some support to the publicity requirement underlying the idea of public reason, the latter also has its problems. So even though it is highly counterintuitive, esoteric morality should not be automatically discarded. There are nonetheless strong reasons to be wary of it. I will content myself with one in particular that I have not seen discussed and that is related to risk-taking. The FTX story provides an important illustration of the kind of problems that may arise in this respect if esoteric morality is taken as a moral principle. As has been widely documented, Sam Bankman-Fried, the founder of FTX, has been heavily involved in the effective altruism movement. His company has massively funded projects through the FTX Future Fund, whose board included several prominent contributors to effective altruism and longtermism. Even though we are far from knowing all the facts, one possibility is that Bankman-Fried used investors’ money, taking more or less irresponsible risks, to prop up another company and to support projects through the FTX Future Fund.
This illustrates a serious problem with the idea of esoteric morality when moral decisions are made under risk, which is of course the rule rather than the exception. Even though effective altruism and longtermism are not reducible to utilitarianism, there is a very tight relationship between this social movement and forms of consequentialism that lend themselves easily to the claim that the right thing to do can depart from common sense morality. As I noted in my review of Will MacAskill’s book on longtermism, effective altruists and longtermists have a strong tendency to downplay the role of the state and, more generally, the importance of public justification as a moral requirement.
Based on those considerations, assume that some consequentialist theory T recommends a set of acts A because they maximize the expected value of the good (according to some widely accepted measure). The fact that we are reasoning in terms of expectations is important: it indicates that we are in a context of risk or uncertainty. For simplicity, suppose that the distribution of risks is commonly known in the population. Now, while A may have the highest expected value, there may be a (very) low probability that the acts in A lead to a disastrous outcome as evaluated by T itself. Assume that there is an alternative set of acts A’ that avoids such a disastrous outcome for sure, but with a significantly lower expected value.
Now, following the idea of esoteric morality, A is the right thing to do because it is recommended by T, even though a large majority of the members of the population may hold a different judgment. More significantly, T may even recommend that the large majority of the population should not be aware of A (and presumably, of T), despite the fact that they may be affected at the margin by the choice of A instead of A’. An objection naturally emerges. While we may eventually agree that secrecy is justified in cases where individuals are not directly affected, this hardly seems to be so when they are. We may even go further. When risk-taking is involved in moral decision-making, it is doubtful that the risk attitudes of the majority of the population can be ignored. Indeed, according to expected utility theory, the comparative assessment of lotteries (probabilistic distributions over outcomes) incorporates risk attitudes. In other words, what is normatively relevant according to this theory is the expected utility of options, not their expected value.
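A minimal numerical sketch may make the contrast concrete. The two lotteries and the utility function below are purely illustrative assumptions of my own, not drawn from any actual case: A has the higher expected value, yet a risk-averse (concave) utility function ranks the safe alternative A’ above it because of A’s small chance of disaster.

```python
import math

# Hypothetical lotteries, as (probability, outcome) pairs.
# A: high expected value but a 1% chance of a disastrous outcome.
# A_prime: a safe alternative with a lower expected value.
A = [(0.99, 100.0), (0.01, -1000.0)]
A_prime = [(1.0, 50.0)]

def expected_value(lottery):
    return sum(p * x for p, x in lottery)

def expected_utility(lottery, u):
    return sum(p * u(x) for p, x in lottery)

def u(x):
    # A risk-averse (concave) utility function; the coefficient 0.01 is an
    # arbitrary stand-in for one possible risk attitude among many.
    return 1.0 - math.exp(-0.01 * x)

print(expected_value(A), expected_value(A_prime))            # A ranks first (89 vs 50)
print(expected_utility(A, u), expected_utility(A_prime, u))  # A' ranks first
```

The ranking reversal depends entirely on the choice of the risk-aversion coefficient: a sufficiently mild risk attitude would leave A on top, which is precisely why treating one group’s risk attitudes as authoritative is doing substantive moral work.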
A consequentialist theory T that tries to account for risk attitudes and to measure the good in terms of expected utility will thus face the difficulty that it is hardly possible to determine what the “best” risk attitudes are.[4] In situations of moral risk and uncertainty, esoteric morality licenses some persons to treat their own risk attitudes as authoritative. Crucially, the fact that some individuals may have privileged access to technical knowledge allowing them to assess risk better is not relevant here. The question is rather to what extent we should be ready to incur risks in moral decision-making.
These considerations are relevant beyond effective altruism and longtermism. Consequentialist views may for instance lead to the conclusion that climate engineering is justified or that non-democratic forms of political regime are preferable. Utilitarians and other consequentialists may be tempted to defend these options as part of esoteric morality. These are however both cases where decisions incur significant risks – not necessarily significant in terms of probability, but in terms of the badness of the worst possible outcomes. Even if we agree with the claim that, in some cases at least, it may be best to delegate moral decision-making to individuals endowed with the relevant capacities and in possession of the required knowledge, it remains that the willingness to take risks cannot be completely outsourced to esoteric morality. Secrecy also runs against the fact that moral behavior largely takes place in settings where moral agents monitor each other, which is especially the case when risk and uncertainty prevail. In other words, while a subset of the population may be granted moral authority to make risky decisions for everyone else in specific contexts, this authority must be publicly justified and amenable to public scrutiny.
[1] Henry Sidgwick, The Methods of Ethics, 7th edition (Hackett, 1981 [1907]), pp. 489-90.
[2] Katarzyna de Lazari-Radek and Peter Singer, The Point of View of the Universe (Oxford University Press, 2016), pp. 297-8.
[3] More formally, let K(n) be the number of persons who know about some act A. Let T be a theory stating that A is best in circumstances C based on a principle P, as long as K(n) is below some threshold K(n)*; if the threshold is reached, then some other act A’ is deemed better than A according to P. T therefore recommends that A should be kept relatively secret. Let’s call this the “secrecy principle” S, which is itself part of T. Then, according to the same principle P, the number KS(n) of persons knowing S should remain below some threshold KS(n)*; otherwise, some other act A’’ is deemed better than A according to P. Let’s call this the “second-order secrecy principle” 2S. Continuing the reasoning, we can imagine reaching a stage where, if the number KT(n) of persons knowing theory T – that is, P, S, 2S, and so on – is above a threshold KT(n)*, T recommends an act A’’’ that is not best according to itself in circumstances C. In this case, T would be self-defeating.
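The threshold structure of this footnote can be rendered as a toy function (all numbers, thresholds, and act labels below are hypothetical illustrations, not part of any actual theory): k[i] is how many people know the i-th order secrecy principle, and the first breached threshold shifts the recommendation one step away from the first-best act A.

```python
def recommended_act(k, k_star):
    """Toy rendering of the footnote's secrecy-threshold structure.

    k[i]: number of people who know the i-th order principle (A itself,
          then S, then 2S, ...); k_star[i]: the matching threshold
          (K(n)*, KS(n)*, ...). A breach at order i pushes the
          recommendation from A to A', A'', and so on.
    """
    for order, (known, threshold) in enumerate(zip(k, k_star)):
        if known >= threshold:
            return "A" + "'" * (order + 1)
    return "A"  # every secrecy condition holds: A remains best

print(recommended_act([5, 2], [100, 50]))    # secrecy holds -> "A"
print(recommended_act([120, 2], [100, 50]))  # first-order breach -> "A'"
print(recommended_act([5, 60], [100, 50]))   # second-order breach -> "A''"
```

The self-defeating case in the footnote corresponds to knowledge of the whole list of principles spreading past its own threshold, at which point the function can no longer return the act T itself deems best.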
[4] Of course, this difficulty is compounded when there is no agreement on the probabilistic distribution of risks, and even more so if a recommendation should be based on subjective probabilistic assessments of the likelihood that different moral theories are true, as proposed by MacAskill and other effective altruists.