The Future Matters. But How Much?
“Future people count. There could be a lot of them. We can make their lives go better.” These are the three basic claims that open What We Owe the Future, the recent book by William MacAskill, an Oxford philosopher. MacAskill is currently one of the best-known living philosophers, if not the best known. He has made a name for himself over the last decade – while barely in his thirties – through his writings and initiatives related to effective altruism (EA). EA’s basic idea is that well-off persons have a duty to help less well-off persons, and to do so in the most effective ways, by allocating resources where they will save the most lives and do the most to improve them. The three basic claims that open MacAskill’s book are constitutive of a related but distinct thesis, called longtermism (LT). The book, therefore, offers a thorough discussion of LT, its foundations, and its implications, including what we should do as individuals and societies.
In a previous post about EA, I noted that the very logic of EA seemed to indicate that a significant part, if not all, of our resources should be allocated to preventing “existential risks” that threaten to end humanity as we know it. This is a counterintuitive and hard-to-accept conclusion, considering the suffering of so many currently living people. While MacAskill intentionally avoids using the notion of existential risks, this is nonetheless the core issue faced by LT: how should we allocate our resources, considering that the number of human beings and other animals that could live in the future is astronomically large compared to the number of human beings who have lived until now?
The book discusses several problems and strategies concerned with the quality and the number of lives of future people. There are two general and complementary approaches that are relevant in this context. First, we can focus on improving the values and ways of life of future persons. Second, we can emphasize increasing the number of future persons by preventing the extinction, collapse, or stagnation of society. The former refers to strategies aimed at improving the trajectory of societies, the latter to strategies aimed at safeguarding civilization for as long as possible. The exploration of these approaches confronts the reader with several thorny issues about the assessment of the expected value of actions, population ethics, and whether living beings (including non-human animals) have lives worth living.
MacAskill’s general outlook is fairly optimistic. He is convinced that the future can be a good place for living beings – though it could also turn out to be a disastrous one, or even one without any life at all. He is also confident that we can have a tremendous impact on how the future will unfold, and that this century is in some way pivotal. We still have time to prevent very serious risks from materializing (climate change, nuclear or bacteriological war, non-aligned artificial general intelligence), as well as to change and improve our values. But the time available for doing so may be considerably shorter than is commonly believed. LT thus indicates that we have a moral duty to allocate a significant part of our resources to improving and safeguarding our civilization for the future.
Each time I reflect on LT and its implications, I feel uncomfortable. On the one hand, it is difficult to dispute the three basic claims of LT. Each is reasonable in its own right. If we accept them, then we cannot reject LT as such. We must accept that the future morally matters and that, as far as practical reason is concerned, we have good reasons to make the future better. But on the other hand, the sheer logic of numbers makes LT implausible. If LT is true, and given reasonable expectations about the number of future lives, then the well-being of currently living persons is simply irrelevant. Now, there are ways to at least partially circumvent this problem. Surprisingly, however, MacAskill does not directly address the problem or its partial solutions. Throughout the book, he refrains from making any quantitative assessment of reasonable intertemporal tradeoffs, even as he is not shy about offering the reader probabilistic assessments of existential risks. There are other issues as well, of course. So, instead of a detailed review of everything that is interesting or problematic in the book, let me for the rest of this post cherry-pick the points that I find the most relevant.
The issue of value plasticity and lock-in
MacAskill discusses the issue of value plasticity and lock-in in two chapters of the book. This is the part I have found the most original and refreshing, also because it connects to issues that are not related to LT. In a nutshell, the values prevailing at a given moment in a given society can be more or less “plastic”. Values are plastic when they are not definitively entrenched and can still be changed through intentional and unintentional actions. By contrast, we are in a value lock-in when it appears difficult, if not impossible, to change the prevailing values. MacAskill focuses especially on the case of slavery and abolition as an example of a value problem that is both significant (the abolition of slavery has considerably improved the lives of billions of persons) and contingent (abolition was not inevitable and might never have happened).
One of the risks associated with artificial general intelligence is that it could accelerate the emergence of definitively entrenched value systems, without any guarantee that these systems would be morally good. On this subject, MacAskill develops considerations about cultural evolution that are relevant beyond the risk of non-aligned AGI. They highlight, in particular, one of the challenges that liberal societies confront by their very nature. One of the strengths of liberal societies is that, as John Stuart Mill understood, they are oriented toward moral exploration and experimentation. The institutions of liberal societies are calibrated to diminish the risk of value lock-in. On the other hand, to survive, liberal societies must lock in their foundational values. This “lock-in paradox”, as MacAskill calls it, offers a new and interesting perspective on the current crisis of liberal societies.
Neutrality
Chapter 8 of the book deals with difficult issues related to population ethics. MacAskill endorses what is sometimes called the “total view”, i.e., the view according to which it is good to add one more life as long as that life is worth living. A well-known issue with this view is that it leads to what Derek Parfit famously called “the repugnant conclusion”.[1] According to it, a very large population with lives barely worth living is better than a relatively small population with lives of high quality. Many reject the repugnant conclusion and thus the total view, but an increasing number of population ethicists are now willing to accept both. The reason is that the alternatives are unattractive. The main alternative to the total view is what is sometimes called the neutrality thesis. It states that adding a life is neutral in terms of goodness, even if that life is worth living. This leads to the person-affecting view: you can improve goodness only by improving the life of a person who is already living, and therefore not by creating new lives. The problem with the neutrality thesis is that it has inconsistent implications, as MacAskill illustrates very clearly.[2]
There is another alternative, though: the “critical level view”. MacAskill does not reject it outright and indeed grants it a positive probability of being the true view. The critical level view states that adding a life improves goodness only if the well-being associated with that life is above some critical level, higher than merely being worth living. MacAskill grants that this view is reasonable and then contends that, under moral uncertainty, we should weigh moral theories according to their probability of being true. If you believe that the only two plausible views are the total view and the critical level view, then what you obtain is basically a “weighted critical level view”, where the morally relevant critical level is set by multiplying the critical level by the probability that the critical level view is the true one (assuming that on the total view a life worth living has a level of well-being at least equal to 0). For obvious reasons, MacAskill refrains from giving any quantitative assessment of what the critical level could be. The result is particularly unhelpful because we are left with no indication of how good the future must be for it to be worth adding lives. MacAskill broadly defends a natalist view in the book, arguing that having children is overall a good thing. He offers other arguments in support of it (in particular, that it lessens the risk of civilizational stagnation), but the argument from population ethics is inconclusive at best.
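To make the arithmetic concrete, here is a minimal sketch of the weighted critical level computation. All numbers are purely illustrative assumptions – MacAskill provides none.

```python
# Illustrative assumptions only: MacAskill gives no numbers for these quantities.
p_total = 0.6          # assumed probability that the total view is true (critical level 0)
p_critical = 0.4       # assumed probability that the critical level view is true
critical_level = 5.0   # assumed critical level of well-being under that view

# The weighted critical level: the well-being a new life must exceed for
# adding it to count as an improvement, given moral uncertainty.
weighted_critical_level = p_total * 0.0 + p_critical * critical_level
print(weighted_critical_level)  # 2.0
```

The mechanics are trivial; the problem is that the output is only as informative as the inputs, and we are given no quantitative stance on any of them.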
Moral decision-making under uncertainty
As the discussion of theories of population ethics indicates, MacAskill defends the idea that when making moral decisions under theoretical uncertainty, we should weigh moral theories according to the probabilities we ascribe to their being true. MacAskill has indeed written a whole book on the subject.[3] Now, I have not read this book, but there are obvious problems with this account. The most important is that it seemingly leads to an infinite regress: what is true for moral decision-making should also be true for the choice among theories of moral choice under uncertainty, which would itself require a higher-order theory, and so on. Why not settle for a maximin or a maximax account? Or some sort of weighted rule? Maybe the objection is not so serious, but there is still the problem that most of the relevant decision-making related to LT is collective, not individual. There is no chance that individuals share a common prior about the truth of moral views, and it is well known that heterogeneous priors can lead to cases of “spurious unanimity”.[4] In general, as Kieran Setiya puts it in his review of MacAskill’s book, it seems that the best you can do is settle on the view you consider the most likely to be true and act accordingly, while accepting the possibility of error.
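To see how the weighting rule differs from the alternatives just mentioned, here is a minimal sketch. The theories, credences, and choiceworthiness numbers are all invented for illustration; this is not MacAskill’s formal apparatus.

```python
# Hypothetical credences in two moral theories and hypothetical
# choiceworthiness scores of two actions under each theory.
theories = {"total_view": 0.7, "critical_level_view": 0.3}  # assumed credences
choiceworthiness = {
    "fund_bednets":   {"total_view": 10.0, "critical_level_view": 8.0},
    "fund_ai_safety": {"total_view": 50.0, "critical_level_view": -20.0},
}

def expected_choiceworthiness(action):
    # MacAskill-style rule: weigh each theory's verdict by its probability.
    return sum(p * choiceworthiness[action][t] for t, p in theories.items())

def maximin_choiceworthiness(action):
    # Alternative rule: judge an action by its worst verdict across theories.
    return min(choiceworthiness[action].values())

for rule in (expected_choiceworthiness, maximin_choiceworthiness):
    best = max(choiceworthiness, key=rule)
    print(rule.__name__, "->", best)
# expected_choiceworthiness -> fund_ai_safety  (0.7*50 + 0.3*(-20) = 29 > 9.4)
# maximin_choiceworthiness -> fund_bednets     (8 > -20)
```

The two rules disagree, which is precisely the regress worry: choosing between them is itself a theoretical choice, and nothing in the weighting approach tells us how to make it.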
The role of political institutions, especially the state
One category of actors is almost never mentioned by MacAskill: political institutions, and especially the state. This may seem hard to believe given the scale of the problems MacAskill is concerned with. We are talking about curbing climate change, preventing nuclear disasters, and implementing ambitious policies to make sure that AI’s values are aligned with ours. But though MacAskill does at times take examples where political institutions have been essential in solving the problem at stake (such as regulating the use of chlorofluorocarbons), he does not seem to consider them among the key players in tackling the challenges that LT presents us with. There is no mention of the UN, the World Bank, the OECD, or the IMF in the whole book.
For those who are familiar with EA, this will hardly come as a surprise. EA was developed as an approach to poverty issues that is implicitly skeptical of the effectiveness of political action. This is indeed one of EA’s main raisons d’être: the implicit premise is that policies designed at the state or international level are not effective; otherwise, EA would recommend that their role and influence be increased. True, not all effective altruists are skeptical about political institutions, and most would say that EA is complementary to state initiatives. It remains the case, nonetheless, that the political economy and philosophy of EA are intrinsically individualist and mostly oblivious to the problem of collective action. This is obvious when one reads the last chapter of MacAskill’s book: most of the examples discussed by the author are individual initiatives. Of course, EA and LT have gathered a large community of persons, from billionaires to modest workers, ultimately initiating a massive political movement with significant potential for collective impact. But it is hard to believe that all the aforementioned problems will be solved just by encouraging people to choose their professional careers so as to maximize the amount of money they can give. A major blind spot of this book, and of the whole EA/LT community, is the set of issues related to collective decision-making, and primarily whether, and how, democratic regimes are the most likely to lead to the best decisions.
The role of risk-aversion
MacAskill discusses at length the issue of decision-making under uncertainty. This is understandable given the kind of problems at stake. He endorses the fairly standard expected value framework. In large part, it is this framework that leads to the disturbing implications of LT. Because the number of possible future lives is astronomical, the expected value of preventing the emergence of a non-aligned AGI is immense, even though the probability that this risk materializes is small. The point is that MacAskill uses the expected value framework in what I would call the “intuitive” way, assuming that the probabilities and the values of outcomes are determined independently.
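The intuitive computation is a straightforward multiplication. With invented numbers of the kind that drive LT’s conclusions:

```python
# Purely illustrative numbers, not MacAskill's own estimates.
p_catastrophe = 1e-4          # assumed probability of a non-aligned AGI catastrophe
future_lives_at_stake = 1e16  # assumed number of potential future lives
present_lives_saved = 1e6     # assumed benefit of a present-day intervention

expected_future_lives = p_catastrophe * future_lives_at_stake
print(expected_future_lives)                        # 1e12
print(expected_future_lives > present_lives_saved)  # True: the future swamps the present
```

However small we make the probability, an astronomical enough number of future lives keeps the expected value dominant – that is the “intuitive” use of the framework.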
Now, this is not how decision theorists and economists generally use the expected value framework. Since the work of von Neumann and Savage in the mid-twentieth century, axiomatic results have established how the expected utility (rather than value) of an action is fundamentally related to the risk attitudes of the decision-maker. The basic idea is the following: the utility of an outcome y for a decision-maker depends on the decision-maker’s willingness to incur risk to achieve a better outcome x while possibly ending up with a worse outcome z. If you arbitrarily assign utility levels of 1 and 0 to x and z respectively, then the utility of y is simply the probability p such that the decision-maker is indifferent between receiving y for sure and playing a lottery that yields x with probability p and z with probability 1 − p. In other words, outcomes, and thus actions, do not have (expected) utility in themselves, independently of the decision-maker’s willingness to take risks.
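A minimal sketch of this calibration, with an invented indifference probability:

```python
# Von Neumann-Morgenstern-style calibration of u(y); values are illustrative.
u_x = 1.0  # utility of the best outcome x (arbitrary normalization)
u_z = 0.0  # utility of the worst outcome z (arbitrary normalization)

# Suppose the decision-maker reports indifference between receiving y for sure
# and a lottery paying x with probability p and z with probability 1 - p.
p_indifference = 0.8  # assumed elicited value

# The indifference condition pins down u(y):
# u(y) = p * u(x) + (1 - p) * u(z) = p
u_y = p_indifference * u_x + (1 - p_indifference) * u_z
print(u_y)  # 0.8 -- the utility encodes risk attitude, not intrinsic value
```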
Consider how this is relevant for LT. Suppose that you have the choice between three actions. Action S ranges over the short term, e.g., an immediate action to reduce present poverty. Action L ranges over the long term, e.g., an action that should contribute to aligning AGI with our values. Action N is doing nothing. We can assume that actions S and N have sure outcomes. Action L, however, has an uncertain outcome: more specifically, action L will be useless unless the corresponding existential risk materializes at some point. Ignore all considerations related to time preference. We are sure that N leads to the worst outcome, so there is no point in choosing it. Should we choose S or L? The point is that if we choose L, it is as if we were opting for a lottery in which the outcome associated with N obtains with very high probability (in case the risk does not materialize) and the outcome associated with L obtains with very small probability (in case the risk materializes). When the decision problem is framed this way, we see that a relatively risk-averse decision-maker can rationally choose the short-term action S because its effect is certain, contrary to the long-term action. By choosing the long-term action, you are taking the risk that it is useless, while by taking the short-term action you know for sure that it will have a positive effect.
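Here is a minimal sketch of this framing, using a concave utility function to model risk aversion; the payoffs and the risk probability are invented for illustration.

```python
import math

# Invented payoffs on a common scale of goodness.
v_N = 1.0      # doing nothing (worst outcome)
v_S = 100.0    # sure benefit of the short-term action S
v_L = 10000.0  # benefit of the long-term action L if the risk materializes
p_risk = 0.02  # assumed probability that the existential risk materializes

def u(v, risk_averse):
    # Concave (log) utility models risk aversion; linear utility models neutrality.
    return math.log(v) if risk_averse else v

for risk_averse in (True, False):
    eu_S = u(v_S, risk_averse)  # S has a sure outcome
    # L is a lottery: the N-like outcome with high probability, the L payoff otherwise.
    eu_L = p_risk * u(v_L, risk_averse) + (1 - p_risk) * u(v_N, risk_averse)
    choice = "S" if eu_S > eu_L else "L"
    print(f"risk_averse={risk_averse}: EU(S)={eu_S:.2f}, EU(L)={eu_L:.2f} -> choose {choice}")
# risk_averse=True:  EU(S)=4.61,   EU(L)=0.18   -> choose S
# risk_averse=False: EU(S)=100.00, EU(L)=200.98 -> choose L
```

With these made-up numbers, the same decision problem flips with the risk attitude: the risk-averse agent picks the short-term action, the risk-neutral agent the long-term one.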
Of course, by choosing S, you are actually taking a risk: that the existential risk materializes. In this sense, choosing S would not be rational (assuming that you prefer avoiding the emergence of a non-aligned AGI and a morally locked-in world to reducing present poverty). But we should also weigh in our computation the low degree of confidence we have in the effectiveness of the long-term decision to reduce or suppress the risk of a non-aligned AGI, against the high degree of confidence we have in our actions to fight present poverty. The point, ultimately, is that it is not irrational to be risk-averse. That makes it reasonable to prefer actions we are (almost) sure will be effective to actions whose usefulness and impact are very uncertain. I am unsure whether this is a way to recast LT so that it leads to less extreme conclusions, or a simple rejection of LT as a framework for moral decision-making. In both cases, it is indisputable that the future matters – but that cannot imply that we should give everything to it.
[1] Derek Parfit, Reasons and Persons (OUP, 1984), Part Four.
[2] Suppose that you have a temporary health condition such that if you decide to conceive a child now, the child will for sure have to live with chronic migraines (alternative y). Assume that the child’s life will still be worth living. The alternative is not to procreate (alternative x). On the neutrality thesis, x and y are identical in terms of goodness. Now, suppose that you have the possibility of delaying procreation for a few weeks, in which case the child will be born in perfect health (alternative z). There is no doubt that z is better than y. Since neutrality implies that x is as good as y, transitivity would then imply that z is better than x. But neutrality also implies that x is as good as z. If you accept transitivity as a constraint on moral reasoning, then either the neutrality thesis is false, or you have to accept weird moral views (i.e., that y is as good as z).
[3] William MacAskill, Krister Bykvist, and Toby Ord, Moral Uncertainty (OUP, 2020).
[4] Philippe Mongin, “Spurious Unanimity and the Pareto Principle”, Economics and Philosophy, 2016, 32(3): 511-532.