The Economics and Ethics of Substituting AI for Human Creators
A few weeks ago, a journalist for The Atlantic wrote an apologetic article about having illustrated one of his previous pieces with an image created by an AI art tool. Soon after a reader pointed out the AI-generated image on a social network, the story quickly went viral, prompting a flood of outraged reactions, including insulting personal messages sent to the poor journalist.
What is going on here? What can explain this kind of reaction? Let’s first try to characterize the phenomenon at stake to get a better understanding of the kind of issues it can lead to. Basically, what we have is a classic case of the substitution of capital for labor. If you’re using an AI to produce a piece of art instead of a worker (an artist), you’re increasing the quantity of capital relative to the quantity of labor in your production function. There is nothing new here: this kind of substitution effect has been observed with every technological innovation. Technological innovation affects the marginal productivity of the factors of production, leading producers to revise the tradeoffs between them. More specifically, it will generally make the use of technology relatively cheaper, inducing the substitution of capital for labor. This happened with the mechanization of the agricultural sector, and then with the industrialization of Western economies. The development of AI is the latest step in the same process, affecting “creative” activities which were until now thought to be immune to the substitution effects of technological innovation.
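The substitution mechanism described above can be made concrete with a toy model. The following sketch (my own illustration, not from the post) assumes a Cobb-Douglas production function, Q = K^a · L^(1−a); under cost minimization, the optimal capital/labor ratio is K/L = (a / (1 − a)) · (w / r), where w is the wage and r the rental price of capital. The numbers are purely hypothetical.

```python
# Toy illustration of the substitution of capital for labor.
# Assumption: Cobb-Douglas production Q = K^a * L^(1-a).
# Cost minimization implies the optimal ratio K/L = (a/(1-a)) * (w/r),
# where w is the wage and r the rental price of capital.

def optimal_capital_labor_ratio(a: float, wage: float, capital_price: float) -> float:
    """Cost-minimizing K/L ratio under Cobb-Douglas production."""
    return (a / (1 - a)) * (wage / capital_price)

# Hypothetical numbers: an AI tool halves the price of "creative capital".
before = optimal_capital_labor_ratio(a=0.5, wage=30.0, capital_price=10.0)
after = optimal_capital_labor_ratio(a=0.5, wage=30.0, capital_price=5.0)

print(before)  # 3.0
print(after)   # 6.0 -- cheaper capital shifts the cost-minimizing mix toward capital
```

Nothing hinges on the specific functional form: any innovation that lowers r relative to w tilts the optimal factor mix away from labor, which is exactly the effect artists are reacting to.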
That technological progress and its substitution effects trigger concerns, criticisms, and sometimes violent reactions is not new. The so-called “Luddite” movement is a well-known illustration. It does not require much empathy to understand why people whose jobs are directly threatened by an innovation would like to halt it. But from a broader perspective, is there any reason to oppose the kind of substitution that AI art tools may generate? I see three such possible reasons:
1. It has economically adverse effects. The most obvious one is that it will increase unemployment, including over the long run.
2. It is unfair toward human creators because, everything else equal, it will negatively affect their bargaining power on the market without any relevant justification.
3. It is unfair toward human creators because AI art tools do not actually produce “art”, and so there is no (or there should not be) real competition between these tools and humans.
Reasons 1 and 2 are pretty standard and have historically been invoked in all situations where technological progress threatened human jobs. Until now, the claim underlying reason 1 has proven to be essentially false. There is no clear-cut mechanism through which technological innovation must lead to an increase in unemployment over the long run within capitalist economies. This should be qualified, however. First, rates of unemployment have increased slightly in most Western economies over recent decades, and, more significantly, rates of employment have decreased over the same span. People are also, in general, working less today than 50 or 100 years ago. Second, we obviously cannot infer from the past what will happen in the future. During the 19th and 20th centuries, technological innovation transformed the structure of capitalist economies, displacing jobs from agriculture to industry, and from industry to services. It might be pointed out that it is not obvious where the jobs destroyed by the use of AI will be relocated in the capitalist economies of the future. Nonetheless, employment is only part of the story. Ultimately, the function of an economic system is to help individuals satisfy their needs and live flourishing lives. It would be foolish to deny that technological progress has helped in this endeavor. Now, of course, we may doubt that AI will make the same contribution to humanity as the invention of the steam engine.[1]
I have always found reason 2 problematic, at least if we subscribe to the idea that ethical beliefs must satisfy a minimal consistency requirement. Technological innovation, like any factor that affects a parameter of the economic system in one way or another, will change opportunity costs and thus economic behavior at the margin. Hence, the allocation of goods in the economy must change as a consequence. Except in the rare cases where authentic Pareto improvements are possible, some will lose and some will gain. The fact that some economic agents see their situation worsen cannot be sufficient to view the new allocation as unfair. If that were the case, it would mean that the status quo has a privileged ethical status when we normatively assess some state of affairs. Why not? But then, if you subscribe to this view, you should be prepared to accept that the end of slavery was unfair to slaveholders. Even if we grant that the worsening of one’s situation can be one relevant consideration in terms of (un)fairness, it is obviously only one among many others. So, maybe we can find others that support reason 2. For instance, it may happen that the owners of AI are already very wealthy economic agents, while those negatively affected were already in a relatively less comfortable situation. But even then, fairness does not exhaust the range of normative reasons that may be relevant to assessing whether we should accept an economic change or not. Such reasons include, for instance, beneficence (i.e., overall wellbeing) or liberty (to innovate, to freely choose the means to produce art, …).
We are left with reason 3, which is more peculiar to the case at stake. I see it almost as a kind of “perfectionist” view according to which the value V of some good G is constituted by the fact that it is the product of certain human qualities and activities. Art, as a valuable good, cannot be produced by anyone other than humans because it is constitutive of something being a piece of art that it has been produced by a human being. Because of this, it is unfair that human artists are put in competition with AI, as the latter is not producing art. I think that this reason, in an obviously less articulated form, is the one behind many of the reactions in the unfortunate case of the journalist of The Atlantic. Even if we accept this argument, what I said above about fairness being only one normative consideration among others still applies. I don’t find it convincing anyway. We may of course grant the idea that something has value partly because of the way it is produced. We can give value to a piece of art or a handcrafted object because of the amount of effort, expertise, or sacrifice it has required.[2] But this is not the only source of value in the vast majority of cases. The sources of value are typically plural and often incommensurable. It is hard to believe that value disappears just because something has not been produced by a human.[3]
While I was thinking about this post and this specific point, it crossed my mind that it is unclear how I would react if I were to come across an AI writing blog posts and academic articles. On reflection, I think I would first be amazed and then feel challenged. But ultimately, I think I would end up reading the pieces written by the machine just as I read any piece written by a human. A good way to end this post is to ask my readers the same question: how would you react if you were to learn that this post has been written by an AI?
[1] We could even see in AI an existential threat to humanity over the long term. But I will keep this topic of longtermism for a future post!
[2] For instance, while I have an interest in chess and I’m routinely amazed by some chess moves played by humans, I’m less prone to have this reaction with chess matches played by machines.
[3] And even if that were true for artistic value, we are considering here products that, while properly speaking artistic, are essentially valuable for instrumental reasons.