Aaron Ross Powell has an interesting essay about the arguments against the use of AI to create “intellectual” content such as art or pieces of writing. Powell identifies three such arguments:
“The most common arguments against the use of LLM technology—the chatbots like ChatGPT that produce text from a prompt, or the image generators like Midjourney that produce visual works—take a few forms. First, that these technologies depend upon learning from the work of human artists and writers, and those artists and writers weren’t compensated, or weren’t compensated fairly, for the use of their creations as training material. Second, even if they were, it would be wrong to use ChatGPT to generate text or Midjourney to generate images because doing so takes business away from human artists and writers.
What’s more, the resulting prose and pictures aren’t bad just because they’re so cheaply produced, or because they are patterned after the uncompensated knowledge and skill of humans, but also because they’re shameful. They lack the human element: the creativity and spark found in prose and imagery made by humans. Thus the person generating or using these knock-offs is triply wrong: he’s benefiting from stolen goods, he’s depriving a human of money, and he’s fundamentally misunderstanding the nature of the thing he’s consuming.”
So, the arguments are: (i) LLM technologies unfairly exploit artists’ and writers’ creations; (ii) by using these technologies, people destroy human jobs; (iii) the use of LLM technologies fundamentally alters the very nature of the creative goods produced and risks undermining their value. Powell notes, however, that the very same arguments could be (and sometimes have been) used to reject the automation of the production of, say, bread. Despite this, artists and writers don’t seem bothered by consuming goods largely produced through automated processes. What, then, justifies such an asymmetry? Let’s consider some plausible answers.
Answer #1: Automation of the production of bread, chairs, or cars is now part of our economic way of life. We are used to it, and the economy has evolved in such a way that new jobs have been created to compensate for the initial destruction of jobs, so that nobody currently living is harmed. This is not the case for contemporary artists and writers. Automation will affect these people’s lives and probably harm them.
This answer grounds the asymmetrical treatment in the fact that time goes forward. The past cannot be changed, and the harms suffered by people no longer living are not relevant. The same is not true for persons living now. This is obviously a weak argument. It implies that by the end of the 19th century, it would have been justified to stop automation, which would have resulted in far lower material wealth for everyone living since then.
Answer #2: The knowledge required to make bread or to build a car is mostly explicit and formal, i.e., it can be communicated through symbols. This explains why early automation of these productive activities was possible. The kind of knowledge at play in creative activities is far more tacit, and LLM technologies can only mimic it imperfectly through a partial formalization. A part of this knowledge thus remains uncaptured, which explains why creative content produced by AI adversely affects the value of creative goods.
This answer supports argument (iii). Indeed, if we grant the claim, then the fact that consumers of creative goods may not know whether a good has been produced by an AI or a human creates a type of adverse selection. Unable to distinguish between a good produced by an AI and one produced by a human, most people would consume the former (because it is presumably cheaper to produce and thus sold at a lower price), with the result that the truly tacit form of knowledge that gives a particular value to human-produced creative goods would disappear from the market. One problem with this answer is that it somewhat begs the question. What is the basis for claiming that AI-produced creative goods have less value, or lack a form of valuable tacit knowledge? The fact that LLM technologies are able to produce creative goods that, from the perspective of most consumers, are indistinguishable from ones produced by humans militates against this postulate. In particular, most creative goods are also experience goods, meaning that their value is mostly derived from the experience related to their use. If you’re experiencing two creative goods in the same way, the fact that one is produced by a machine and the other by a human is irrelevant. If, on the contrary, users’ experiences differ, then it is up to them to judge whether these differences are relevant or not. Bottom line: the adverse selection argument is less relevant for experience goods.
Answer #3: What LLM technologies are doing is formally theft, because they do nothing other than reuse material produced by humans. This is different for the production of material goods, where no property rights are infringed.
This supports argument (i). Several remarks can be made. First, property rights, especially intellectual property rights, are not “natural kinds.” They are institutional creations that reflect shared practices and beliefs about who gets to use what, and under which conditions. That doesn’t mean that anything goes, but the fact that some system of intellectual property rights exists doesn’t imply that it is justified or legitimate. The evolution towards open access in science, for instance, reflects the growing belief that the system of intellectual property rights in the scientific domain should be revised. Second, there are ways to ensure that the use of LLM technologies complies with existing property rights. Partnerships between newspapers such as Le Monde and companies such as OpenAI are starting to be concluded and suggest that some kind of beneficial cooperation between AI and human creators is possible. Third, what LLM technologies are doing is basically what human creators have been doing for centuries: taking inspiration from the work of others, sometimes borrowing or even copying it, with the aim of producing something “new.” Let’s put it bluntly: most “creators” do not create anything new; they just reformulate what has been done by others – and yes, that may perfectly well apply to this essay.
Answer #4: The social and economic consequences of automation in the 19th and 20th centuries have been essentially positive. There are reasons however to think that this will not be the case with AI in the 21st century. It is unclear whether the jobs destroyed will be compensated by the creation of new jobs. Moreover, while in the past automation has generated gains in productivity, it is far from clear that productivity gains in producing creative goods and the related benefits for society will be similar. Finally, the automation of the production of creative goods is dangerous because it opens the door to malevolent and even totalitarian practices, such as the manipulation of mass information.
Though speculative, some of the fears expressed here may be justified. But this answer probably underestimates the potentially huge benefits that can flow from the use of AI for the production of creative goods. Humans and machines may turn out to be complementary, thus improving both the quantity and the quality of creative goods. We face deep uncertainty here. That’s why a regulatory framework is needed to make sure that adverse socioeconomic consequences are kept to a minimum while we take advantage of the new technological possibilities.
None of these answers is fully convincing, and therefore those who endorse the asymmetry between the case of material goods and the case of creative goods still carry the burden of proof. I think today’s fears of the automation of creative activities oscillate between three stances. In many cases, they just reflect the purely selfish considerations of persons scared to lose their source of income. The fact that these are selfish considerations doesn’t mean that they are irrelevant or unjustified, but they must be balanced against the potential social benefits. In other cases, the opposition to AI reflects a broader Luddite stance that expresses a concern with the potential emergence of a society where humans will no longer work. As said above, this concern cannot be dismissed, but it must be addressed rationally, without succumbing to panic. Finally, a less articulated stance corresponds to the fact that the growing importance of AI in the production of creative goods is a key step toward what I’ve called the “disenchantment of the inner world.” The rationalization of practices and thoughts that marks the emergence of modernity has entered a new stage with AI. AI strengthens the idea that the human brain is nothing but a complex biological machine that can be emulated by a computer. Even more worrying, by “mechanizing” our thoughts and our creativity, we may fear that AI will impoverish our minds and turn us into machines properly speaking, in the same way that automation has contributed to the rationalization of society and trapped individuals in an “iron cage.” This last stance may be grounded in a misguided “romantic” account of pre-industrial societies and of human nature. It may nonetheless largely explain why it is difficult to accept that machines may one day be able to create.