Preliminary Note: The following is very speculative!
I’ve been writing occasionally on AI here, especially about how the advent of AI may change our conception of ourselves as agents (here, here, and here). These concerns are, I think, as important as those related to AI’s economic impacts. Not that the latter are secondary. The AI-era economy is likely to have a “winner-takes-all” character that will considerably affect the distribution of wealth and of economic and political power. The real impact on jobs is difficult to foresee and, if history teaches us anything, it is that we humans are generally bad at predicting how technological change will affect the economic structure of our societies. Hence, we should probably not grant too much significance to doomsday scenarios that predict a labor-less society. Basic economic reasoning is nonetheless enough to see that AI is likely to induce a substitution of capital for most types of labor, shifting income from the latter to the former. If AI fulfills its promises, it will become an indispensable tool for remaining competitive, both for individuals and for organizations. Those who own these tools will acquire tremendous bargaining power. Since the “market for AI” is unlikely to be perfectly competitive, significant “rents” will exist. Whether and how we want to regulate this is, of course, one of the central questions that will occupy economists and policymakers for the coming decades.
AI-generated (ChatGPT) illustration. What else?!
As I said, the economic aspects of AI should not be the only cause for concern. The “cultural effects” may be even more significant over the long run.[1] As a follow-up to my previous writings, I would like to briefly consider two related effects that the unavoidable regular use of AI is likely to have on our social practices. I call the first the “uniformization effect” and the second the “disempowerment effect.” Both can be seen as expressions of the kind of “hyper-modernity” that AI is likely to generate – “hyper” in the sense that it reinforces features of modernity that are already at play while giving them a new dimension that is harder to cope with.
Board Games
To get a sense of what I mean by these labels, consider the impact of AI on board games like Go and Chess. Last year, the New York Times published an article on Go champion Lee Saedol who, in 2016, lost a match against AlphaGo, an AI program developed by Google’s DeepMind. As any expert will tell you, while the well-established superiority of computers over top-rated Chess players is largely due to the fact that machines excel at “brute force” calculation (i.e., computing as many variations as possible), Go was until recently preserved from computers because the game is so complex that brute force leads nowhere. Lee’s 2016 defeat was a shock because it strongly suggested that machines are able to do something other than blindly computing variations. Lee has since retired, and the NYT article strongly suggests that the defeat changed his conception of the game:
“Mr. Lee had a hard time accepting the defeat. What he regarded as an art form, an extension of a player’s own personality and style, was now cast aside for an algorithm’s ruthless efficiency.”
A few days ago, The Economist published an article by top-ranked, multiple-time world champion Magnus Carlsen. Carlsen has started a feud with the International Chess Federation (FIDE) by launching a new competition of “freestyle Chess.” In this variant of the game, each game begins from a randomized arrangement of the pieces on the back rank. The reason Carlsen wants to promote this version of the game (and, presumably, to spend more time playing it himself) is that he considers classical Chess to have been stripped of its creativity by the ever-increasing importance of computers. The opening phase of a Chess game at the highest level now consists mostly of reproducing on the board sequences of moves that have been discovered with the help of computers. Only the best players in the world can say how much of those moves is due to computers and what role human skill really plays, but the sense is that, in this phase of the game at least, humans have increasingly become the agents of machines.
The introduction of (now AI-based) computers in Chess and Go has changed the way these games are viewed and practiced, at least at the top level. Though there are “good” and “bad” moves that can be ranked in terms of their efficiency or correctness, even for the best players it has never been obvious which moves those are. This is what makes these games so fascinating: the quest for the perfect way of playing is never-ending because it lies beyond the human mind’s reach. That leaves room for players to develop different styles of play, from Mikhail Tal’s risky approach based on speculative sacrifices to José Raúl Capablanca’s positional play. However, in Chess as in Go, there probably is a “one best way” to play, and computers are increasingly able to find it. Over the board, humans still face the undecipherable complexity of these games and are still forced to look for creative solutions on their own. However, as virtually all players now learn the game with computers, we tend to see (at the highest level) a kind of uniformization of the way the game is played, especially in the openings. To give only one example, when I learned the game 25 years ago, “launching” one’s king-side pawns to attack the opponent’s king was considered a risky, bold strategy. Computers have helped discover that in many configurations this is not only reasonable but the best strategy. Nowadays, you see this kind of move quite often in high-level games, including from so-called “positional” players.
Chess and Go illustrate the effects of (AI-based) computers on social practices. Because computers are so good at finding the best moves and strategies, players follow their recommendations and, ultimately, mimic them. Because everybody uses the same sources, this triggers a mechanism of uniformization that progressively erases differences in “style of play.” We may also increasingly ask who is to be credited for a player’s brilliant play in the opening: the human who has made intelligent use of the machine, or the superior computing power on which the software runs? Computers are making practices more uniform, but they are also dissolving the weight of human agency, and thus responsibility, in the observed outcomes.
Uniformization
Social life cannot be reduced to board games, for sure. By nature, there are objectively optimal moves in Chess and Go. No such objective optimality exists in social life, or at least not to the same extent. It’s not only that social life is more complex than Chess or Go. It is also that there may be no objectively correct answer to questions such as “what is a good life?” or “what is the optimal tradeoff between security and freedom?” Hence, up to a point, we will not be forced to behave in a certain way as Chess players may be forced to play (or avoid) certain moves if they want to win. This is, however, beside the point. Acute observers of democratic societies during the 19th century, like John Stuart Mill and Alexis de Tocqueville, worried about the uniformization pressures coming from social opinion. These pressures had nothing to do with social opinion being “correct” – quite the contrary, Mill argued that they may prevent the emergence of socially beneficial innovations. As we increasingly use the same AIs, themselves trained on the same sets of data, we may expect a convergence of judgments, beliefs, and practices. Diversity would be undermined, not because AIs have found the “correct” answers, but because we all use them to form our beliefs and make decisions.
This is damaging insofar as diversity (of beliefs, of practices) is socially desirable. Now, a reasonable response is to note that the fact that we all use the same tools and technologies (cars, computers, phones) does not mean that we use them for the same purposes or in the same ways. Technologies are means to reach ends, and social diversity results as much from our disagreement about which ends are worth pursuing as from our disagreement about means and their use. Moreover, the supply side of the market for AIs will probably never be a monopoly. Different AIs will be available, and we may expect that they will not deliver the same output, even when fed the same input.
This is true, but only up to a point. The difference between AIs and other technologies, even computers, is that they display intentionality. Already today, while we are still at the beginning of the AI era, large language models (LLMs) can reveal, through the conversations we have with them, intentions, beliefs, and desires that enable them to make statements not only about how to pursue an objective (how should I prepare my recipe given that I am missing a specific ingredient?) but also about which objectives are worth pursuing. Even if AIs are wrong about this, or if there is no correct answer, they are nonetheless highly likely to be influential. A sustained ecological diversity of AIs would lessen the risk of uniformization. Even though AIs are essentially trained on similar data (with an increasing fraction of the input generated by AIs themselves), their algorithms will differ. We may also expect AIs to specialize, even once genuine artificial general intelligence emerges – this is, after all, what humans do.
We should therefore probably not overestimate the risk of uniformization. But the tendency nonetheless exists, in the same way that it exists in democratic societies as characterized by Mill and Tocqueville. At least we may hope that AI will uniformize through improvement, whereas Mill and Tocqueville were more concerned with uniformization through mediocrity. Improvement (e.g., in the quality and quantity of output) is, after all, what we expect from the use of technologies. If I use software to produce my econometric analysis, there is no doubt that this improves the speed and quality of my work – indeed, it even makes the analysis possible in the first place.
Disempowerment
The software example, however, illustrates a very important difference between AI and other technologies. When I use software or any similar technology to produce an output, this output is unambiguously mine. What the econometric software does are mathematical operations that I could, in principle, have done myself. The software doesn’t add anything I couldn’t have done on my own, at least in theory. Indeed, before using statistical software, we learn statistics so that we understand exactly what the machine is doing. This is the way humans generally work with technology. Technologies expand our abilities (a car allows us to move faster, a computer to calculate better and faster) but, strictly speaking, don’t create new ones.
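As an aside, the point is easy to make concrete. Here is a minimal sketch (in Python, with made-up data, purely for illustration) of what it means for the software to do only what I could have done myself: the regression coefficients a canned routine returns are exactly the ones I could compute by hand from the normal equation taught in any econometrics course.

```python
import numpy as np

# Made-up data for illustration: 100 observations, one regressor plus an intercept.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
X = np.column_stack([np.ones(100), x])
y = 2.0 + 0.5 * x + rng.normal(scale=0.1, size=100)

# "By hand": solving the normal equation (X'X) beta = X'y,
# exactly the operation one learns in an econometrics course.
beta_by_hand = np.linalg.solve(X.T @ X, X.T @ y)

# The same estimate via the library's canned least-squares routine.
beta_by_library, *_ = np.linalg.lstsq(X, y, rcond=None)

print(beta_by_hand)     # approximately [2.0, 0.5]
print(beta_by_library)  # identical up to floating-point error
```

The machine is faster, but every step it performs is transparent to the trained user – which is precisely what no longer holds with LLMs.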
Things are arguably different with AI. It is well established that nobody knows exactly how LLMs generate their outputs. These AIs are trained by humans who choose the data fed to them, but their inner operations largely take place within a black box. To write this essay, I could have asked ChatGPT or Claude to propose arguments, examples, or a structure.[2] I could even have asked them to rewrite some paragraphs or simply to write them from scratch. Would this text be mine, as in the case of the statistical analysis conducted with the help of software, or would it already be a co-authorship shared between the AI and me? A major difference is that in the AI case I don’t know exactly how the AI generated its output. For all I know, this output may be genuinely new, an authentically creative bit of knowledge that didn’t exist until now and that cannot be entirely reduced to the input with which the AI worked.
Skeptics will likely raise two objections. First, the initial trigger is the human who prompts the AI to answer a query. Second, the AI works with data produced elsewhere, generated by human brains. After all, LLMs are “merely” statistical machines that compute, from the data they have been trained on, the highest-probability continuation of a given word or sentence. The former objection makes a valid point, but note that many human cooperative production activities work like that – one human “prompting” another to produce an answer to a query. The latter objection is weak. It is reminiscent of the Marxist idea (and error) that, since capital goods have been produced by human labor, only human labor creates value. Moreover, most human activities, especially creative ones, don’t differ substantially from the way AIs use data. How many paintings or books are “inspired” by other paintings or books, elaborating on and extrapolating from material produced elsewhere?
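For readers who want to see what “computing the highest-probability continuation” amounts to in its most stripped-down form, here is a deliberately toy sketch. The bigram counts are invented for illustration – real LLMs operate on subword tokens with billions of learned parameters, not lookup tables – but the basic object, a conditional probability distribution over the next token, is the same.

```python
# Toy bigram "language model": raw counts of which word follows which.
# The counts are invented; they stand in for what an LLM learns from its data.
bigram_counts = {
    "the": {"cat": 6, "dog": 3, "idea": 1},
    "cat": {"sat": 5, "ran": 5},
}

def next_token_distribution(context: str) -> dict[str, float]:
    """Return P(next word | context) estimated from the counts."""
    counts = bigram_counts[context]
    total = sum(counts.values())
    return {word: c / total for word, c in counts.items()}

dist = next_token_distribution("the")
print(dist)                     # {'cat': 0.6, 'dog': 0.3, 'idea': 0.1}
print(max(dist, key=dist.get))  # 'cat' -- the greedy, highest-probability pick
```

Whether anything interesting about creativity changes when this lookup table is replaced by an opaque model with billions of parameters is precisely what the two objections above dispute.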
The bottom line is that AI is, and will increasingly be, a source of ambiguity about the reach of human agency. Humans are agents in the technical sense of the word, i.e., entities endowed with intentional states (beliefs, desires, intentions) that interact with their environment and have the capacity to causally affect it. More and more, AIs will also fulfill the conditions to count as agents, even if diminished ones. As our respective intentionalities mix, it will become more and more difficult to determine what is ours and what is theirs. This leads to very concrete questions, some of which prompted me to write this essay. For instance, up to what point can we consider that a student’s discussion paper should really be attributed to the student, her abilities, her effort, and her knowledge?
One can circumvent the problem by arguing that whatever has a human input (for instance, prompting the machine) belongs, agentively speaking, to the human who provided that input. That may do as a heuristic in specific contexts. It doesn’t, however, answer the deeper philosophical problem of empowerment and responsibility. Let’s put aside the Terminator-like scenarios in which a single autonomous AI takes full control of the world. Let’s also set aside the more plausible scenarios (likely to materialize in a more or less distant future) of general AIs able to reprogram themselves autonomously and to “choose” their own ends. If we only consider the very likely case of specialized AIs displaying superhuman abilities to solve well-specified sets of problems, we already face the possibility that humans will increasingly be discharged from responsibility for the outcomes generated by AIs. It is a scenario like this that Henry Kissinger alludes to in his last book, written with two technology executives from Google and Microsoft.[3] In a chapter dedicated to the impact of AI on politics, the authors write:
“Unprecedented information processing will enable truly efficient centralization of policy by AIs. One might expect this to reinforce the perception of control by elites. However, the opacity of these systems – and the notion that their operation may be optimized in the absence of human interference – will work in the opposite direction. It is possible that, with time and experience, human control may come to seem less a necessity than a burden. Even as it might initially have felt terrifying for eighteenth-century European leaders to surrender control to the invisible forces of human self-interest, the political leaders of the twenty-first century may yet be required once more to humble themselves before a system that incorporates the wisdom of the masses in an entirely new form.”[4]
Earlier in the same chapter, the authors speculate that the advent of AI may permit merging “theoretical and political wisdom,” i.e., the scientific knowledge that helps determine the best way to achieve ends and the practical knowledge of how to balance competing ends.
We are not there yet, but at the speed at which AIs are improving, this is no longer a science-fiction scenario. I’ve written recently on the “mass effects” that come with modernity. Modernity comes with the realization that the natural and social worlds are subject to a complex system of impersonal forces. It also comes with social changes that lessen individuals’ ability to effectively alter the course of the world and push them to retreat into their private sphere. This is part of a broader “disenchantment” that pictures the world as a rational, cold, valueless place on which individuals’ direct and indirect power to act is reduced to a minimum. Kissinger et al. describe a scenario where individuals are even more disempowered, including in politics. The most likely outcome is the disappearance of the sense of responsibility without which political freedom cannot be sustained for long. Even if AI “incorporates the wisdom of the masses,” human agency will count for almost nothing. On the other hand, since we are considering realistic scenarios, we still have to assume that AIs are owned and controlled by particular individuals and companies. We have circled back to the beginning of this essay, where I observed that those owning AIs would obtain tremendous economic, but also political, power.
Final Remarks
On a more positive note, what I’ve called in the title the “political tragedy of AI” is not bound to happen. It’s just a possibility, due to a tendency that the advent of AI creates in modern societies. Countervailing forces may prevent societies from following this path. We should also bear in mind the many benefits that AI will bring. Overall, I think our attitude toward AI should be the same as Tocqueville’s regarding democracy. Nothing will stop the development of increasingly powerful AIs. We should accept this, even welcome it, and be prepared to take full advantage of it. But we should also be mindful of the broader effects on our economic system and, even more importantly, on our cultural framework. In his time, Tocqueville rightly saw that the advent of democracy was opening the door to new forms of tyranny. Today, the same is true of AI – in a different form.
[1] For sure, the cultural and economic effects of AI are related, if only because the way AIs are effectively produced and used for economic purposes will have large impacts on our social practices. Views about the cultural effects of AI may thus depend on which economic scenario one judges most plausible.
[2] By the way, this essay is certified to have been written without direct or indirect assistance from AI.
[3] Henry A. Kissinger et al., Genesis: Artificial Intelligence, Hope, and the Human Spirit (New York: Little, Brown and Company, 2024).
[4] Ibid., pp. 97–98. My emphasis.
Excellent. Three remarks.
1. I re-read the third chapter of Mill’s On Liberty yesterday and was struck by a passage on why we don’t (and shouldn’t) want to delegate decision making, craftsmanship, and our agency generally to automata. A foreshadowing of Nozick’s experience machine, and probably echoed in a different way by Marx on automation.
2. Your two concerns remind me of James C Scott’s work, especially Seeing Like a State. There are interesting analogies between AI centralization and the emergence of states—both make us more legible, artificially so of course, and we make ourselves more legible to them. In the process we give up on agency, diversity, and local knowledge (métis for Scott).
3. Partly inspired by Scott, C Thi Nguyen’s work on gamification and value capture shares your two concerns as well.
(Again, Mill was right about the need to allow for many different kinds of experiments of living, as a corrective to the sort of optimization you and Nguyen are worried about.)
This is a great piece. I'd love to chat about it with you on my philosophy podcast if you're interested. If so, DM. Either way, good article!