Economics, AI, and the Anthropology of the “Digital Society”
What Is It To Be a Person in the Data Era?
AI nowadays seems to concentrate most of our anxieties – even more than climate change, even though the latter’s threat is far more urgent and concrete. There is of course the now mundane doomsday scenario of a “superintelligence” taking control of our lives and our society. Less apocalyptic but still speculative is the fear that AI will take most of our jobs. A related concern is that the concentration of AI technologies in the hands of a handful of big companies will politically undermine democracies by controlling the distribution of information, possibly degenerating into oppression. And there is the worry that AI will “counterfeit people,” in particular by mastering language and producing culture.
This last worry has interesting philosophical roots. It is indeed no accident that it has been expressed by Daniel Dennett and Yuval Noah Harari. Eric Schliesser has an interesting post discussing the connection between Dennett’s intentional stance theory and his pessimistic outlook on the prospects of AI. One of Dennett’s core philosophical contributions is his account of intentionality and agency, according to which intentional states and their meaning are uncovered by taking the intentional stance. It consists of explaining and predicting the behavior of an entity (a human being, but also an animal or a machine) by ascribing to it intentional states such as desires or beliefs. We, as humans, routinely take the intentional stance to interact with other intentional systems – as Eric puts it, the intentional stance refers to an ordinary cognitive practice that we have evolved.
Dennett’s account has sometimes been interpreted in instrumental terms. On this reading, the intentional stance is merely a device to predict the behavior of other agents without opening the “black box” of their intentional system. This is not the case, however. A purely instrumentalist reading of Dennett’s account would imply that belief attribution rests merely on a falsifiable theory of mind, and that we should give it up if we could show that this theory is unnecessary (with respect to some parsimony criterion). That actually corresponds to the eliminativist position that Dennett explicitly rejects. The intentional stance is more than a predictive device. Consider indeed hypothetical Martians who observe humans and try to predict our future on the basis of superhuman abilities that make them equivalent to Laplacean super-physicists:
“Our imagined Martians might be able to predict the future of the human race by Laplacean methods, but if they did not also see us as intentional systems, they would be missing something perfectly objective: the patterns in human behavior that are describable from the intentional stance, and only from that stance, and that support particular generalizations and predictions.”[1] (Dennett 1989, 25, emphasis in original)
Dennett’s point is that the intentional stance is not merely instrumental; it is the only way to observe real behavioral patterns. The intentional stance is thus not only methodological but also ontological. There is nothing more to the fact that entity E has the belief that b than the fact that E’s behavior can be interpreted and predicted (by E itself or by others) from the intentional stance through the ascription to E of the belief that b. This is a form of realism, though a mild one, since in many cases one’s mental states will remain partially indeterminate from the intentional stance.
Now, something that is sometimes underestimated, but on which Dennett is explicit, is that the intentional stance is tightly related to “already existing disciplines as decision theory and game theory, which are similarly abstract, normative, and couched in intentional language.”[2] This aspect of Dennett’s account has been dealt with in detail by the economist Don Ross.[3] Ross argues at length that the way economists, and more particularly revealed preference theorists, have formalized economic agency in terms of choice consistency is nothing but the application of the intentional stance. In other words, the choice-theoretic tools on which economists routinely rely are also ways to take the intentional stance and ascribe intentional states to economic agents.
Ross claims that this use of choice-theoretic tools is a constitutive part of what he calls “neo-Samuelsonian economics” (NSE). NSE is characterized by a particular conception of the economic agent that can be summarized in the following way:[4]
· Choices are the result of latent psychological dispositions.
· Mental states (beliefs, desires) are identified and interpreted by an observer (the choice theorist) based on choices.
· Choices are sensitive to variations in opportunity costs at the margin.
· Economic agency and personhood are disconnected.
An important feature of Ross’s NSE is that economists, at least when they do positive analysis, are not interested in persons but only in economic agents, that is, in patterns of choices that are consistent with the axioms of choice theory. The person literally disappears behind the choice data. This is all the more the case given that economists are generally interested not in individual choices per se, but rather in some aggregate level of choices.
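The idea that an economic agent is nothing but a consistent pattern of choices can be made concrete. Below is a minimal sketch (with hypothetical data and function names of my own devising) of how choice data can be tested against the Weak Axiom of Revealed Preference, one standard consistency condition used by revealed preference theorists; an agent whose choice data pass such tests can be ascribed stable preferences from the intentional stance:

```python
from itertools import combinations

def satisfies_warp(choices):
    """Check single-valued choice data against the Weak Axiom of
    Revealed Preference (WARP).

    `choices` maps each menu (a frozenset of alternatives) to the
    alternative chosen from it. WARP is violated when x is revealed
    preferred to y in one menu (x chosen with y available) while y is
    revealed preferred to x in another.
    """
    for (menu_a, pick_a), (menu_b, pick_b) in combinations(choices.items(), 2):
        # pick_a revealed preferred to pick_b in menu_a, and vice versa
        # in menu_b: a direct WARP violation.
        if pick_b in menu_a and pick_a in menu_b and pick_a != pick_b:
            return False
    return True

# A consistent agent: choices compatible with apple > banana > cherry.
consistent = {
    frozenset({"apple", "banana"}): "apple",
    frozenset({"banana", "cherry"}): "banana",
    frozenset({"apple", "banana", "cherry"}): "apple",
}

# An inconsistent agent: the apple/banana ranking flips across menus.
inconsistent = {
    frozenset({"apple", "banana"}): "apple",
    frozenset({"apple", "banana", "cherry"}): "banana",
}

print(satisfies_warp(consistent))    # True
print(satisfies_warp(inconsistent))  # False
```

Nothing in this check refers to a person: the “agent” is exhausted by the choice data handed to the function, which is precisely the disconnect between agency and personhood that NSE makes explicit.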
Why is that relevant at all for AI-related issues? In Homo Deus,[5] Harari claims that traditional ideologies (liberalism, communism) are fated to disappear and give way to “dataism,” i.e., the ideology of data. Dataism is the ideology of what I would call the “digital society.” In the digital society, a significant fraction of social interactions take place through digital means. They are mediated by AI-based technologies, in particular algorithms. Consequently, the production, aggregation, and distribution of information in the digital society depend on such algorithms: how they collect the information, how they transform it, to whom they transfer it, and what use is made of it.
What is striking is the complementarity between dataism as the core ideology of the digital society and the neo-Samuelsonian conception of economic agency. According to dataism, there is no person, only data tracking patterns of choice behavior. In the digital society, what is valuable is information about the mental states of economic agents: what they believe and what they want. Dennett and Ross are both clear that being an intentional system, while necessary, is not sufficient to count as a person. Both suggest that personhood is constituted by an ability to self-narrate one’s life, beyond making choices from which intentional states can be inferred. Dataism, however, is built on the postulate that whatever makes possible the transition from economic agency to personhood is irrelevant. In the digital society, your identity is reduced to your choices.
There is already ample evidence that AI-based technologies have a tremendous capacity both to collect and to use personal data based on choices. More worryingly, they have a clear ability to frame, if not manipulate, choices. Note however that beyond this concern (which connects well with Dennett’s and Harari’s fears), there is a more ontological lesson to be learned. If it is true that AI can potentially deceive and manipulate people by making them believe that they are interacting with persons, that seems to imply (in contrast with what we would like to believe about ourselves) that who we are is largely based on what we do. After all, current AI technologies are based only on the information we give them through our choices – acknowledging that what we say also results from choices. There surely is a gap between economic agents, as dataism conceives us, and persons, as we conceive ourselves. But this gap is not so big. The anthropology of the digital society surely contributes, in this sense, to the disenchantment of the inner world that is a characteristic of our era.
[1] Daniel Dennett, The Intentional Stance (MIT Press, 1989), p. 25.
[2] Ibid., p. 58.
[3] Don Ross, Economic Theory and Cognitive Science: Microexplanation (MIT Press, 2005); Don Ross, Philosophy of Economics (Palgrave Macmillan, 2014).
[4] For further developments, I dare to refer the reader to an article of mine: Cyril Hédoin, “Neo-Samuelsonian Welfare Economics: From Economic to Normative Agency,” Revue de philosophie économique 21, no. 1 (2020): 129–161.
[5] Yuval Noah Harari, Homo Deus: A Brief History of Tomorrow (Harvill Secker, 2016).