Very short summary: I discuss Cass Sunstein’s recent article on the “AI calculation debate.” I agree with Sunstein that an omniscient AI is impossible, but I nonetheless argue that a “society of AIs” with a division of cognitive labor would probably be better at tackling the knowledge problem than humans.
I will start this essay with a confession. One of my favorite TV series is not among the widely acclaimed and awarded productions that everyone talks about, but the lesser-known Person of Interest, starring QAnon-friendly actor Jim Caviezel. It centers on an artificial intelligence called "The Machine" that can predict crimes before they happen by analyzing surveillance data. Created for national security purposes, the AI is secretly accessed by its creator and a former CIA agent to prevent everyday threats to ordinary citizens. While the premise might sound like standard thriller fare, the series raises profound questions about artificial intelligence, human agency, and predictive power: questions that have become increasingly relevant as real-world AI systems grow more sophisticated. In particular, the series asks whether and how to act on the Machine's predictions, acknowledging that the AI can only act through the agency of human beings.
I was reminded of this series while reading a recent article by the legal scholar Cass Sunstein, “The AI Calculation Debate.”[1] Sunstein argues that an AI will never be able to predict events such as the result of a coin toss, who will win a presidential election in 10 years, or which song will be the next musical hit. We can safely assume that Sunstein would say the same of the extra-human capacity of The Machine to detect in advance the kind of threats that the main protagonists spend their time thwarting in the series. Sunstein contends that this impossibility has nothing to do with a hypothetical inherent indeterminacy of the world but is rather grounded in the inaccessibility of the information required to make accurate predictions. In this sense, the limitations of AI are very similar to those of the central planner who wants to compute an optimal allocation of resources in the absence of market prices.[2]
Looking at the argument in detail, this inaccessibility has several origins. First is the fact that a large part of the relevant information is private. In the same way that the allocation of resources depends on consumers’ preferences and firms’ production costs (information typically inaccessible to the planner), social events are the result of myriad bits of data that even an advanced AI will not have access to. Not everything we think or know can be inferred from our digitally tracked behavior.
Second, the social mechanisms that link what we want and know to our environment, and through which the latter shapes the former, are opaque and not fully understood. Even with a rich data set, our knowledge of social mechanisms remains partial and incomplete. An AI faces similar limitations.
Third, social dynamics are largely driven by informational and preference cascades. For instance, what music you choose to listen to or which restaurant you choose to go to is determined to a large extent by what those in your reference group listen to and where they eat. More fundamentally, whether a song will become a future hit depends on the initial dynamic, i.e., whether or not it happened to be listened to by a large enough number of people to “get started.”
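This path dependence is easy to reproduce in a toy model. Here is a minimal Python sketch (all parameters invented for illustration) of a cumulative-advantage process in the spirit of experimental work on artificial cultural markets: identical songs, listeners who choose in proportion to current popularity, and very different “hits” depending on nothing but early random luck.

```python
import random

def simulate_hits(n_songs=50, n_listeners=10_000, seed=None):
    """Cumulative-advantage ('rich get richer') model of hit formation:
    each listener picks a song with probability proportional to its
    current play count, so early random luck compounds over time."""
    rng = random.Random(seed)
    plays = [1] * n_songs  # identical songs, identical starting point
    for _ in range(n_listeners):
        pick = rng.choices(range(n_songs), weights=plays)[0]
        plays[pick] += 1
    return sorted(plays, reverse=True)

# Same fifty songs, different random histories: which song becomes
# the "hit" (and how big it gets) is path-dependent.
print(simulate_hits(seed=1)[:3])
print(simulate_hits(seed=2)[:3])
```

The point of the sketch is that the eventual hit is a property of the history, not of the song, which is precisely why no amount of data about the songs themselves would let an AI predict it.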
Finally, there is the well-known phenomenon of “social reflexivity,” i.e., the fact that individuals’ beliefs and expectations, possibly informed by public predictions, influence social outcomes. In some cases, beliefs and expectations are self-confirming (“self-fulfilling prophecies”), but in other cases the very prediction causes behavior that falsifies it. A public prediction, whether made by a human or an AI, becomes part of the relevant information for predicting social events, and therefore increases the complexity of the social system. How people form their beliefs and expectations, and whether a prediction will be public enough to significantly influence them, are very tricky questions that cannot be answered without detailed knowledge of social mechanisms and of individuals’ concrete situations.
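Both cases can be illustrated with a toy feedback model (the response functions below are pure inventions for illustration): a “bandwagon” response makes the published forecast converge on itself, while a “complacency” response makes every published forecast falsify itself.

```python
def response(prediction, kind):
    """Stylized behavioral reaction to a published forecast of,
    say, a candidate's vote share."""
    if kind == "self-fulfilling":
        # Bandwagon: actual support rises with predicted support.
        return 0.2 + 0.7 * prediction
    # Complacency: the more certain victory looks, the more
    # supporters stay home.
    return 0.9 - 1.0 * prediction

for kind in ("self-fulfilling", "self-defeating"):
    p, history = 0.5, []
    for _ in range(12):  # forecasters repeatedly update on observed behavior
        p = response(p, kind)
        history.append(round(p, 3))
    print(kind, history[-4:])
# The bandwagon case settles on a stable, self-confirming forecast;
# the complacency case oscillates forever: each forecast falsifies itself.
```

In the second case there is no forecast the predictor can publish that will survive its own publication, which is the reflexivity problem in its starkest form.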
These four elements combine to make social events impossible to predict perfectly, even if we assume that the world is deterministic. Perfect prediction would require a very large and fine-grained informational basis that even an AI cannot access. This is an important point, especially regarding the political role that an AI could play in assisting, if not replacing, human public decision-making (see my essay on political AI, for instance).
“Simultaneous Contrasts,” Sonia Delaunay (1913)
I think Sunstein makes a very good case for rejecting the possibility of an omniscient AI predicting social events, as The Machine does in Person of Interest. Still, I think the argument has some blind spots that make it incomplete. Let me address some of them. First, consider the analogy with the socialist calculation debate. I see at least two differences with the AI knowledge problem. In the socialist calculation debate, the role of the planner is not so much to “predict” the allocation of resources as to find the most efficient one. Prediction and efficiency coincide only under the assumption that an unplanned economic system would allocate resources efficiently. As I understand the Austrian argument against economic planning, it is precisely that the very notion of “allocative efficiency” in the Walrasian-Paretian sense is elusive. Market mechanisms are dynamic processes that have the property of adjusting the allocation to changing circumstances. By contrast, neoclassical economists like Oskar Lange used the Walrasian framework as an analytical tool, assuming that if we can compute the efficient allocation of resources with it, then the planner can simply implement this allocation without market prices. For Austrians, the superiority of the market system is not that it helps to “predict” the efficient allocation, but that it comparatively minimizes the expected costs of miscoordination.[3]
This leads to another, related aspect that diminishes the value of the analogy. In the socialist calculation debate, the comparative assessment of market and planning turned out in favor of the former. The superiority of the market lies precisely in the fact that it doesn’t attempt to solve the knowledge problem intentionally. The problem is unsolvable as a matter of principle. Market institutions don’t try to compute what cannot be computed; they provide the best approach to allocating resources given knowledge limitations. In the case of the AI calculation problem, even if Sunstein is right that perfect predictability cannot be achieved, an argument can still be made that AIs will largely surpass (and already are surpassing) the predictive abilities of humans and human-made institutions. Whether this will be economically or politically useful, or harmful, remains to be seen. Private companies are already using AI to predict and, another major difference with the market-vs-planning debate, to influence our behavior. There is already evidence that AI technology is interfering with the working of economic and political institutions and granting significant power to its owners and most capable users. Power, rather than prediction properly speaking, is the real issue at stake.
There is a second set of considerations unrelated to the calculation debate analogy that is relevant. Consider human-made (sets of) institutions like markets, democracy, or science. Their polycentric nature makes them relatively apt at dealing with the knowledge problem. The philosopher Michael Polanyi already made this point in the 1950s for markets and science.[4] These institutions tackle the knowledge problem by granting autonomy to individual and collective agents to solve local problems for which they have particular skills and knowledge. Polycentricity is not anarchy: agents compete and cooperate based on mutually accepted rules that regulate coordination problems.[5] As long as these coordination problems are appropriately mitigated, polycentric orders will be more effective at solving problems than monocentric orders because they are more able to explore the “search space” and more likely to allocate cognitive and material resources where they are most productive.
If humans can create polycentric orders (intentionally or not), there is no reason AIs could not. In a previous essay, I made the related point that we should expect a division of labor between “political AIs,” each specializing in a specific set of problems. A single omniscient AI is unlikely, if not impossible, for the reasons Sunstein discusses. However, in the same way that researchers organize into partially competing, partially cooperating communities focusing on different problems, we can imagine the emergence of a “society of AIs” with a more or less deep division of cognitive labor. The so-called Hong-Page theorem establishes that, under certain conditions, a population of mildly competent but cognitively diverse problem-solvers is better at problem-solving than a population of cognitively uniform experts.[6] But, surely, a population of cognitively diverse experts will outperform a similarly diverse population of mildly competent individuals.
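To give a flavor of the result, here is a minimal Python replication of the kind of setup Hong and Page use: agents are hill-climbing heuristics on a random landscape, an “expert” group is made of the individually best heuristics, and a “diverse” group is drawn at random. The landscape, heuristics, and group sizes are invented; whether the diverse group beats the experts in a given run depends on these parameters, which is why the theorem holds only under specific conditions.

```python
import random

rng = random.Random(0)
N = 2000                                      # size of a circular search space
landscape = [rng.random() for _ in range(N)]  # "solution quality" at each point

def climb(start, jumps):
    """Hill-climb on the ring: try each jump size, move whenever it improves."""
    pos, improved = start, True
    while improved:
        improved = False
        for j in jumps:
            if landscape[(pos + j) % N] > landscape[pos]:
                pos, improved = (pos + j) % N, True
    return pos

def group_solve(group, start):
    """Agents work in relay, each starting from the best point found so far,
    until no one in the group can improve it."""
    pos = start
    while True:
        new = pos
        for jumps in group:
            new = climb(new, jumps)
        if new == pos:
            return landscape[pos]
        pos = new

# A heuristic = three distinct jump sizes; ability = average solo performance.
heuristics = [tuple(rng.sample(range(1, 13), 3)) for _ in range(200)]
def ability(jumps):
    starts = range(0, N, 100)
    return sum(landscape[climb(s, jumps)] for s in starts) / len(starts)

ranked = sorted(heuristics, key=ability, reverse=True)
experts = ranked[:10]                  # the ten individually best heuristics
diverse = rng.sample(heuristics, 10)   # ten randomly drawn heuristics

for name, group in (("experts", experts), ("diverse", diverse)):
    starts = range(0, N, 40)
    score = sum(group_solve(group, s) for s in starts) / len(starts)
    print(name, round(score, 4))
```

The mechanism is visible in the code: experts selected on solo ability tend to get stuck at the same local optima, while diverse heuristics unstick each other, which is the sense in which a division of cognitive labor pays.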
To give a concrete example, consider the case of betting (or prediction) markets. Betting markets are a good illustration of the Hong-Page logic, with the difference that there is in this case an explicit mechanism (market prices) that signals to all participants the current aggregate or collective belief. Not only can AIs use the information produced by betting markets; we can also imagine betting markets in which the participants are AIs. Assuming that AIs have superior “cognitive” skills for gathering information and turning it into valuable knowledge compared to humans, we can suspect that an AI betting market would also surpass the human-equivalent institution.
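To make the idea concrete, here is a minimal sketch of a prediction market populated by AI traders. The market mechanism is Robin Hanson’s logarithmic market scoring rule, a real mechanism used by several prediction markets; the traders’ beliefs and trading behavior, by contrast, are invented for illustration.

```python
import math

class LMSRMarket:
    """Logarithmic market scoring rule (Hanson) for a binary event.
    The YES price can be read as the market's implied probability."""
    def __init__(self, b=50.0):
        self.b = b        # liquidity parameter
        self.q_yes = 0.0  # outstanding YES shares
        self.q_no = 0.0   # outstanding NO shares

    def cost(self):
        return self.b * math.log(
            math.exp(self.q_yes / self.b) + math.exp(self.q_no / self.b))

    def price_yes(self):
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def buy_yes(self, shares):  # negative shares = selling
        before = self.cost()
        self.q_yes += shares
        return self.cost() - before  # what the trader pays the market maker

# Hypothetical AI traders, each with a private probability estimate.
# Each trades a limited position toward its own belief, so the final
# price is a (path-dependent) aggregate of the individual estimates.
market = LMSRMarket(b=50.0)
beliefs = [0.62, 0.70, 0.55, 0.66, 0.68]
for _ in range(3):                  # a few rounds of trading
    for p in beliefs:
        for _ in range(10):         # limited trade size per round
            if abs(market.price_yes() - p) <= 0.005:
                break
            market.buy_yes(1.0 if market.price_yes() < p else -1.0)

print("market-implied probability:", round(market.price_yes(), 3))
```

Nothing in this mechanism cares whether the traders are humans or machines: better-informed participants simply move the price further, which is why substituting AI traders with superior information-gathering skills could, plausibly, sharpen the aggregate signal.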
Now, this may turn out to be incorrect. The cognitive diversity of AI societies may be limited because, for instance, they all learn in more or less the same way and are trained on similar data sets. If it is true that “diversity trumps ability” (as Hong and Page’s theorem is generally summarized), then AIs may not get the upper hand because they cannot divide cognitive labor as much as humans. This is highly speculative, to say the least. In light of our (limited) understanding of what we call “intelligence,” there is no clear reason why what applies to human intelligence should not apply to artificial intelligence.
We’ll probably never have an AI capable of perfectly predicting crime or other social events, as in fictions like Person of Interest. Intelligence, whether human or artificial, cannot solve computation problems that are unsolvable. That doesn’t mean, however, that a society of AIs cannot surpass human societies at the task of prediction.
[1] Cass R. Sunstein, “The AI Calculation Debate,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, December 13, 2024), https://doi.org/10.2139/ssrn.5054402.
[2] The well-known “socialist calculation debate” is the reference. One of the most significant contributions to this debate is F. A. Hayek, “The Use of Knowledge in Society,” The American Economic Review 35, no. 4 (1945): 519–30.
[3] Of course, we could say that this minimization is what makes the market “efficient.” However, that is not in the sense of Pareto-efficiency that is used in the Walrasian framework of competitive general equilibrium. The notion of equilibrium itself is disputed by Austrians.
[4] Michael Polanyi, The Logic of Liberty: Reflections and Rejoinders (Indianapolis: Liberty Fund, 1998 [1951]).
[5] A classic contemporary account of polycentricity is Paul D. Aligica and Vlad Tarko, “Polycentricity: From Polanyi to Ostrom, and Beyond,” Governance 25, no. 2 (2012): 237–62. See also W. Elliot Bulmer et al., Polycentric Governance and the Good Society: A Normative and Philosophical Investigation, ed. David Thunder and Pablo Paniagua (Blue Ridge Summit: Lexington Books, 2024).
[6] Lu Hong and Scott E. Page, “Groups of Diverse Problem Solvers Can Outperform Groups of High-Ability Problem Solvers,” Proceedings of the National Academy of Sciences 101, no. 46 (November 16, 2004): 16385–89.