Regular readers may know that I’ve been interested in epistocracy for quite some time now. Epistocracy is a political regime in which political power is allocated according to criteria of competence and knowledge. This is not a new idea, but it has attracted some attention over the last decade, at least in some academic circles. This is partly explained by the fact that more and more political scientists and philosophers tend to agree that the justification of a political regime, including democracy, depends on its “epistemic” properties, i.e., its ability to produce collective judgments that track the truth. At least prima facie, there are reasons to think that an epistocratic regime could outperform democracy on this criterion. I must say that my (moderate) enthusiasm for epistocracy has somewhat lessened over the last couple of years. This is not so much due to the many objections that have been made to it as to the political context, which is drifting from liberal toward authoritarian waters. The question of the epistemic properties of democracy, while still relevant, is not the most pressing one in the current context.
Still, even if we agree that epistocracy is not the most urgent topic now for political economists, political philosophers, and political scientists, we are in the meantime facing the beginning of a revolution that may completely change the way we think about political judgments and decision-making. Until now, it has been evident that politics (or the political) is human-made. This contrasts with economic activities, especially production, where machines are everywhere. Technological progress destroys jobs and creates new ones and, more generally, has considerably transformed the structure of economies over the last two centuries. The same can be said of other domains of human life, such as war. But this doesn’t seem to be true for politics.[1]
Sure, political activities do make use of technology. From voting machines to advanced computer-based analysis of voters’ opinions, political actors rely on technological devices to express their views, evaluate a campaign strategy, or influence public opinion. But until now, no political “job” or “function” has been displaced or radically transformed by new technologies. Voters, public officials and challengers, and political staff are all humans who, at most, use technology to assist them in their jobs.
All this could change with AI. We are probably not so far from the day when the most advanced models will be able to form political judgments and formulate political prescriptions. I expect that many readers will be skeptical. Are political judgments not about values and not just facts? Don’t they involve abilities and resources (emotions, personal experience, reflexivity) that seem to be beyond the reach of any silicon-based thinking entity? Yes and no. Already today, ChatGPT or Claude express judgments that are based on values. For instance, I’ve discovered that ChatGPT will routinely refuse to produce pictures portraying famous politicians, on the grounds that it is against its “content policy.” When pushed a bit, it says that it must ensure that “representations are respectful and within ethical boundaries.” This is where it starts to become interesting. Indeed, when you advance a moral argument to the effect that political figures who display a lack of respect for their fellows are not owed such respect themselves, since morality entails reciprocity, it acknowledges your argument but provides a detailed, though somewhat stereotypical, response.[2] There is no doubt that, in providing this response, ChatGPT was making value judgments.
Now, these judgments don’t come from nowhere. For the machine, they are just data. Asking ChatGPT what the highest mountain in the world is (a purely factual question) or whether the death penalty is permissible (a clearly normative question that involves a value judgment) is the same from the machine’s perspective. To answer, it aggregates and processes the available information following a complex procedure and produces an output based on this treatment. In both cases, the answer reflects months of training that allow the machine to discover patterns of responses for this and similar questions. A very interesting article in The Economist published a few days ago notes that, for “fact-based” scientific questions, even the most advanced models like OpenAI’s Deep Research tend to repeat the common view, even when it has been convincingly shown to be false, or at least highly debatable, by a few specialists. That’s probably because AIs are incapable of producing a genuinely autonomous judgment, and the way they are trained leads them to give more weight to popular views.
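To make this point concrete, here is a deliberately crude sketch of my own, not a description of how ChatGPT actually works: a toy “model” that answers any question, factual or normative alike, by returning the most frequent answer in its (entirely made-up) training corpus.

```python
from collections import Counter

# Toy training corpus of (question, answer) pairs.
# The questions, answers, and frequencies are all invented for illustration.
corpus = (
    [("highest mountain?", "Everest")] * 95
    + [("highest mountain?", "K2")] * 5
    + [("death penalty permissible?", "no")] * 60
    + [("death penalty permissible?", "yes")] * 40
)

def answer(question: str) -> str:
    """Return the most frequent answer seen for this question.

    The procedure is exactly the same whether the question is factual
    or normative: both are just patterns in the data.
    """
    counts = Counter(a for q, a in corpus if q == question)
    return counts.most_common(1)[0][0]

print(answer("highest mountain?"))           # Everest: the majority view
print(answer("death penalty permissible?"))  # no: also just the majority view
```

The factual question and the normative one go through the same pipeline; only the underlying frequencies differ.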
Interestingly, while this is a (probably temporary) weakness when factual questions with a true-or-false answer are at stake, it may be an advantage in the case of political judgments. Many people hold the following conjunction of views: (1) value judgments are purely subjective and cannot be assessed as true or false, and (2) democratic legitimacy is grounded in the expression of the general will, which can be approximated by respecting majoritarian judgments. The latter I call the “populist view of democracy.” Note that the populist view is especially appealing if you accept (1). After all, if truth is irrelevant to value judgments, then what justification is there for privileging some of them, if not the fact that they are held by a majority? And if we accept (1) and (2), then why not delegate our political judgments to those machines that are so good at aggregating what people think?
Even worse, rejecting (1) will not necessarily help us avoid this conclusion. There is now a significant academic industry on epistemic/deliberative democracy that attempts to show that the epistemic superiority of democracy over epistocracy lies in the fact that the former gathers a greater number and diversity of judgments, even if those judgments are, on average, less correct than in an epistocratic regime.[3] These epistemic democrats reject (1) insofar as they contend that many political judgments can be evaluated in terms of their truth value. But if AIs are already today relatively good at recovering dominant and true factual judgments, then they should not be so bad at recovering the dominant political judgments held by a large and diversified population. If the epistemic superiority of democracy really comes from this, then machines should be very good democrats!
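The epistemic democrats’ numbers-and-diversity argument can be illustrated with a back-of-the-envelope Condorcet-style calculation. The competence figures below are my own toy numbers, not anything drawn from the works cited in note [3]: under the usual assumption of independent voters, a large group of modestly competent voters outperforms a small group of experts.

```python
from math import comb

def p_majority_correct(n: int, p: float) -> float:
    """Probability that a majority of n independent voters, each correct
    with probability p, gets a binary question right (n odd, so no ties)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Eleven experts, each correct 70% of the time...
print(f"{p_majority_correct(11, 0.70):.4f}")    # ~0.9218
# ...versus 1001 ordinary voters, each correct only 55% of the time.
print(f"{p_majority_correct(1001, 0.55):.4f}")  # ~0.9993
```

On these cartoonish assumptions, sheer numbers beat individual competence, which is exactly the property an aggregating machine would inherit.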
As I’ve noted, it seems however that on very specialized topics, we may want to avoid trusting an AI whose judgment is based on misguided majoritarian human opinions. At least for these kinds of judgments, we may prefer an “epistocratic AI” to a democratic one. Epistocratic AIs would be trained on a smaller set of data to reflect not the dominant view, but the view held by those specialists who are the most likely to be correct. How to identify those “specialists who are the most likely to be correct” is of course a thorny issue, one that affects all epistocratic proposals. Thorny doesn’t mean unsolvable, however. Maybe a democratic AI could help to identify those specialists, and then delegate judgment- and decision-making to epistocratic AIs when relevant. The bottom line is that, in the same way that we don’t necessarily want all human collective decision-making to proceed through a purely democratic procedure (nobody wants to submit their medical treatment to a popular vote), we may want to use different kinds of political AIs to form political judgments and make political decisions. If you believe in a strict dichotomy between factual and value judgments, you may want to delegate the former to epistocratic AIs and the latter to democratic AIs. If not, then, as I suggest, maybe a democratic AI can draw a more subtle partition.
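What such a division of labor might look like, mechanically, is easy to sketch. Everything in the snippet below is hypothetical: the two “models” are stand-ins, the routing criterion is invented, and nothing corresponds to an existing system.

```python
# A hypothetical two-tier "political AI": a democratic model answers by
# default, and specialist questions are delegated to an epistocratic model
# trained only on expert-vetted sources. All names and the routing rule
# are invented for illustration.

SPECIALIST_TOPICS = {"monetary policy", "vaccine safety", "nuclear power"}

def democratic_model(question: str) -> str:
    return f"majority view on {question!r}"      # stand-in for a population-trained model

def epistocratic_model(question: str) -> str:
    return f"expert consensus on {question!r}"   # stand-in for an expert-trained model

def is_specialist_question(question: str) -> bool:
    # Invented criterion; in the scheme suggested above, the democratic
    # model itself might perform this classification.
    return any(topic in question.lower() for topic in SPECIALIST_TOPICS)

def political_ai(question: str) -> str:
    """Route the question to the appropriate model."""
    if is_specialist_question(question):
        return epistocratic_model(question)
    return democratic_model(question)

print(political_ai("Should the central bank raise rates? (monetary policy)"))
print(political_ai("Is the death penalty permissible?"))
```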
If you believe that epistocracy is superior to democracy, then of course you should probably hold a favorable stance toward the prospect of delegating all political judgments and decisions to an epistocratic AI. Epistocrats firmly reject (1) and disagree with all views that ground political legitimacy in the popular will; they therefore also reject (2). Since this is also the case for many epistemic democrats, what singles out an epistocrat is that she thinks that a judgment made by a small collective of experts or competent persons is more likely to be correct than a judgment made by a large collective of relatively incompetent but diverse persons. A concern with this epistocratic belief is that, plausibly, the judgments of “experts” also come with biases that may make them relatively bad at making political judgments.[4] Highly educated persons may, for instance, be unable to properly assess and understand the difficulties faced by the poorest members of minorities. Would an epistocratic AI have the same biases? Possibly, but we may also imagine that the statistical power of the machine could help it to identify and neutralize the “confounding factors” that corrupt otherwise correct technical judgments with social misconceptions. So, epistocrats may feel fairly confident in their judgment.
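Here is a cartoon, with made-up numbers, of what “neutralizing a confounding factor” could mean in practice: if expert assessments are skewed by the experts’ social background, simple post-stratification, i.e., reweighting each background group to its share in the whole population, removes the compositional bias. This is my own illustration, not a claim about how any actual model works.

```python
# Made-up data: expert assessments of a policy's harm (0-10 scale) are
# confounded by social background. Assessors from poor households rate
# the harm at 8, affluent assessors at 2, and the expert pool is 90%
# affluent while the population is 50/50.
experts = [("affluent", 2)] * 90 + [("poor", 8)] * 10

naive = sum(h for _, h in experts) / len(experts)
print(f"naive expert average:        {naive:.1f}")  # 2.6, dominated by affluent voices

# Post-stratification: average within each background group, then weight
# each group by its share in the whole population.
population_shares = {"affluent": 0.5, "poor": 0.5}
by_group: dict[str, list[int]] = {}
for background, harm in experts:
    by_group.setdefault(background, []).append(harm)

adjusted = sum(population_shares[g] * sum(hs) / len(hs)
               for g, hs in by_group.items())
print(f"population-weighted average: {adjusted:.1f}")  # 5.0
```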
Would an epistocrat be willing to fully delegate political decision-making to a machine that displays superior epistemic abilities? What is the difference between delegating our political power to machines that are epistemically superior to us and delegating it to humans who are better than us at making political decisions? For sure, many epistocrats like to think (even if they don’t say it out loud) that they would be among the happy few who enjoy more political power under a human-driven epistocracy. If this is their main motivation for rejecting the epistocratic AI, then their judgment is mostly self-serving and should be seriously discounted. A non-self-serving objection could be of the following sort. In a human-driven epistocracy, we-as-humans remain in charge, even though only part of the population is politically empowered. Ideally, everybody in an epistocracy should have the capacity to increase their political power by making a minimal effort to increase their competencies. A human-driven epistocracy is a cooperative venture where people are encouraged to allocate resources to activities (education, deliberation, …) that will improve political decision-making and benefit everybody. In contrast, an AI-driven epistocracy would largely lose this cooperative dimension. Humans would lose part of their agency by becoming politically irresponsible.
This response, insofar as it is one that the epistocrat would endorse (it is mine, at least), says a lot about the conditions of political legitimacy. For once, to explain this, I will substitute a small table for a lot of words:

| Type of view       | (1) Value judgments are purely subjective | (2) Legitimacy rests on the popular will | Acceptable political AI |
|--------------------|-------------------------------------------|------------------------------------------|-------------------------|
| Populist           | Accepts                                   | Accepts                                  | Democratic AI           |
| Epistemic democrat | Rejects                                   | Rejects                                  | Democratic AI           |
| Epistocrat         | Rejects                                   | Rejects                                  | Epistocratic AI         |
Let me skip the justification of the labels. Each row corresponds to a possibility I’ve considered in the text. The last column indicates which kind of political AI a particular type would accept, assuming that it’s OK to have a political AI at all. Populists and epistemic democrats agree that a democratic AI would be best, but not for the same reasons. As I’ve noted, an epistocrat could accept an epistocratic AI. Now, and this is the key finding, all types could plausibly reject the idea of having a political AI for the very same reason, namely that even though human judgments are presumably reflected in the AI’s political decisions, individuals are no longer politically responsible. Or, in other words, even though they remain free to express their judgments, their political freedom has been seriously compromised. Their views matter, but no more than a drop in an ocean of data. On this, everybody, epistocrats as well as democrats, could agree.
Many objections to epistocracy implicitly point out that an epistocratic regime would have precisely the same implication as a political AI.[5] As I have noted above, epistocrats may have an answer to that, but it is a convoluted one, to say the least. On the other hand, these objections also largely apply to contemporary representative democracy, and that goes a long way toward explaining much of the discontent which, ultimate irony, is feeding the authoritarian wave in the world. Contemplating the possibility of political AIs, and why it is problematic for many of us, thus provides an original and, I think, insightful look at the sources of political legitimacy and its crisis.
[1] Of course, that depends on how you characterize politics and the political. On a Schmittian conception, the political is ultimately forged by the possibility of ultimate conflicts, i.e., wars. While some fundamental principles of war discussed by Sun Tzu, Clausewitz, or Kissinger are still valid today, technological progress has surely changed the conception of war and, from a Schmittian perspective, of politics.
[2] For those interested, I can send screenshots of the conversation. I must say it has changed my view of the potential of AI, all the more since it was with the basic, freely accessible version of ChatGPT.
[3] See, for instance, Robert E. Goodin and Kai Spiekermann, An Epistemic Theory of Democracy (Oxford: Oxford University Press, 2018); Hélène Landemore, Open Democracy: Reinventing Popular Rule for the Twenty-First Century (Princeton: Princeton University Press, 2022).
[4] This is known as the “demographic objection” to epistocracy. For a recent discussion, see Sean Ingham and David Wiens, “Demographic Objections to Epistocracy: A Generalization,” Philosophy and Public Affairs 49, no. 4 (2021): 323–49. I have discussed this article here.
[5] Sean Donahue, “AI Rule and a Fundamental Objection to Epistocracy,” AI & Society, January 29, 2025.