AI and bureaucracies (yesterday's intelligent machines)
Assistant Professor Bernardo Zacka is a political theorist whose research focuses on the normative challenges that arise in the course of public policy implementation. He is particularly interested in understanding how the organizational environment in which public officials are situated affects their capacity to operate as sound and balanced moral agents. He is the author of When the State Meets the Street: Public Service and Moral Agency (Harvard University Press, 2017).
Q: How can political science inform our thinking about the societal implications, benefits, and risks of AI?
To someone who studies bureaucracy, the anxieties surrounding artificial intelligence have an eerily familiar ring. So too does the excitement. For much of the 20th century, bureaucracies were thought to be intelligent machines, with all the positive and negative connotations the term carries.
As we set out to examine the ethical and political implications of artificial intelligence (AI), the comparison with bureaucracy is instructive. While AI’s full reach still looms on the horizon, bureaucracy is the devil we know. There are at least two lessons to draw from the history of our engagement with it: that it is worth studying our anxieties — whether or not they are realistic — and that in doing so we should be mindful not to write off human agency too quickly.
At the turn of the 20th century, bureaucracies embodied the promise of automated decision-making, as AI does now. Like machines, bureaucracies were meant to function according to clear rules. This enabled them to be more accurate, more reliable, and at times more efficient than traditional forms of organization.
Bureaucracies carried, moreover, a moral promise: that of freeing us from the potential arbitrariness of human judgment. Bureaucracies could be biased, but they were at least noise-free: Presented with the same case twice, they were designed to reach the same decision. What is more, they functioned according to principles that could be clearly stated, and thus possibly contested and revised. They were, in this sense, transparent.
But that of course is not the whole story. A hundred years later, bureaucracy has become a term of insult. Without the tempering influence of human judgment, the automatism of rules can be blind, dashing any hope of flexibility.
Bureaucracy, moreover, raises a specter more menacing than the arbitrary rule of somebody: the arbitrary rule of nobody. To whom should one direct grievances? To the clerks? But their only crime is to follow rules. To those who came up with the rules? Yet, like programmers designing complex algorithms, they stand at such a remove from the point of application that they may not be able to foresee the consequences of their choices. The problem is not so much that we cannot hold anyone accountable — it is always possible to punish someone. The problem is that accountability is decoupled from actual responsibility.
And that is not all. If bureaucratic rules are to yield sensible guidance in a wide range of cases, they must become complex and intricate — so complex, however, as to make a farce of the ideal of transparency. As designers of autonomous vehicles are now discovering, the more intricate machines are, the harder it is to understand how they work, and the more arbitrary their behavior may seem or perhaps indeed be (for how could one tell?).
Now, of course, the similarities between the intelligent machines of yesterday and today only go so far. Modern AI systems are able to perform two tasks that bureaucracies notoriously struggled to do: They learn and they adapt.
Bureaucracies owe whatever intelligence they have to the procedures they are instructed to follow. Their intelligence is in this sense extrinsic, and brittle, for it could be rendered obsolete by even a small change in the surrounding environment. Today’s intelligent machines, on the other hand, can discover by themselves how to best attain prespecified goals, and can revise their approach in light of changing circumstances. This, in addition to the availability of vast troves of data, is why they can outperform their creators — a prospect both thrilling and alarming.
Yet despite these differences, the comparison between bureaucracy and artificial intelligence can help us chart promising directions for future inquiry. This is so especially in the humanities and social sciences, since for decades bureaucracy absorbed the attention of political scientists, sociologists, and anthropologists, and the imagination of writers, artists, and filmmakers. With the benefit of hindsight, what lessons can we learn from this accumulated body of work?
The first lesson, I think, is that it is worth studying our anxieties, even if the prospects we find most unnerving, such as the singularity, seem distant or unlikely — indeed even if they could not materialize. As an object of concern, bureaucracy acquired a life quite independent from how bureaucratic organizations actually functioned. It became a screen onto which the anxieties of the age were projected.
The same could be said of artificial intelligence. Dystopian visions are a valuable source of self-knowledge: They help us understand the changing faces of alienation, domination, powerlessness, and exploitation. These forms of oppression, it bears emphasizing, could owe more to the workings of our economy and culture than to technology. Bureaucracy and artificial intelligence are the object of our anxieties, not necessarily their cause.
Second, the study of bureaucracy stands as a reminder not to dismiss human agency too rapidly. For all the fears it evoked about depersonalized rule, bureaucracy never eliminated human judgment. The rule of nobody was always at least in part the rule of somebody.
The same might be true for AI. This means that we need to investigate how individual and organizational actors mediate the adoption of new technologies, and how they are in turn transformed by them. This calls for empirical social science. From it, we may emerge with a richer understanding of the moral and political dilemmas that AI occasions.
Whenever societies undergo deep transformations, envisioning a future that is both hopeful and inclusive is a task that requires moral imagination, the cultivation of empathy, and the activation of moral solidarity. It is a task that also requires vigilance against false promises, which advance the interests of some in the name of all, and against false necessities like technological determinism, which make it seem as if we have no choice when we actually do.
Have some communities and organizations faced up to these challenges more lucidly than others? What is it that accounts for their success? And what does success even mean? These questions too — not just whether to turn a car right or left in the event of a collision — will be fertile grounds for ethical reflection in the years to come.
Read the story from the MIT School of Humanities, Arts, and Social Sciences.