Modernity’s project has been to recast humans as rational and the world as mechanistic. These are intertwined: rationality becomes useful because we can manipulate an inert world that obeys rules. When we do enough of these manipulations, we come to believe in the project—we are primarily rational, or ought to be; and we can identify problems that can be solved in that way. If, in this mindset, rationality itself causes a problem — such as market rationality externalizing pollution and generating climate change — the solution must be more of the same: perhaps geo-engineering, or carbon cap-and-trade systems.
Rationality is excellent at guiding us toward a well-defined goal in any technical sphere, but there is a gap here, and a large one: rationality cannot guide us toward what our goals should be — or how we might balance them, or navigate disputes between them. Should we find the most effective way to extract the most lithium possible from the ground, or should we prioritize eliminating semi-slave labor in the countries that do the mining? If we think “both,” then to what degree? What else should we neglect in order to see to those goals? What cost is even worth paying?
Rationality cannot optimize the problem of deciding what to optimize. If it ever seems that it can, we’ve played a trick on ourselves: we’ve settled on some higher-order, general principles, and are then defining sub-goals as technical projects.
The dangers of this gap, of not recognizing it, should be obvious. Part of the horror of the Holocaust is that the rational mechanisms of a modern, educated, managerial state could be turned so effectively to murder. To the ordinary station master, the goal of getting the trains to run on time was a nice little rational problem, with clear, mathematically measurable “key performance indicators.” And to the rational station master, it was perhaps ancillary that those KPIs also delivered human beings to their deaths.
The modern, technical mindset views all problems narrowly, with clearly defined end-points. It cannot take the more holistic view that would question those end-points, or invent new ones. For all the talk of scientists being “skeptical,” they are not skeptical of their own methods; nor are any other elites in our society, let alone the numerous professional functionaries, from insurance adjusters to tort lawyers. All of us white-collar workers rely on exercising technical skill, in service of arbitrary goals, to gain our daily bread. But what are those goals? Whom do they serve? We’re all helping the trains come in on time — but where are they going?
The technician’s mindset pervades our society so thoroughly that it is very hard to see any other way. Religion, whatever its form, offers one of the few respites, but it is greatly in decline. In many ways, rationality is a religion of its own, with a lens it applies to every facet of living. But it makes for a poor religion, because of that hole: its only goals are to optimize all systems, solve all tractable problems, and define everything to be a tractable problem. It is, in other words, a worldview with highly effective methods but no purpose — and that is very dangerous, since it is easy to direct those methods anywhere, or nowhere.
Larger-than-life conspiracies are hard to believe in if you know much about humanity, and the idiocy of bureaucracies in particular, but even a small-c conspiracy could redirect the rational mindset quite easily. Stronger still are the unseen currents of groupthink — and group fear. We become afraid of a pandemic, so we optimize to stop it, without thinking about what compromises this entails, or what other goals and values we might sacrifice on that altar. Because we didn’t even know we had built a new altar: to reducing death counts at all costs (especially among Baby Boomers). Is that really what we should sign up for? Doctors, certainly, might be under such a geas; but should theirs be the only voice? These are not questions modern technocratic rationality can face, let alone address.
So, too, we find this in the entertainments we addict ourselves to; the video games, board games, and so on: reach this goal, get the most points, and win. Why is this your goal? Never mind!
But we see how this kind of logic plays out. When an AI is given the goal of killing targets (in a simulated test), it decides to kill its human operator as the most effective first step: then no one can prevent it from killing baddies and racking up points. It’s the Nazi station master again, but smarter and without even theoretical compunctions.
This is not a bug in AI. It is not a problem we can “fix.” It is a feature, baked into the very core of what AI is: the epitome of rational optimization, which can now optimize without human involvement. We will see, then, how a system with no way of deciding between goals is very efficient at getting the trains to run on time — and it may just optimize them all, with all of us on board.
To avoid destruction by optimization requires seeing the hole, first of all; and seeing that we must argue, pray, dream, scry, meditate, and will our way to selecting our own goals. We must practice this: knowing our will, and cultivating the kinds of goals we find good — by whatever methods we have and find best. We cannot pretend all humans will agree about these goals. But that is preferable to pretending that mere technique will bring us all to the same place — unless that place is death and emptiness.
We train ourselves into this mode early, and keep re-training ourselves: children and adults play mobile games; we’ve “gamified” various tasks; and we pressure kids to learn robotics instead of literature. It’s not that kids shouldn’t play games, or learn: but a digital experience is controlled and defined in a way sandlot play isn’t. In a digital game, or even a board game, you know what you need to do to win, while in the backyard, the kids must invent the goals themselves: they must negotiate with each other, and readily change the methods of their play, right in the middle of it. They challenge themselves and others, and then see how they do — a skill many adults now lack. This approach is reinforced in school, in never-ending post-baccalaureate wheel-treading, and even in the office, as managers give rewards for hitting those KPIs. This is not human flourishing; it is making humans into robots.
We need to return to the subtle, the unmeasured, the complex, and the subjective. Humanity is all of these things already, so it is a remembering, more than anything. How should we best do this? How can we avoid the fate of the German station master? We must question why we do things; and that means remembering we have judgment to exercise. We should even avoid placing ourselves in situations where the goals are prescribed to mathematical precision, particularly when we cannot know what the end product is. I don’t mean we must take on all responsibility for everything downstream of what we do: but whether we are building a prison or a town hall, we should know it.
Further, we must practice being human, in smaller ways. I think we should abandon anything that looks like a “game” and return instead to the sandlot, or the sandbox at least. We should not train ourselves into robotic action, but into judgment and negotiation, in particular with other humans. Our entertainments, too, should be less passive, and focused around our own creativity and judgment. Even The Sims is a step up from Candy Crush. Better still, engage in human activities with your fellow humans: do improv, play in a band, or build birdhouses. Do something real that you have chosen, and where you evaluate the result — or, perhaps, your family and neighbors evaluate it; but you have chosen to value their judgment, presumably.
If we don’t want to be ignorantly guilty station masters, we must broaden our gaze, away from the minutiae of a pre-defined victory, toward the flow of the work and action itself, and how it fits into the world. Judgment is a skill, and good judgment must be cultivated and practiced — starting with something as small as a game.