Failure modes of determinist goggles

[Content note: more free will / responsibility stuff, because apparently I never get sick of it.  Contains some qualifications on points I made last time which hopefully won’t be taken as defenses of what I criticized before.]

Now it’s the other side’s turn.

My last post was a diatribe of sorts against the societal values that I’m afraid a free-will-leaning outlook on life will lead to (see “Political ideology and perception of free will” for an explanation of what I mean by “free-will-leaning” and “determinism-leaning”).  But I don’t consider an unchecked determinism-leaning outlook to be a good alternative.  In my opinion, it leads to a situation which might be equally dangerous in its own way, and which is the focus of today’s post.

I. What do determinist goggles do, exactly?

Parallel to their counterpart discussed in the last post, “determinist goggles”* are a metaphor I made up to describe a certain way of viewing the world — this time, involving a tendency to see conscious actions as determined by external factors.  While a human who makes a decision may still be nominally considered as somehow a responsible actor by someone wearing determinist goggles, all credit or blame will tend to be focused away from them and towards circumstances outside of their control.

According to the determinist-goggler, human society is composed of individuals whose capacities for accomplishing things are in very large part dictated by their genetic conditions, their upbringing, temporary conditions that they’re subject to (e.g. sickness, treatment by others in their lives), and large-scale societal forces which buffet them to and fro as they struggle along the winding road of life.  Freedom on some level exists but mainly belongs to those for whom such factors are less unfairly restrictive; maybe they should be held at least somewhat accountable for their occasionally destructive choices.  The rest of the population should be treated with sympathy for having a difficult road to follow; their mistakes can mostly be explained in terms of unfortunate factors they have no choice over, and those who do manage to triumph in spite of their circumstances should be applauded.  However, most successes and failures are brought about by good and bad circumstances respectively, and neither the winners nor the losers should receive judgment that they don’t really earn.

In the determinist-goggles-colored world, most of us have a rather hard time succeeding at various things through no fault of our own; yet, we are constantly getting blamed for our failures by others who refuse to see everything that’s tying us down.  The refusal of others to recognize the struggles we have had to go through is easily explained by the fact that they are more fortunate and have probably never had to face those particular struggles (either that or they have experienced our struggles first-hand but are too self-centered to be able to empathize when they apply to other people).  How convenient for them, what with their advantage of never having to actually understand anything about our handicaps, that they can so easily dismiss our difficulties in order to give themselves credit for their relative success.

There’s one observation I want to get out of the way first.  It is surely the case that across cultures and throughout history, a number of ideologies or key components of ideologies have been inspired by the determinism-leaning mentality; Marxism comes to mind, for instance.  During my lifetime, a newer kind of determinist-goggles-ism has appeared to strengthen considerably and has materialized into an ideology which focuses on combating social inequalities brought about by privileges on various axes, nowadays often referred to as “Social Justice”.  This is nothing more or less than an independently living and breathing manifestation of the view from determinist goggles which has been fleshed out into a full-blown social and political belief system.

However, today I don’t intend to home in on directly discussing the flaws in today’s Social Justice movement and in how its objectives are argued.  That is a target that’s already getting beaten to death in this part of the internet in particular and seems to be part of a wider culture war online (and in the real world) in general.  I’d prefer to hold my focus on the purely determinist-goggles-colored view and the difficulties that arise from it, rather than getting bogged down in commenting on some vast body of cultural rhetoric which is being used to treat a wide array of concrete social ills.  I can’t promise that I’ve entirely succeeded at this, but I have tried to address the determinist-flavored mentality at the individual level as I did for its opposite, and I’ll leave it to the reader to connect the dots between the arguments I make and their applications to the current culture-wide discourse.

*It has been suggested to me that I replace the words “libertarian” and “determinist” in my “goggles” terminology with the expressions “high-agency” and “low-agency”.  This strikes me as most likely an improvement, but just for continuity’s sake, I’d rather make this post more consistent with the last one and save the change for next time.

II. Limitations on respect for limitations

As with the polar opposite pair of goggles which I treated in the last post, someone who puts their faith in determinist goggles will suffer from considering only one point of view while ignoring any and all hints of the other.  This is again the most direct and obvious (even tautological) fault to find with wearing one pair of goggles of any type and never taking them off.  The committed determinist-goggler will fail to allow for the possibility that they or the person they are sympathizing with might change how they act, react, or feel about things that are happening to them.  This is problematic already, but it lends itself almost inevitably to further fallacies.

To understand the trap that one is vulnerable to falling into, I think we have to go back to the philosophical definition of determinism, even as I continue to stress that in practice one’s choice of goggles probably isn’t all that correlated to one’s choice of metaphysical beliefs. The view from the determinist goggles is still rooted in abstract determinism.  I’m going to say we might as well assume hard determinism rather than soft determinism here, because we’re talking about a model which renders virtually null the power to make free choices in any meaningful sense.

What is hard determinism?  When you get down to it, it’s the belief that everything, in particular every human action, is completely determined by prior events and therefore cannot happen differently (for some intuitive and meaningful definition of “cannot”).  So nobody — neither you and your friends, nor the antagonists in your narrative — can actually help what they’re doing.

The main criticism of this comes from the apparent implication that nobody can be held morally accountable for anything.  This is a problem because it seems to contradict the very foundations of most systems of ethics, but also because in practice it utterly and profoundly goes against the way we human beings actually process what is happening to us.  When we like something we see in the world, we naturally want to praise those whose actions have put it that way; when we are upset with something we see in the world, we instead want to yell at those people and hold them accountable for fixing the problem (“those people” may occasionally include ourselves, when we recognize our potential to change the world for the better).  Without being able to do this, we are stuck in a rather nihilistic reality where we essentially have no real purpose, lacking the capacity to directly enact change or to shame someone else into enacting change.  Nobody actually lives this way, regardless of the conclusions they may arrive at by doing philosophy.

Now go back to the determinist goggles, which don’t exactly make one a hard determinist but have an effect which is roughly similar.  In principle, the goggles should cause the wearer to go easy not only on themselves for their failures but also on their oppressors for theirs.  After all, everyone (not just ourselves or the people we’re sympathizing with at the moment) acts according to what circumstances dictate, right?

But in practice, nobody knows how to function according to this assumption.  So instead, determinist-gogglers tend to follow a slightly adjusted premise that some circumstances (conveniently, the ones they and their friends find themselves under) dictate negative behaviors while other circumstances (ones which apply to their adversaries), well, they’re not really a valid excuse for anything.

One can see this play out in local settings where attempts to enforce a determinist-goggles-ism which applies equally to all parties lead to an unsustainable system of social rules.  I’ve certainly witnessed such strife firsthand.  One person in a social unit explains that they can’t stand Behavior Y, that unfortunately they have an “emotional need” for Y not to be done in their general direction, so it’s imperative to adhere to a rule of not doing Y.  That’s all well and good until someone else who interacts with them expresses their own “emotional need” to do Y, stating that it’s impossible for them to cope without doing Y.  Maybe everything can be worked out serenely within a group where at most one person wears determinist goggles.  But I imagine most groups probably have more than one determinist-goggler in them, and then it quickly becomes infeasible to come up with social contracts agreeable to everyone without first fighting a war over whose “needs” are actually valid.

I’d like to point out another scenario in which this conundrum shows up, perhaps slightly in disguise.  Determinist-goggles-ism tends to imply that one should always lend a helping hand to the less fortunate, because their misfortune is probably not their own fault.  Yet curiously, I’ve noticed that a lot of my acquaintances, despite generally appearing to have determinist goggles on, in certain contexts manage to avoid doing this.  For instance, I almost never witness anyone give money to homeless people on the street, not even the least judgmental and most empathetic people I know.  (Lest I sound holier-than-thou here, I admit to walking past beggars on a daily basis and rarely stopping to give them my spare change; I do feel bad about this despite the excuses I come up with.)  As to how they justify this, the most likely explanation seems to me that they assure themselves that their own difficulties (specifically financial ones) effectively prevent them from “being able to” lend a hand to people in a blatantly more desperate situation, or at least that one should look towards others of greater means to perform such acts of generosity.  And I’m not claiming this rationalization is entirely wrong.  But it looks to me like a lot of determinist-gogglers have succeeded in maneuvering towards a model in which their difficulties somehow kinda-sorta trump others’ (obviously worse) difficulties.

In fact, this even applies to the personal story I told last time about choosing to take a particular city bus without a ticket during one year, which I used as an example of something morally questionable that I tried to justify to myself through libertarian-free-will-ist thinking.  After I wrote about that, I got to remembering that I also had a second, quite different, rationale for this cheating behavior.  Recent months had not been particularly kind to me, and I felt that I’d been having a rough time.  I was, for the first time in my life, trying to establish myself in a new country, and it hadn’t exactly been smooth sailing.  Not only had I found myself majorly inconvenienced by countless hours of wasted time trying to navigate an unfamiliar system in a new language; I had recently wound up paying hundreds of Euros to the government unnecessarily.  Sure, I was still doing fine financially despite that, but I told myself that after all I’d been through, I deserved to get a small break just this once, even if that required a bit of cheating.  Now at first glance, under determinist-goggles-ism (certainly hard determinism) the concept of “desert” is questionable if not utterly meaningless.  But I think I had fallen prey to the temptation to privilege my own unfortunate circumstances over any difficulties I might create for others, in a similar fashion to what I described above in the begging example.

To sum up the point of this section, any attempt at modeling people’s behavior in terms of circumstances and “needs” in a pure and even-handed way is practically guaranteed to break down and devolve into a contest over which kinds of excuses are most valid.

III. Empowerment failures

In my view, the most crucial failure mode of the determinist goggles can be summarized in one word: disempowerment.

[Image: from South Park’s season 9 episode “Bloody Mary” (warning: the episode’s content takes its title far too literally)]


Everybody wants to feel empowered (at least in the front of their minds, most of the time).  That goes for wearers of determinist goggles as well.  So determinist-gogglers tend to go through all kinds of mental gymnastics (e.g. semantic manipulation like replacing the term “victim” with “survivor”) in order to emphasize what little control people have over their situations at the same time as granting them a sense of empowerment.  I contend that this is no more than a futile effort to have one’s cake and eat it too.  If one is going to subscribe to a belief system where someone’s action or state is determined by external forces, then an obvious immediate consequence is that someone has less power over their situation.  There’s no getting around it.

I don’t think there’s any need for me to delve too much into why powerlessness is bad.  Clearly it leads to hopelessness in bettering one’s situation, as well as a lack of credit for (or ability to be inspired by) those who have done so (because the improvement in their situation was “just luck”).  Also — and I think this point is underacknowledged, but I won’t dwell on it today — it opens one up to deliberate bullying, oftentimes by extreme free-will-gogglers.

I sometimes ponder how wimpy so many of us would seem to the multitudes of everyone who existed through the entirety of human history up to very recently (not to mention citizens of many developing countries today).  This may seem like a bit of a deviation from the main thrust above, but bear with me for a few moments and I hope the connection will become clear.  Throughout most of history, almost nobody was as privileged or financially comfortable as most citizens of developed countries are today.  Throughout most of history, many men had to do backbreaking labor for little pay; many women died in childbirth and most of those who didn’t still went through the process in incredible pain; disease was rampant; there was no “health insurance” as we know it and whatever healthcare was available was barbaric and terrifying by our standards and often did nothing to cure the recipient’s ailments anyway; although humankind was free of the hazards of modern technology, both institutions and human interactions were a lot less regulated and most people probably had far more reason to feel unsafe in their day-to-day lives; it was probably pretty common to remain for one’s lifetime within a radius of some 20 miles; and so on.  My presently-existing self for one would probably be pretty traumatized by some of what the average person had to go through even only a few centuries ago.

Today, most of us can’t imagine having to live the way our ancestors did.  Because of advances in science, medicine, law, and so on, we are able to enjoy an enormously higher quality of life, and accordingly, our tolerance for many things has become considerably weaker.  On the whole, this is something to be celebrated.  The kind of progress humankind has made and continues to make in preventing so much suffering and raising our standards and expectations is absolutely the noblest goal we as a species can strive for.  But every major step forward comes at some cost, and undoubtedly one such cost has been a collective decrease in hardiness and fortitude against what our still-chaotic world might throw at us.

Therefore, wherever we’re engaging in the fight for progress, as important as it is to highlight and foster a culture of sensitivity for the plights of those whose lives we want to improve, we should do so with an eye towards also empowering those individuals by allowing the possibility that they may yet pull themselves through adversity and come out stronger.  Ideally nobody should have to make the most out of unfair circumstances, but “ought” and “is” are two different concepts, and lack of fairness doesn’t imply lack of agency: the fact that someone is not to blame for their situation doesn’t necessarily mean that there isn’t something they could (and therefore should) do to better it, or at least to learn how to cope with it as long as it remains to be resolved.

This consideration helps me to understand where Richard Dawkins was coming from in his infamous “Dear Muslima” “open letter”, which touched off the internet war known as Elevatorgate — if you were anywhere near the atheist web around 2011 you may have heard of it.  Not that this puts me anywhere close to agreeing with it, mind you: Dr. Dawkins’ nastily sarcastic overreaction to what seems to me a mild and mostly reasonable request made by a feminist atheist, along with his subsequent defenses of it, was unjustified on multiple grounds.  (I’m not going to go easy on a man who holds himself up as a paragon of levelheadedness and rationality at every turn.)  In particular, Dawkins failed to acknowledge the validity of complaining about one particular problem given much worse problems in other places or during other historical periods.  He seemed not to recognize the fact that working hard to improve adverse conditions everywhere — not just at the place and time where they are worst — is nothing less than the face of progress itself and a part of what it means to have high standards.

Yet at the same time, I think I do understand a bit of the frustration behind Dr. Dawkins’ snarky words.  When progress has brought us to a point that our difficulties are tiny in comparison to the problems that were commonplace in recent memory or are rampant in other parts of the world, we should make at least some effort to respect that when we talk about them, to have perspective, to instill a conviction in our audience that although something is unacceptable, we will still be able to deal with it.  After all, look at what so many others have managed to endure.  We’re allowed and even encouraged to be upset about this thing, but talking about it like it’s The Worst Thing In The World when it’s clearly not might start to do more harm than good.  If that was Dawkins’ frustration, then I think he took it out wrongly on the particular activist he was attacking, but I do recognize where it may have been coming from.

On a more personal note, I have been extremely fortunate in the three decades of my life so far in just how little I have had to experience physical pain.  Nearly everyone I know of my age has had to go through a very unpleasant accident or a difficult recovery from some medical procedure, to say nothing of the agony that just about everyone suffered at some time or another in the days before modern medicine.  But I can’t help but worry that the statistical likelihood of facing severe pain is bound to catch up with me one day and that due to my lack of experience I might not handle it well.  I remember reading the story of a guy on Reddit whose traumatic accident led to the most painful scene imaginable (which I haven’t the slightest desire to elaborate on) who made a side-comment about how sometimes it was nice to be able to remind himself that no painful event for the rest of his life would be anywhere near as agonizing as what he’d already gone through.  In a weird way, I almost envy this kind of security.

Now imagine a mildly futuristic world in which some humanistic organization is pushing for the creation and distribution of bodily implants which immediately ease even the most minor twinges of pain.  Disregarding possible risks that come from the weakening of our bodies’ natural alarm systems, this would seem like a marvelous step forward from the humanistic point of view: what could be a more worthwhile goal than to lessen suffering?  Now as far as the proponents’ rhetoric goes, the most persuasive flavor will probably strongly emphasize how bad even the most minor physical pain is.  In fact, such conviction will probably be sincere on the part of the most passionate members.  I have a feeling that in this hypothetical world, after enough exposure to such rhetoric, I’d likely grow even more intolerant of pain than I already am, perhaps to the point that even stubbing my toe would seem like a minor traumatic event.  And I’m afraid that in the event of the pain-reduction-implants initiative falling through, or even if it passes but there are flaws in the implementation (as there always are), I would wind up in a rather weaker position when it comes to dealing with pain than real-me is today.

In my hypothetical sci-fi scenario above, of course it’s still a great idea to go ahead with the pain-reduction technology, and maybe even the most extreme “stubbing one’s toe is unbearable!” rhetoric is worth the potential downside I suggested.  But it should be done in a calculated way that doesn’t disregard that potential downside, and I have a feeling that determinist goggles are pretty likely to blind the wearer to this consideration.

This discussion may appear to have strayed far from my initial characterization of the determinist goggles, but the theoretical connection and empirical correlation are clear as day to me: the more one focuses on external circumstances, the more one relies on external changes to maximize one’s state of well-being.  And while such an attitude has probably been the crucial force behind much of human progress which has succeeded in improving well-being all over the world, it can also have the effect of disempowerment on the individual level.  And it might be good to keep that in mind.

IV. Lack of might makes right

I’ve seen another common abuse of the determinist goggles which alarms me even more, though.  To motivate my perception of it, I’ll start by recalling something I read online a very long time ago — I must have been in high school still — on some right-wing site.  I don’t actually remember where this was or the exact wording, but it was a phrase about liberals that went something like

left-wing principles, where people are judged more according to their grievances than according to their deeds.

This was long before I started framing everything in terms of deterministic vs. free-will-libertarian positions, and I was (and still am) fairly liberal myself, but this snide throwaway line stuck with me and gave me serious food for thought.  I don’t think it’s a fair branding of liberalism in principle, but over the years I have come to suspect that the American Left is being guided by a giant pair of determinist goggles, and the “judge people according to their grievances” mentality does seem like an easy trap to slip into if one gets overly dependent on them.

The pure, idealized version of a moderately determinist-leaning viewpoint is the assumption that there are hidden external forces behind people’s behavior, so we should refrain from giving them all the credit for doing well and go easy on them when they do badly.  To the extent that it makes sense in the first place to assign praise and blame to people for their actions, possible causes outside of their control should always be entered into the equation.  In particular, if we see someone doing poorly at life, we should cut them some slack and lean towards believing that they really are putting in a laudable amount of effort despite the fact that on the outside they look like they’re doing poorly.  It only takes a short leap of logic to go from this to the belief that someone is laudable because on the outside they look like they’re doing poorly.  (Or conversely, the belief that someone is automatically deplorable because on the outside they look like they’re doing really well for themselves.)

And I do see evidence in many people’s rhetoric of this logical leap taking place subconsciously.  This worries me particularly in the case of someone openly exposing their own tendencies towards unproductive behavior or general difficulties in coping (e.g. severe anger) in a boastful tone, as though it’s somehow a virtuous trait in itself rather than an understandable reaction to something tough they’re going through.  In some extreme cases, this declaration seems to be performative rather than truthful, a thinly-veiled form of bragging and vying for status.

I hesitate to make this point, because I’m afraid it could be easy to misunderstand and get taken very badly, but I see a crucial difference between recognizing that someone is virtuous in spite of their weakened state and deciding that they are virtuous because of it.  It’s the difference between being able to receive empathy and understanding for one’s failings and being compelled to cling onto a righteous indifference towards overcoming them.

I can’t think of any popular quote which annoys me more profoundly than this one.  I blame it on determinist goggles.

Determinist goggles may seem like a much more enlightened and progressive alternative to their libertarian-free-will counterparts, instilling empathy and compassion in those who glimpse through them.  The world through determinist goggles appears at first glance to be one where everyone is just doing whatever they can to muddle through and should be understood rather than morally judged for the situations they wind up in.  But after viewing reality this way for long enough, one learns to lose hope for actual remedies to the variety of problems being faced by humankind.  It is a world where the only practical way to live is to assume that some people do have genuine agency and that the rest are powerless to do anything other than wring their hands and sit around waiting for those free agents to act.  Taken to an extreme, this reality will eventually devolve into a society where the weak are assumed to hold innate moral superiority over the strong even while the very categories of “weak” and “strong” can only be defined relative to each member’s point of view, and such a society cannot hope to function.


Failure modes of libertarian* goggles

*metaphysical (not political)

[Content note: more musings on cause and effect, free will, and moral responsibility, touching on the rationale behind sins ranging from general ableism and classism to greedy cookie-grabbing.]

Now it’s time for me to delve deeper into what I have called before the “libertarian-free-will mindset” and the “deterministic mindset”.  In this post, I explained what I mean by these competing concepts and emphasized that I see them as a major component of almost every disagreement and debate.  At the time I was using the clumsy terms “free-will-leaning” and “determinism-leaning” in referring to them, but more recently I’ve come up with “libertarian goggles” (with the understanding that we are using “libertarian” in the metaphysical rather than the political sense, though political libertarians are probably gazing through these a lot of the time too) and “determinist goggles”.  I think this is less awkward… or maybe it’s more awkward, but for now I’m going to stick with them anyway.  The word “goggles” in each case is intended to stress that I’m gesturing towards a way that people tend to perceive things: again, all of this has very little to do with anyone’s abstractly-held philosophical position (which they may not have developed anyway), but the assumptions they tend to make when sizing up a situation involving human behavior.  Someone can study philosophy and decide that the arguments for incompatibilist free will are the most valid, while in their day-to-day lives they tend to excuse people’s behavior as the function of their background and environment; someone else can behave likewise without having ever given a single thought to the academic philosophical issue of free will in the first place.

Today I want to explore the flaws in the view through libertarian goggles and exactly how it may affect a person’s judgment or the rationalizations they construct for it (I intend to get to my issues with determinist goggles in the next post).  This may sound like the premise for another dispassionate essay written in a cerebral, sitting-back-in-my-armchair-musing voice.  Actually, this essay feels to me like more of a personal rant than it might seem on the surface from my tone.  I’ve known people who rarely seem to take their libertarian goggles off, and they frustrate me.  In several cases, I feel that I’ve been directly victimized by them via this mode of thinking.  In my experience, they are prone to several logical fallacies which might not necessarily follow from the free-will libertarian premise but which I am going to speculate are at least strongly correlated with a tendency to view the world through nondeterministic-free-will-ish assumptions.  I don’t promise any kind of clinching argument to show that this is the case; I’m mainly just going to describe my observations of such a correlation.

Before I begin, I should of course make the obvious disclaimer that I realize humans are complex and it would be foolish to imply that they fit neatly into the categories of “libertarian gogglers” and “determinist gogglers”.  But I’ve known a number of people who appear pretty far in one direction or the other even while this doesn’t apply to anywhere near everyone, just as I’ve known a number of people whose political views align clearly with the right or the left while at the same time a lot of other people are centrist.

I. What do libertarian goggles do, exactly?

The “libertarian goggles” I speak of are a metaphor meant to describe a certain way of viewing the world.  Libertarian goggles cause the wearer to see most conscious actions as completely free choices which bring with them moral responsibility.

The world viewed through libertarian goggles looks like a bunch of people choosing to do bad things and then placing the blame for their behavior on the folks around them, or their genetics (in particular, sickness or disability), or Society, or The Government, or This Bad Economy, instead of on themselves for not trying harder.  Nobody is bound by the conditions they find themselves in; therefore, to pretend that they are amounts to excuse-making.  It’s all a matter of attitude, plain and simple.  And the allure of being able to explain their own failures away by putting them on other people or on supposedly extenuating circumstances is strong enough that it blinds them to all of the agency they really have.  This is the main reason why so many others (who lack libertarian goggles and the wisdom they bring) wind up learning to explain so many things with a deterministic model.  These people would be so much better off, so much more capable both in their own lives and in not damaging the lives of others, if only they would see this, because understanding one’s own freedom empowers one to overcome any difficulty.

People failing at things don’t make up the whole world, of course.  There are plenty of people who do really well, and they manage this mainly through pushing themselves and not falling into the rut of blaming the rest of the world for their difficulties.  Where they start out and just how much effort they have to exert to pull themselves up is essentially irrelevant when it comes to bestowing praise: someone from a less fortunate background has just as much ability to move upwards as anyone else, and the fact that they start out in a lower position only means more reason that they ought to work hard to improve their situation.


It follows from what we see through the libertarian goggles that doling out pep talks rather than sympathy to those who are struggling seems like a reasonable way to go.  And if someone asks for our help when they clearly could be pushing harder to better themselves instead of trying to leech off of other people, then naturally we will be reluctant to help them if in doing so we slow ourselves down on our own journey upwards (which tends to be a risk of sacrificing for someone else).  In fact, helping them would actually be doing them a disservice, more often than not, because it would instill a reliance on other people for support rather than on their own willpower.  Surely it’s much more virtuous to stand back and let those who are in trouble hit bottom so that they have no choice but to learn how to bounce back from adversity on their own steam.  Tough love, and all that.

Needless to say, I’ve seen that last idea held up as the rationale for any and all forms of bullying.  A society which places faith in libertarian goggles can be a pretty harsh place.  But might this framework still lead to a coherent model, however harsh, of the world in which we live?

Well, it’s clear enough to me that the model is fundamentally flawed and does not lead to a full picture of the territory.  The sort-of-tautological criticism that can be made against the libertarian-leaning view, the one which stands out immediately in the context of other things I’ve written here, is that the libertarian goggles act as a shield against incoming gadflies whispering deterministic heresy.  That guy over there seems to be acting irresponsibly for no apparent rational motive and claims that he’s trying to stop acting that way but has trouble controlling it for some reason.  His behavior doesn’t match the symptoms of any of the three or four disorders that we acknowledge exist.  Without libertarian goggles we might be receptive to at least the vague suggestion that there’s something wrong with the guy — something deep in his wiring and therefore not entirely under his free control — which doesn’t have a name yet (or that his behavior might be influenced by his upbringing or by how society treats one or more demographic categories that he fits into, etc., all aspects of his life we’re not in a position to understand right now).  The small handful of very well-known conditions which we do recognize weren’t always known; they had to be discovered through the study of medicine.  There are undoubtedly many other such conditions which still haven’t been discovered.  But once we’re wearing the libertarian goggles, we’re no longer open to considering such a notion; we’re sticking with our very conservative List of Known Conditions and it doesn’t occur to us to seek any explanation from outside of it.  We will instead throw up our hands and invoke a sort of mysterious Agency of the Gaps to explain away his behavior… which is just another way of saying that we’ll conclude that it’s his own damn fault, because somehow he chooses to have a weaker will or less moral fiber than we have.

This is obviously not the right approach for investigating empirical reality, for a number of reasons including those outlined in my post about “gadfly speculations”.  (The analogous criticism is equally valid, of course, for the determinist-goggles mindset, but I’m waiting until another day to pick on that.)  However, my issue with the libertarian-goggles approach extends beyond its inherently fallacious nature to the fact that in my experience it brings with it a bundle of other wrong-headed attitudes, which I attempt to describe below.

II. The strong link in the chain

I wanted to separate my criticisms of what I consider to be harmful ways of thinking that come from wearing the libertarian goggles for too long under several headings, but I couldn’t help realizing that they’re all just different ways of looking at the same core idea.

I remember once hearing a little parable that had been passed among the philosophy students at my university, which went like this.  In some fictional philosophy department (at least, obviously not ours), it is arranged for there to be a large plate of cookies laid out each day as free food for the poor undergrads and grad students in the department.  (Note that there are always many more students than cookies, but I suppose the point is that there should always be some food resource during the day, however small, for the most exhausted students to dip into to keep their sugar levels up.)  But many of the faculty members have a habit of grabbing cookies for themselves, even though they are obviously meant for the students.  One day there are ten cookies on the plate, and over the course of the morning ten professors each take a cookie.  In this way, the cookies disappear one by one, so that by the late morning they are all gone and none of the philosophy majors who wanted a little sugar rush have had the opportunity to get any.  The students are understandably annoyed and looking for someone to blame.  The thing is, it wouldn’t have bothered anyone if only two or three of the cookies had disappeared.  It wouldn’t even have been such a problem for nine out of the ten cookies to be gone: the state of greater-than-zero cookie material being available for snacking emergencies was all that mattered.  So it’s tempting to blame the professor who took the last cookie.  But somehow it doesn’t seem fair to put all the blame on her.  The philosophy students spend so much of their energy arguing about it that soon they are all in dire need of a pick-me-up, which is a problem because meanwhile there are still no cookies and they’re no closer to agreeing on who should be held accountable.

I hypothesize (with all the weight of my authority as an armchair social psychologist) that the philosophy majors who are libertarian-gogglers are distinctly more likely than the rest to put the default blame on the professor who took the last cookie.  I call this “privileging the final link in the causal chain”.  In our cookie example, the causal tree looks pretty simple, with all of the choices leading up to the final result being independent.  It can be drawn as a directed graph looking something like this (time flows left to right; the yellow node represents the “no cookie material left on the plate” state).


In our real world, most causal trees look more like this, with many choices influencing other choices.


At each node (except the yellow one on the right, which represents a final state), some agent made a decision, for better or for worse.  In my opinion there’s actually a pretty straightforward way to quantify the rightness or wrongness of each of these decisions.  But that doesn’t tell us anything about which decision-maker ultimately deserves praise or blame for the final state, because as I’ve argued before, agency does not imply moral responsibility.  It follows that it is dubious to assume that the main responsibility lies with, say, one of the agents at the nodes directly pointing to the rightmost one above, just because those decisions happened after the other ones which influenced them.
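Since the diagrams themselves don’t reproduce well here, the two causal structures can be sketched in code.  This is a minimal illustration with made-up node names (none of them come from the original diagrams): each graph maps a decision node to the nodes it influences, `FINAL` stands in for the terminal state (the yellow node), and the “final links” are just the nodes pointing directly at it.

```python
# Two toy causal graphs, represented as adjacency lists (node -> nodes it influences).
# Node names are invented for illustration; "FINAL" stands for the terminal state.

# Cookie parable: ten independent decisions, each pointing straight at the outcome.
cookie_graph = {f"prof_{i}": ["FINAL"] for i in range(1, 11)}

# A more realistic causal tree: earlier choices influence later ones.
tangled_graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["FINAL"],
    "E": ["FINAL"],
}

def final_links(graph):
    """Return the nodes whose decisions point directly at the final state --
    the ones the final-link fallacy is tempted to single out for blame."""
    return sorted(node for node, edges in graph.items() if "FINAL" in edges)

print(final_links(cookie_graph))   # all ten professors
print(final_links(tangled_graph))  # only D and E, even though A, B, C shaped them
```

The point the sketch makes concrete: “privileging the final link” means attending only to what `final_links` returns, even though in the tangled graph the upstream nodes helped determine what D and E could choose at all.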

But I find myself in disagreements with certain people — and from my observations those people are the kind that are wearing libertarian goggles a lot of the time — who seem to assume without questioning that the responsibility lies entirely with whoever made the most recent decision (indicated by red nodes in the pictures above).  And I’ve found it hard to convince them otherwise.

This particular fallacy takes on many variants.  One of them is putting primary responsibility on someone (agent X) for choosing one out of the only two actions A and B allowed to them by someone else (agent Y).  Agent Y may have foisted upon X a choice between two equally unethical actions A and B, and it seems clear enough to me that Y deserves somewhat more of the blame than X does for whichever one of them X chooses.  But I remember once discussing such a philosophical thought experiment with a group of my colleagues over lunch, where Y is somehow forcing X to choose between killing two people, and I was surprised at how many of them thought it obvious that X was primarily guilty for whichever murder results, having been the one to actually pull the trigger.

Another variant, fairly common in political discourse I think, is the notion that it’s fair to judge agents equally for performing the same action with the same result, even if these choices were influenced by very different sets of circumstances.  This disagreement can take place even between people who agree that both agents should be judged negatively but differ on what should be the appropriate magnitude of punishment for each of them.  A typical dialogue between a Libertarian-Goggler (abbreviated LG) and a Devil’s Advocate (abbreviated DA, whom I consider the hero in this scenario for playing the gadfly role) might look something like this.

LG: I propose that the law against committing A should be applied equally to citizens X and Y, since they both did it to the same negative effect.

DA: But is that fair?

LG: What do you mean, “fair”?

DA: Citizen X was essentially manipulated into doing A, while citizen Y made a very conscious choice to do it.  They both need to be penalized, but surely it’s unfair to treat X as harshly as we treat Y, given X’s extenuating circumstances.

LG: It’s perfectly fair.  Both chose to do A when they could have made a different decision, do you deny that?

DA: No, but consider the fact that citizen X hadn’t been properly exposed to all the facts surrounding the ramifications of doing A, was in a more desperate situation that made A harder to avoid, and had less beneficial alternative choices at his disposal than citizen Y did.

LG: Could Mr. X have made himself less ignorant if he’d tried hard enough to learn the relevant information?

DA: Well, technically yes…

LG: And could Mr. X still have overcome temptation and done the right thing, which would mean not doing A, even if the other choices were less than ideal?

DA: Yes…

LG: Then both should be held equally responsible for what they did.

DA: Come off it!  You know the situations aren’t equal, so it’s unfair!

LG: Were their choice-making capabilities not equal?  Either you have free choice or you don’t… [And so on.]

My claim about where libertarian-gogglers tend to stand in such debates raises the question of why the libertarian goggles should influence one’s thinking in this way.  Before writing all this out, I imagined introducing my answer in a big reveal that might sound clever.  But actually, since I opted for abstract descriptions without real examples, the way I’ve written it already renders the connection pretty obvious.

Libertarian goggles impede one’s ability to recognize the legitimacy of circumstantial factors in choice-making.  Since they highlight freedom in choice-making abilities, external influences (genetics, upbringing, physical/mental conditions, surrounding societal forces, etc.) fade into the background.  When someone decides to do something, an observer wearing libertarian goggles sees the event of that choice clearly without considering the backdrop of events leading up to it.  Such events include other nodes in the causal chain, or restrictions placed on the choice-maker, or aspects of the life of the choice-maker which have led them up to the point of making said choice.  The scenarios I laid out above are all variants of this kind of blind spot.

Now libertarian goggles don’t render the wearer completely unable to perceive the presence of extenuating circumstances surrounding a decision.  Libertarian-gogglers (or at least most of them) aren’t so delusional that they entirely refuse to acknowledge that certain conditions or prior events might make things easier or more difficult for the people they’re observing.  What they refuse to acknowledge is the notion that such factors affect in any way the essential freeness, and therefore the attached moral responsibility, associated with the choices themselves.  In other words, even if they view the factors as factors in some physical or psychological sense, they don’t fully recognize their influence in a metaphysical sense with ethical import.  I assume there’s some limit to the degree of distortion provided by even the strongest libertarian goggles out there — for instance, hopefully the wearer would recognize that the classic scenario of having a gun held to one’s head is a factor that sharply reduces autonomy and the weight of moral responsibility.  But I often suspect that the distortion can be severe enough to stretch the wearer’s perceptions to the edge of what “basic social common sense” allows.

The upshot is that the libertarian-goggler will survey an event that resulted from human choices and zoom in on exactly one of the choices that led to it.  Which of course is exactly what I was arguing against above, via some sort of “argument by symmetry” showing that there are no grounds for arbitrarily privileging one node over the others.

III. Reaching the logical conclusion

I’ve already alluded to the obvious potential for certain malicious types of bullying that can arise from abusing the guidelines outlined above for navigating life via the libertarian-free-will route (which I would describe as a very narrow path with “tough love” on one side and “overt disgust for those doing worse than you do” on the other).  Let me now mention a related nasty behavior that just naturally appears at the end of the path the libertarian-goggler travels on.  It is often colloquially referred to as “victim-blaming”.

Victim-blaming occurs when a crime is committed against someone and that someone, the victim of the crime, is met with moral judgment for having failed to act more wisely in order to prevent that crime.  I claim that this unfair reaction is essentially the natural logical conclusion of libertarian-free-will-colored thinking.  It is precisely what can result from a habit of wrongly isolating particular agents in a complicated situation as the bearers of primary responsibility.

In my experience, true, explicit victim-blaming (as in actually placing the blame on the victim rather than merely pointing out that they would have been better off doing something differently) is relatively rare on the individual level, although many other types of rhetoric are easily mistaken for it.  However, I’ve definitely seen plenty of the more abstract variant of blaming a governing system for failing to sufficiently enforce its rules against behavior that ends up hurting others.  On a small scale, this can take the form of cheating and finding various minor shortcuts that go against the rules, because of the unlikelihood of getting caught and/or the light punishments for those who are, and then defending one’s behavior on the grounds that “if they cared that much about us not cheating they would do a better job of enforcing the rules”.  Here someone is ignoring all the potential constraints on the rule-enforcement system which is running things, and ultimately putting the blame on the people running that system, rather than on themselves, if things go badly (that is, if they cheat and this harms someone else).  And yes, a few years ago I began to notice that the people I knew who seemed to explicitly or implicitly endorse rationalizations of this kind were the ones whose views on current issues seemed the most influenced by libertarian goggles; in fact, I think that very observation is what started me down the train of thought that has led to my writing this post today.  And I’m willing to bet that there’s a strong correlation between full-on victim-blaming and libertarian goggles as well.

But lest I sound like I’m preaching from a high horse, I can definitely point to a blatant example of this behavior in myself.  During my first year in the city where I currently reside, I was very dependent on public transportation.  A lot of my daily movements were within official city limits, but my home was just outside of them.  Very annoyingly, the monthly transportation pass costs twice as much when it includes the zone surrounding the main city.  Each month I paid to recharge my transportation card for within city limits only, with the intention of finding ways to avoid using the one last bus that went outside the limits to take me to my apartment.  But eventually I succumbed to laziness and developed the habit of taking that bus anyway, despite the fact that I had no valid ticket for it.

The bus system here is run in such a way that nobody ever checks for tickets except for controllers who only board buses very occasionally.  The fine for being caught without a ticket is some 30 Euros.  My decision to pay for only the cheaper monthly pass and ride dirty for that one route proved to be a rational one from the point of view of personal finances: visits by controllers are so rare that I was only caught once during that year, and the 30-Euro fine I paid was far less than the money I saved by not paying for extra-urban access.  But more interesting was the way I constantly tried to justify this choice of breaking the rules on an ethical (rather than rational-self-interest) level.  I kept pointing to the fact that obviously this city does a very lackadaisical job of enforcing payment for bus rides.  And somehow I convinced myself (and even still sort of halfway believe) that if I was doing harm by getting illicit rides on that bus, it was really the city’s fault for failing to deter me from it — this was often dressed up as “the fact that they clearly don’t try that hard to enforce it means it must not matter very much to many people”.  Never mind the fact that the city legislators probably have their hands tied in ways I can’t imagine, or that it’s those who are lower on the rungs of the economic ladder rather than the rule-makers that were likely to suffer indirectly from my actions, etc.  Sometimes selective blame-assigning can be so tempting.
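The “rational from the point of view of personal finances” claim reduces to a simple expected-value comparison.  The post only gives two figures (the two-zone pass costs twice the city-only pass, and the fine is about 30 Euros), so the pass price and the once-a-year inspection rate below are assumptions for illustration, not the actual numbers:

```python
# Expected yearly cost of fare-dodging vs. paying for the extra zone.
# All figures except the 30-Euro fine are assumptions for illustration.

city_pass = 35.0           # assumed monthly price of the city-only pass
full_pass = 2 * city_pass  # the two-zone pass costs twice as much
fine = 30.0                # fine for riding without a valid ticket
expected_inspections_per_year = 1  # roughly: caught once in a year of riding

honest_yearly = 12 * full_pass
dodging_yearly = 12 * city_pass + expected_inspections_per_year * fine

print(f"paying for both zones:  {honest_yearly:.0f} EUR/year")
print(f"dodging the outer zone: {dodging_yearly:.0f} EUR/year")
# Under these assumed numbers, dodging saves 12 * 35 - 30 = 390 EUR a year,
# which is the narrow-self-interest sense in which the choice was "rational".
```

As long as the expected number of inspections stays small, the fine would have to be an order of magnitude larger to change the calculation, which is the whole problem with lax enforcement.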

If you have read up to this point, you may be objecting that what I’m describing in this section doesn’t really follow as a natural conclusion of what I detailed previously with a node directly pointing to the rightmost one being singled out as the ultimate cause of some effect.  Surely when a crime is committed, the final decision involved in the causal chain is that of the criminal or rule-breaker, so the libertarian-goggler would blame the criminal rather than the victim?  My answer is that I’m not contending that the libertarian-leaning mentality dictates that one should necessarily point the finger at the one whose decision came temporally last.  My thesis is that the libertarian-leaning mentality disregards the difference between agency and ultimate responsibility and singles out one decision as carrying the moral weight, based on whichever one is most convenient to single out.  In a lot of contexts, this is one of the decisions which comes temporally last with an arrow pointing directly to the effect, because absent other deciding features of the situation this may seem like the canonical choice.  In other contexts of personal involvement, the most convenient agent on whom to load the blame is one which is definitely not you or your friends, and preferably one which is remote and faceless (e.g. “the city” for not disciplining bus-riders effectively).

The point is that the libertarian-goggler, wanting to focus on someone’s freedom of choice not bound by other forces, finds themselves in the tricky position of selecting one node in the diagram as representing freedom in the truest sense, because the model of absolute freedom begins to break down when considering more than one agent in the same picture.  And sometimes this means they have to be a little bit arbitrary in their selection process.

The world through libertarian goggles can appear an exciting and beautiful place, where everyone has indefinite unfettered potential and is empowered to overcome any seeming obstacles in their way to achieve what they desire provided they desire it badly enough.  But one consequence of denying that difficulties can be legitimate hindrances is that we all feel entitled to withhold help from the less successful lest they drag us down instead of pulling themselves up as we want to keep doing for ourselves.  In the end we face the danger of finding ourselves in a world where blame is bestowed entirely on those who fail for their failures regardless of unfortunate circumstances; credit is doled out only to those who succeed regardless of luck and privilege; and those who climbed their way to the top through whatever means they could get away with feel justified in looking down upon those whose heads they stepped on.  In short, it is a world which legitimizes the domination of the weak by the strong.

Disagreements are like onions II

(or “Why we shouldn’t put all our arguments in one rhetorical basket”)

[Content note: Pulse shooting, homophobia, Islamophobia, gun issues, fundamentalist Christianity, and, sadly, more Donald Trump. A bit on the disjointed side, and perhaps best read as three separate sub-essays.]

As the title suggests, this is a direct follow-up to my last post, “Disagreements are like onions”.

I. Separation, period

…What was I saying? Oh yes, I think all of this can be generalized a little further. In the other post, I suggested that we should make a priority of separating the object level from the meta level, or different “degrees of meta”, when analyzing a given disagreement. One obvious challenge that could be raised against this thesis is whether for any two “layers” of an argument one is really more “meta” than the other in some obvious way. For instance, in the example I gave in the other post about separating the possibility of Trump not being the rightful president from the possibility that his executive orders were wrong, it doesn’t seem that clear whether “legitimacy of election” is the meta-level issue while “morality/legality of executive action” is the object-level issue or vice versa. And it doesn’t really matter — the arguments I was giving were for separating the two, without necessarily applying any particular asymmetric treatment to them.

So the moral of the story as I see it is even a little simpler: just try not to conflate different layers. And now, “layers” is not meant to imply hierarchy with respect to any axis. Considering this in terms of object/meta level distinctions was useful, because it seemed to me that an awful lot of this conflation was between layers that differed in levels of meta-ness, but this isn’t always so.

When we strip away all the talk of object and meta levels and just talk about “levels”, the primary reason for the fallacy becomes even more apparent. A person who is defending a position with many levels is often tempted to throw all of their eggs into the basket of their favorite one, which is often the one which feels easiest to defend.

Although this behavior seems extremely common and I’m sure I’ve been guilty of it plenty of times without realizing it, some of the most blatant (and kind of hilarious) examples of it which come most easily to my mind involve fundamentalist Christian apologetics of the most extreme and crackpotty kind. For instance, I remember hearing an open-air preacher on a university campus who was carrying on, in his slow, booming voice, by giving a rendition of all of what he considered to be the principal sinful behaviors of us students. It quickly became clear that homosexuality held a position of special status among this horde of evil lifestyle choices, because apparently every single other one was a special case of it. “Extramarital relations is what happens when you give in to your baser passions, so that is a form of homosexuality. Same with pot-smoking, so that is a form of homosexuality. Social Darwinism is also a form of homosexuality. Being a Democrat is a form of homosexuality. Mormonism is a form of homosexuality…” And so on and so on. Now the issue of same-sex attraction isn’t in any obvious way more or less “meta” than questions surrounding these other supposed evils. But it was certainly a hot-button issue at the time as well as evidently this preacher’s specialty, so it was convenient for him to frame absolutely every idea he wanted to attack in terms of homosexuality.

(On a purely comical note, I’m reminded of a Canadian friend who facetiously explained to me that where he grew up, not only do bears represent the epitome of danger, but every threatening thing up there is in fact, at least in some indirect way, a form of bear-ness. As far as I’m concerned, this assertion is really no less ridiculous than that of the evangelical preacher above.)

And while extreme fundamentalist Christians are on my mind, does anyone remember the young-earth creationist Kent “Dr. Dino” Hovind?  His “doctoral dissertation” is available in pdf format online and is another quintessential example of bundling all of one’s ideological opposition into one narrow category.  Apparently, every non-Christian idea that Hovind disliked was yet another face of the “religion of evolution”, throughout all 6,000 years of our world’s existence, from Cain and Abel to the ancient Greek philosophers to Galileo to the origins of Communism.

But atheists have been known to engage in this kind of thing as well.  Around 2012, there was an attempt made by part of the atheist community to splinter off into a group called Atheism Plus, composed of atheists who wanted to stand up for certain specific humanitarian values outside of the very basic brand of humanism that generally goes hand in hand with a positive lack of religious belief.  Although this new movement was advertised by luminaries such as Dr. Richard Carrier as being based simply upon the sentiment that as a group they should stand up against bad behavior on the part of members of the mainstream atheist community, it seemed clear pretty early on that the intent was to bind atheism together with the beliefs of the then-emerging online social justice movement. I can’t help but feel that by attempting to make such object-level beliefs an inherent part of what it meant to be an atheist, the advocates of Atheism Plus were muddying the distinction between the core of a skeptical belief system and adherence to the particular social and political ideas that they liked. I considered the attitude that an atheist committed to social justice shouldn’t be willing to march for secularist causes alongside other atheists who didn’t see exactly eye-to-eye with them on all social issues to be divisive, and I feared that it would weaken both the battle for freedom from religion and the battle for social justice. And it seemed clear that a lot of this arose from a desire (conscious or subconscious) to sneak in a lot of specific tricky, controversial views under the banner of general skepticism, which is a much more easily defensible value at least in a room of committed nonbelievers.

One Atheism-Plus-related essay that stuck in my mind was this manifesto (long, but altogether quite an insightful and relevant read for this discussion, although ultimately I disagree with it).  Here is a particular excerpt whose essence stayed with me years later:

I saw in skepticism a great deal of potential, too. It was a community that had until recently been very much based in the “hard” sciences and in addressing the more objectively falsifiable beliefs that people held, like cryptids, UFOs, alt-med and paranormal phenomena. But I saw absolutely no reason that skepticism couldn’t be compatible with the social justice issues I also cared about, like feminism. I saw in feminism a lot of repeated mistakes made due to a lack of critical inquiry and self-reflection, and rejection of the value of science and that kind of critical thought, and I also believed that a whole lot of what feminism, and other social justice movements, were trying to address was very similar kinds of irrational beliefs and assumptions, stemming from similar human needs and limitations as beliefs in the paranormal. Misogyny, sexism, cissexism, gender binarism, racism, able-ism… these things didn’t seem meaningfully different to me from pseudo-science, new age, woo, religious faith, occultism or the paranormal. All were human beings going for easy, intuitive conclusions based on what they most wanted or needed to believe, and on what most seemed to them to be true, without that moment of doubt, hesitation and humility that skepticism encourages.

What I felt skepticism could offer all of us, in enabling us to cope with our faulty perceptions and thought, was a certain kind of agency. An ability to make a choice about what we believe instead of just going with the comfortable and most apparent truthiness. And in allowing us that agency, in allowing us that choice… we could make the right choices. Instead of settling for what we are, how we tend to see, think and believe… we could try to be something better. We could look to what we could be, to how we could see, think and believe.

In other words, the writer, Natalie Reed, saw certain social justice stances as following from the same skeptical mindset from which atheism also follows and therefore as a necessary byproduct of performing atheism “the right way”. To me, this seemed in tension with what she said in the very next paragraph about freedom and ability to choose beliefs; clearly, Reed saw only one right answer to certain non-deity-related questions and was frustrated that the atheist community as a whole was failing to embrace it.  Here she didn’t come across to me as possessing the Theory of Mind to see that the skepticism that might lead others to non-belief in gods might not lead to non-belief in all of the other things she was skeptical of, or that other skeptics might even consider parts of her socially liberal ideology to be examples of “truthiness” which deserve more skepticism.

Anyway, to leave the arena of religion for more mainstream politics, I’ve also seen left-wing rhetoric along the lines of “being pro-gun is wrong because if you think about it, the presence of guns stifles free speech, which is one of the pillars of our democracy”.  To me this argument appears to be reaching pretty far by making a pretty indirect connection between gun control and a more popular and easier-to-defend American value.  I’m sure that this kind of argumentation is pervasive in right-wing spaces as well — probably lots of bending-over-backwards interpretations of various proposals as boiling down to “more government control” or something like that — but having had very little exposure to those spaces during the last decade, I don’t really know. I see no reason not to suppose that it is present in most ideological communities.

II. Another reason not to draft all arguments as soldiers

In this more general context of separating layers, my point (2) under section III of the last essay (“Upholding a principle that belongs to one ‘layer’ of the disagreement only on grounds of being in the right at another ‘layer’ isn’t upholding the principle at all”) reminds me a lot of something I wrote on my tumblelog (my Tumblr blog) back last August.  I link to it here and insert a more up-to-date revision of it as follows.

One major thrust of the rationalist approach to winning arguments is to avoid the “arguments are soldiers” mentality — that is, the attitude that every argument for one’s side of a debate, whether good or bad, is an ideological weapon and all must be deployed if one is to win on the political battlefield.  The argument against using arguments as weapons is itself a call for separating the object from the meta, but I see another objection: namely, that the use of “arguments as soldiers” oftentimes implicitly weakens the good arguments for one’s own side.

To give an example of this, I’m afraid I’m going to dredge up a horrible event from last summer: the Pulse shooting (~50 people killed at an Orlando nightclub).  I was traveling at the time it happened and wasn’t able to research all the updates on what was or wasn’t known about the killer hour by hour, so for a few days I was relying on what was popping up on my Facebook newsfeed.  As tragedies go, this one was especially tricky to respond to rhetorically because in the immediate aftermath there were so many potential political elements of it pertaining to all sides: in particular, Islam, homophobia, and guns.

Within a day, my Facebook was blowing up with articles giving particular views of the very sparse information we had on the killer at that moment.  The main two groups contributing to the political discussion seemed to be liberals who wanted to play up his homophobia and conservatives (as well as a few anti-Islam liberals / libertarians) who wanted to play up his Muslim-ness.  At the time, judging from preliminary reports I saw trickling in, the levels of both of these traits were unclear.  There were rumors in the early hours of the aftermath that he himself was a regular at the club, and that he had a gay dating app on his phone.  Meanwhile, while it was clear that he was a Muslim raised in America, it wasn’t so clear exactly how strong his ties to ISIS and “radical Islam” were.

I’m going to focus now on the emphasis on the killer’s homophobia, mainly because the people pushing it were the ones on “my side” of most issues and vastly outnumbered the others anyway.  Now there’s nothing wrong in the fact that people were focusing on his homophobia.  After all, it’s extremely important to investigate exactly why someone would perform such an evil act, and it’s completely appropriate for us to feel outraged if part of the motive came from such vile bigotry.  And in fact, it looks like these people turned out to be right: he did choose a gay nightclub out of a desire to attack gays, and he certainly wasn’t a regular there or openly gay, etc.  But suppose the evidence had come out differently: would it weaken the gay rights cause in any way?  It would not make gay rights one iota less valid if this guy had shot up a gay club out of pure sadism rather than directed bigotry.  I guess maybe it would make the gay rights cause seem an iota or two less worthwhile, because some of the practical value of a cause lies in how many lives will be affected by it (there’s some importance in demonstrating that homophobia kills).  But I’m going to suggest that even that is only affected a tiny bit, since those 100 lives are still a pretty small fraction of all those who have been killed for being somewhere on the queer spectrum.  My point is not that I was bothered by so many people drawing attention to it (after all, as I have said, this was absolutely appropriate and essential), but that there was this almost-desperate underlying tone of “see, this is why homophobia is bad, and this is why gay people deserve equal rights”.
I know that wasn’t actually what anyone was saying or probably even thinking, but that tone does in my opinion sort of communicate an attitude that the validity of gay rights is conditional on exactly which tragedies have arisen from not acknowledging them: if new evidence were to come in showing that the killer wasn’t anti-gay, then where would that leave us?

This reminds me of the common tactic that atheists use in debate where they make a big point of how many lives have been destroyed in the name of religion, implying that this is why religion is incorrect.  I’ve actually seen Richard Dawkins open a debate on the existence of God with this strategy, then backtrack when he sees his debate opponent is formidable at rebutting that point, saying, “But counting up the number of lives lost due to a particular ideology doesn’t really matter anyway; all I care about is which belief system is true!”  (Unfortunately I can’t recall which debate this was, but I wouldn’t be surprised if it happened more than once.)  Well then, Dr. Dawkins, why didn’t you start by arguing that way in the first place?  In this failed rhetorical maneuver, Dawkins actually damaged the case that religion is antithetical to the objective pursuit of truth by implicitly making that position seem delicate, as though it needed to be backed up by statistics on the number of deaths resulting from the failure to choose secularism.

Or, to give another example from the 2016 election campaign, I noticed that many people seemed very anxious to show that Donald Trump was never a competent businessman at all, as though that was the main factor relevant to his candidacy.  As far as I know, a lot of the memes supposedly demonstrating that he hasn’t actually done anything impressive with money were misleading, but I couldn’t care less either way because I saw much, much more crucial indications that he was not fit to be president.  I realized that there was some sense in trying to rebut the supporters of Trump who painted him as a savvy businessman, but displaying it front and center of the anti-Trump case seemed to me like a confusion of priorities and actually sort of validated the pro-Trump contention that being successful at business qualifies someone for the presidency.

To summarize, when arguments are used as soldiers in this way, it not only often leads to bad arguments being used, but it weakens other, extremely valid points supporting the same side.  Then if the bad arguments are eventually knocked down, there’s not quite as much left on display in support of our cause as there would have been if we had stuck to emphasizing the core reasoning behind it in the first place.

In other words, putting all one’s rhetorical eggs in a single basket (i.e. a particular aspect of one’s worldview) is a risky business.  At worst, the basket will break and the rhetorician will lose the whole debate despite the fact that some of their other stances were valid.  And at best, the single idea they’re classifying everything else under will come out looking correct, but sneaking all the other ideas in under it might come across as shady and underhanded, and those other ideas might not get the acknowledgment or credit they deserve.

III. A postscript on the March for Science

Tomorrow a lot of my American friends will be participating in a march which is purportedly a protest against the new presidential administration’s blatant disregard for some of the less popular findings of science in favor of pseudoscience and general “truthiness”.  While I am all for the original cause of this demonstration, I tend to have misgivings about protests in general.  A lot of these misgivings have something to do with what I’ve been discussing above: it seems that such protests are often billed as being about something at least sort of specific, but then a bunch of other statistically-correlated beliefs wind up getting lumped in with the original cause.  This appeared to be the case, for instance, with the American “Occupy Wall Street / 99 Percent” movement in the earlier part of this decade (inasmuch as that movement started out with any specific position in the first place).  It was also apparent at the Women’s March back in January (hello, intersectional feminism!).  I’m not saying that I was actually against any of these demonstrations, and in fact I think that at least some (such as the Women’s March) had wonderful effects.  But I’m bothered by the fact that such protests have a tendency to devolve into a shouting platform that enforces the clustering of a whole bundle of political positions rather than a unified, focused, and concretely-reasoned push for a particular goal.  I’m a member of a Facebook group dedicated to the March For Science, and I’ve certainly already seen a lot of posts there championing areas of science, or even tangential science-related causes like better representation of minorities, etc., which don’t seem directly relevant to the main crises at hand.

That said, the theme of this particular event, Science, is itself of interest when considering the issue of “separating layers”, because the spirit of Science seems in a certain sense to uphold the opposite value to the one I’ve been preaching here.  That is, the idea behind Science is that we are trying to explain empirical phenomena in terms of the most elegant possible models based on natural laws which apply universally.  In other words, Science is on some level all about not considering different questions independently.  For instance, it is often pointed out that to be consistent in one’s denial of biological evolution, one must also deny the validity of a wide range of scientific areas including geology and particle physics.  So I can’t really fault all the posts I see along the lines of “I march because without science we wouldn’t have the medical technology to treat my leukemia!”, even though it would be unfair to directly imply that support for the strains of pseudoscience peddled by the current administration automatically implies opposition to improving the lives of leukemia patients.  After all, the same respect for the scientific process that has led to so many widely celebrated inventions and breakthroughs ought to be applied when it comes to more politically controversial scientific findings as well.

Anyway, it will be interesting to see exactly how tomorrow’s event shapes up.  I guess that as far as my insistence on “separating layers” applies to this situation, I would say that it’s important to realize that it is possible for intellectually honest people to disagree with the scientific consensus on some (object-level) issues without necessarily opposing the (meta-level) values of the scientific process itself.  However, those of us who feel worried about what appears to be a pervasive disregard for science, who feel that people who hold to popular “truthy” beliefs not supported by scientists while otherwise tacitly supporting the scientific process are oftentimes operating on an inconsistent belief system, are certainly quite justified in wanting to engage in peaceful demonstrations against these worrisome modes of thinking.  Or at least as justified as I am in wanting to write long, rambling blog posts about what I consider to be worrisome modes of thinking.

(Image credit: Kendra Hamilton on Facebook)

Disagreements are like onions

[Content note: this is another attempt to convey one of those fundamental ideas which I feel strongly about deep down but is still a little hard to communicate, so I once again erred on the side of long and dry.  Part 1, hopefully to be continued.  Some political examples, especially Trump-related; how can I resist?]

Finally I’ve gotten around to writing the remaining lengthy, cerebral post I’ve been wanting to get out of my system right from the get-go (really, it’s been in my system for a lot of my life).  I want to talk about object levels versus meta levels and Theory of Mind and everything that comes with it.  I’m worried that this post may become overly long and sprawling because it’s such a far-reaching topic in my view, but at least there’s one thing that makes life a lot easier here: a number of people whose blogs I follow have touched on this directly or indirectly in their writings many times.  By pointing attention to such things, they have done a lot of my work for me.  Also, I’m going to postpone a few of the ideas I have in mind to be put in a second post.

Here is a list (nowhere near exhaustive) of what I consider to be some of the more crucial posts of Alexander’s which address the general issue of Theory of Mind / Object-Meta Distinction in one way or another:

There are many, many more essays written by Alexander and others which apply these principles without quite so directly acknowledging them.  In particular, I’ve seen this from other prominent rationalist community members like Ozy (who runs the blog Thing of Things) as well as from Rob Bensinger, although off the top of my head I can’t produce any links since they both write prolifically in a lot of different places and I don’t have such a good memory for their individual articles and/or comments.  This post is my attempt to unify all of these points expressed by them and others into one concept.

But first, here is a series of example scenarios of a variety of flavors in order to motivate the idea.

I. A collection of very short stories

In recent years there have been a number of controversies surrounding high-profile individuals who hold views that are unsavory in some way or other and who were punished for expressing those views, by losing their job for example, or just by not being allowed a microphone.  “A Comment I Posted on ‘What Would JT Do?'” addresses one of these cases, where Duck Dynasty star Phil Robertson was suspended for voicing highly offensive views.  In it, Alexander expresses frustration with the network for suspending Robertson, arguing that regardless of what side we’re on, we should adhere to the norm of responding to views we don’t like with counterarguments rather than silencing.  Alexander later came to the defense of Brendan Eich when he was fired as CEO of Mozilla for similar reasons.  Much more recently, there has been a lot of discussion in the rationalist community about the forceful protests against the very presence of certain alt-right-ish speakers at universities.  Most seem to agree that regardless of how one feels about what we might call the “object-level situation” (Robertson’s or Eich’s or these speakers’ “object-level” positions that we don’t agree with), we should give priority to certain “meta-level” rules (e.g. allowing the opportunity for proponents of all beliefs to take the podium).  Although it’s clearly not quite that simple: waving aside the whole issue of the “free speech” defense being flawed when “freedom of speech” is understood in the most literal sense, there are some individuals, like possibly Milo Yiannopoulos, who have strayed beyond simply expressing their views into outright bullying.  There seems to be a fine line between speech that is offensive to some groups and actual threats to the safety of members of those groups.  So how exactly do we separate the “object level” from the “meta level” in situations like these?

There has been a particular theme in the debates I’ve (probably foolishly) gotten into with friends over a lot of things relating to the new presidential administration in America.  Many are arguing that we right-thinking Americans who are anti-Trump should refuse to acknowledge Mr. Trump as our president altogether.  They are more or less saying, as I understand it, that the horrid views he has Trumpeted were sufficient reason for various other authorities to have barred him from becoming president in the first place through some sort of brute force, to have refused to go to his inauguration, and to get him impeached as soon as possible.  It seems pretty revealing to me that in the midst of some of these “not my president” arguments, the fact that Trump has almost certainly done many highly illegal things is thrown right in with policy positions such as being anti-abortion or (allegedly) anti-gay-rights.  While I agree that he’s “not my president” in the sense of not representing anything I stand for, I vehemently oppose the calls for immediate impeachment, as long as they’re motivated by pure principle rather than objective legal reasoning.  My main argument has a lot to do with how the other side will view what would look like purely political strong-arming in the highly unlikely event that such efforts actually succeed.  I don’t think anyone could completely deny this concern, but apparently I hold unusually strong convictions about the particular importance of considering how other people’s minds will process our behavior.

A few weeks ago I was asked an interesting question by a friend, also pertaining to the American political situation.  We were talking about speculations that some Trump campaign officials engaged in illicit communications with Russian agents, thus swinging the election in his favor.  My friend put forth the idea that if it is ever proven beyond reasonable doubt that Trump won the election through illegal means, then his executive orders should be considered illegal purely by virtue of the fact that he isn’t the rightful president.  I replied that I disagree with this proposal.  Trump’s actions as president should be evaluated purely on their own merits (legal, moral, etc.), given the fact that he somehow got into the position he’s in.  In other words, I want our judgments of his becoming president and each thing he does as president to be evaluated as independently as possible.  That way, if we mess up our evaluation of one, this doesn’t affect how we react to the others.  Besides, I believe that both the travel ban and the disastrous first attempt at executing it (these two aspects can be judged separately as well!) were despicable and deserving of harsh judgment quite independently of whether Trump’s presidency itself is legitimate, so it just doesn’t seem fitting somehow for Trump to face legal consequences for the travel ban purely on the grounds that something unlawful was done in his presidential campaign months earlier.  Besides, again, one should consider what his supporters would make of us punishing him for a multitude of actions using the singular strategy of somehow convincing enough people that he never really got elected.

Now let’s move to personal drama of a sort that I’ve seen play out more times than I can count.  Suppose that Alice and Bob are in some kind of close relationship, and Alice gets upset with Bob about something and, let’s say, starts berating him in a tone that somehow goes over the line or with a lot of vulgar language or just generally in a borderline-verbally-abusive way.  Bob disagrees with the reasons why Alice is upset but focuses his resentment around the unacceptable way she talks to him when she’s angry.  Alice’s rebuttal is to point out that Bob yells at her in an equally unpleasant way when he’s upset with her for any reason, and she gives some past examples to lend evidence to the point.  Bob replies that those times were different because for X, Y, and Z reasons, he was right in those arguments and therefore justified in his nasty tone and diction, whereas today she’s wrong in her arguments and thus has no right to talk to him that way.  They are — or at least Bob is — conflating two issues here which should be separate discussions: the specific things they get into arguments about, and the way they talk to each other when they get angry about such things.

I know someone who has insisted multiple times that the word “insult” refers not merely to saying nasty things about someone, but to saying nasty things about someone that are unwarranted.  I have looked up the definition of the verb “to insult” in multiple dictionaries and have asked several others what they consider it to mean, and all evidence points to this person being wrong about the definition of “insult”.  But setting aside explicitly agreed-upon uses of words and the confusion that results from going against them, let’s grant that we can define terms in whatever way we choose as long as we’re consistent about how we use them.  To define “insult” as a valid description of a certain unpleasant behavior only as long as it is unjustified given that particular situation weakens one’s ability to separate a personal dispute into two disagreements (the particulars of why they are arguing, and the way they talk to each other when angry) as in the case of Alice and Bob above.  Insisting on such a definition of “insult” betrays a certain mindset.

(Interestingly, I was corrected on my use of “flattery” several times when I was younger, because I understood it to mean, well, more or less the opposite of “insult” regardless of sincerity or validity of the claim of the flatterer, while I was told that an effusive compliment doesn’t count as flattery if it’s actually obviously true.  This does seem more or less in keeping with dictionary definitions of “flattery”, although it looks slightly different from the “insult” situation since “to flatter” is meant to carry a connotation of insincerity.)

II. Separation of degrees

I believe that lying at the heart of all the situations described above there is a fundamental concept in common.  Sometimes we might talk about it in terms of “meta levels” and “object levels” (e.g. Alice and Bob have an object-level disagreement but also a problem on the meta level concerning how they work through disagreements).  I’ve developed a habit of using this language quite a lot actually; I’m always telling myself that I’ll look back on this writing one day years from now and cringe thinking it looks sort of rhetorically immature to refer to “object” and “meta” things so often, but right now it still often seems like the best way to make my point.

At other times, we might speak of Theory of Mind as explained in some of the links I gave above (e.g. we have to operate on some consideration of the minds of Trump supporters).  I claim — and I hope to argue here at least in an indirect way — that both of these ways of analyzing disagreements point to the same underlying fallacy.

Out of all the rationality-flavored topics that I care about and have been writing essays on, this one lies closest to my heart.  I remember first becoming aware, at around the age of 12, that I innately processed certain arguments in a seemingly very different way from the (equally intelligent and much more experienced) people around me.  These disagreements were all of the flavor of the scenarios described above, where my frustration was with those who didn’t seem to realize that there are certain general rules which we all must agree to follow regardless of who is right or wrong in a particular dispute, because all parties are equally convinced that they’re right.  And that it’s no good to criticize a person you’re disagreeing with for not following some general rule on the grounds that they’re wrong about specifics when they don’t agree that they’re wrong on the specifics; in fact, it’s bound to further irritate them and push them away.  By the start of my teenage years, being bothered by this was already starting to feel like a major hangup that I was almost alone in suffering from, and part of me hoped and expected to outgrow it.  Yet here I am.  I can’t explain precisely why I’ve always felt as intensely about this as I do, although it’s clearly related in some way to the Principle of Charity, as in Scott Alexander’s framing in some of the links above (or to my modified Principle of Empathy).

When I first ran into the rationalist community, perhaps the number one reason I started identifying with the individuals therein was that they all seem to intuitively grasp what I’m getting at here.  Sure, some might disagree with how I’m framing it in this essay (maybe because my framing is arguably not the most valid, but more likely due to lack of lucidity in expressing these concepts), but I never fail to feel assured that they get it.  Of course, “it” is rarely directly discussed in purely abstract terms rather than in the context of a particular concrete topic.  But like I said at the beginning, “it” exists as a thread running through the writing of Alexander, Ozy, and many others.

So is there a way of framing this in more definitive, purposeful language than “there’s some object- vs. meta-level thing or some Theory of Mind stuff going on here”?

Well, let’s start with Scott Alexander’s arguments on seeing issues in terms of object and meta levels in his writing which I linked to above, particularly in the “Slate Star Codex Political Spectrum Quiz”.  (Warning to anyone reading this who hasn’t gone to the link yet and is interested in taking the quiz: I’m about to “spoil” it.)  Here Alexander posits a series of questions, each of which describes a brief political conundrum and gives two choices as to how to proceed.  The catch is that he has cleverly paired the questions into couples which depict scenarios that are very similar on some “meta” level while (very roughly) the roles of “object” level political positions are switched (e.g. a question about a visit by the Dalai Lama being protested by a local Chinese minority is paired with a question about a memorial to southern Civil War veterans being protested by a local African-American minority).  The final score on the quiz is computed using a system that gives the quiz-taker one point for answering “the same way” on a pair of questions, thus displaying meta-level consistency.  The final evaluation is given as follows:

Score of 0 to 3: You are an Object-Level Thinker. You decide difficult cases by trying to find the solution that makes the side you like win and the side you dislike lose in that particular situation.

Score of 4 to 6: You are a Meta-Level Thinker. You decide difficult cases by trying to find general principles that can be applied evenhandedly regardless of which side you like or dislike.
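As an aside, the pairing-and-scoring scheme described above is simple enough to sketch in code.  Everything below is hypothetical — the actual quiz's questions, answer coding, and data are not reproduced here — but it illustrates the idea of awarding a point whenever both questions in a meta-matched pair are answered "the same way":

```python
def meta_consistency_score(answers, pairs):
    """Award one point for each pair of questions answered 'the same way'.

    answers: dict mapping question id -> chosen option ('A' or 'B'), where
             the options are coded so that the same letter represents the
             analogous meta-level stance in both questions of a pair.
    pairs:   list of (question_id, question_id) tuples of matched questions.
    """
    return sum(1 for q1, q2 in pairs if answers[q1] == answers[q2])


def classify(score):
    """Map a score onto the quiz's two evaluation tiers (0-3 vs 4-6)."""
    return "Meta-Level Thinker" if score >= 4 else "Object-Level Thinker"


# Hypothetical example: twelve questions grouped into six meta-matched pairs.
pairs = [(1, 2), (3, 4), (5, 6), (7, 8), (9, 10), (11, 12)]
answers = {1: 'A', 2: 'A', 3: 'B', 4: 'A', 5: 'B', 6: 'B',
           7: 'A', 8: 'B', 9: 'A', 10: 'A', 11: 'B', 12: 'B'}

score = meta_consistency_score(answers, pairs)
print(score, classify(score))  # consistent on four of the six pairs
```

Note that this only measures *consistency* across pairs, not which object-level side the quiz-taker favors — which is exactly what makes the quiz's framing interesting.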

Many have undoubtedly taken this, along with Alexander’s many other articles which seem to take the “meta-level side” (applying general principles across the board including when he doesn’t like the side whose rights he’s supporting), to imply that he favors meta-level thinking over object-level thinking and that we’re all “supposed to” score a 6 on the quiz.  I think I myself interpreted Alexander’s tone this way for a while.  Then I realized that this isn’t necessarily the right lesson to take away from it.  I can’t speak for Scott Alexander’s exact position here, but I do distinctly recall Rob Bensinger remarking in a different comment section that the Slate Star Codex Political Spectrum Quiz serves as an eloquent rebuttal to the attitude that one should always operate on the meta level.  I guess it depends on how one feels about the particular questions asked in the quiz, but I do have to agree that the correct message shouldn’t be to only think on the meta level.  Sometimes there are exceptional object-level circumstances which change the meta-level rules slightly.  For instance, if our Alice and Bob from above are a married couple who have agreed to try never to let their voices rise above a certain volume when fighting with each other, then one of them might be justified in bending this meta-level rule just a bit in the fight that ensues after finding out that the other, for instance, just gambled away their entire joint life savings without asking, or has been cheating with seven other partners.

Also — this is a much more superficial objection that is easy to remedy — of course it doesn’t make sense to consider any conflict to have exactly two levels, the “object” one and the “meta” one, because real conflicts are often complicated enough to involve many degrees of “meta-ness”.  For instance, two nations which are run on competing political philosophies (e.g. communism versus capitalism, in this case an object-level disagreement) may try to avoid war with each other in the absence of a particular type of threat or provocation (avoiding force is a meta-level rule), but in the case that they do declare war, they may try to follow international laws pertaining to conduct in war (as in the Geneva Conventions, meta-meta-level rules).  And after all, Alexander talks about an indefinite number of “steps” in the above-linked post on an “n-step theory of mind”.

So we should view any disagreement as likely having many layers of meta-ness, like an onion.  (One may consider the more “meta” layers as being closer to the center of the onion, but I sort of prefer to think of going outward as one gets more “meta”, since meta-level considerations should be a bit more all-encompassing).  And there is no hard-and-fast rule as to some level which will always take precedence over all others in judging any disagreement.  Instead, I think the correct message boils down to something even simpler: we should be aware that these different layers of a disagreement exist; and we should address them all separately in our arguments (even if they aren’t entirely independent).  For a long time, to myself I’ve been referring to this as “separating levels” or “separating layers” or even “separating degrees of meta”.

Where does Theory of Mind come in?  Well, in my experience the general way to fail at the goal I set out above involves disregard for the fact that others’ minds work independently from one’s own.  After all, the most common way to conflate these layers is to insist to one’s opponents that what should be uniform meta-rules need only be applied selectively, depending entirely on the object-level situation.  And it seems to me that the best way to justify this to oneself is to forget that one’s opponents hold differing convictions on the object-level situation which feel just as genuine as one’s own.  That’s basically, by definition, displaying a lack of Theory of Mind.

III. What goes wrong?

When claiming something as a fallacy, I believe it’s always good form to explain why the fallacy leads one astray as well as why people persist in it despite the fact that it leads one astray.  (It’s also nice to suggest a positive solution, but in this case, I don’t have any bright ideas beyond the self-evident “that mode of thinking is wrong, so don’t do that thing”.)

When thinking over why I don’t like it when people “conflate layers” of disagreements, I can’t help treating “reasons why this conflation is logically invalid” and “reasons why this conflation is bad rhetoric which will push people away rather than win arguments” as interchangeable.  Here are a couple of points which may fit one or both criteria.

1) Defending one’s stance on a meta-level issue using one’s stance on object-level issues won’t actually convince anyone not already on board.  If two parties disagree on the object-level issues (which I usually take to be the matter of disagreement which started the conflict in the first place), then for one party to defend their behavior of breaking some meta rule on the grounds that they are right on the object-level issue is a waste of breath.  From what I’ve seen (and from what I feel when this is done to me), it only makes the other party more angry and frustrated.  A valid argument starts from premises that everyone involved agrees on and builds on them to convince one’s opponent of something they didn’t already accept.  An attempt at an argument based on a premise that one’s opponent never agreed on is bound to completely fail at accomplishing this.

2) Upholding a principle that belongs to one “layer” of the disagreement only on grounds of being in the right at another “layer” isn’t upholding the principle at all.  This can be seen in my second example with the Trump administration, where using the illegitimacy of Trump’s election to indict him for an executive order sort of implicitly excuses the illegality of the order itself.  Or, going back to our friends Bob and Alice, if Bob says, “I still think you’re wrong on the issue we were fighting about, but much worse than that, the names you called me are completely unacceptable!”, and Alice points out that Bob calls her similar names from time to time (perhaps even in that same fight), and Bob replies, “But I was justified in talking to you that way because there you were wrong!”… well then Bob is essentially implying that there’s nothing innately bad about calling someone those names at all.

Or to take a slightly more universal example, when a child lies to their parent about having done something wrong, the lesson handed to them is often something along the lines of “The naughty thing you did isn’t nearly as bad as the fact that you lied about it!”  But if the child soon afterwards catches their parents themselves lying to avoid getting into trouble for something they did, then justifying it on the grounds of not thinking their crime was actually bad, then there’s a risk of the child coming away very confused about the wrongness of lying.  And I’m not saying that there isn’t a circumstance where the parents’ words and actions might still be completely justified — there are some things that are against the (object-level) rules but which may still be morally right and okay to lie about (i.e. these “layers” do sometimes interfere with each other).  But a parent in this situation should at least be aware of the confusion that might result when laying down a blanket (meta-level) rule that lying is always wrong even when you’re trying to get out of trouble for doing something you feel was okay.

IV. Why do we go wrong?

I expect one could always cite the usual culprits: people are prone to not thinking clearly, and to not having a strong Theory of Mind, especially when this allows for rhetoric which seems to work in their favor in the heat of the moment.  As for something more concrete, I think “conflating layers” mainly boils down to one major temptation.

Tying together two different issues in a disagreement allows one to justify oneself based on whichever one is easier to defend.  It’s easier to argue against homophobia itself than to argue purely on the meta-level that someone doesn’t deserve a public platform, so many don’t want to make the effort to separate the issue of the unsavory views of Robinson and Eich and their ilk from the issue of whether they have a right to keep their jobs despite their views.  If we obtain proof that the Trump campaign actually did clinch the election illegally, it will be easier to convince everyone that Trump isn’t the rightful president than to demonstrate that his travel ban was wrong, so a lot of us would feel inclined to use the illegitimacy of Trump’s presidency to condemn his attempt at the travel ban.  It may be easier during a particular argument to defend one’s object-level stance than to defend one’s use of nasty insults, so it’s tempting to define the term itself to depend on one’s rightness or wrongness on the object level.

In other words, while one can’t judge the layers of every argument completely independently, by treating them as all part of one singular issue of controversy it becomes way too easy to get away with all kinds of rhetorical shortcuts, so that one can defend one’s stance throughout the whole onion based only on the most easily justifiable layer.  It enables a bait-and-switch behavior which is similar to (or perhaps just a particular flavor of) the motte-and-bailey tactic.

…and actually, I believe all of this can be generalized slightly further, but I’ll save that for another post which (I hope) will appear here soon.

Agency does not imply moral responsibility [the brief version]

[Content note: uncharacteristically short and sweet.]

The object of this very short essay is to concisely state a proposition and brief argument which I refer to frequently but was lacking a suitable post to link to.  This is one of the central points of my longest essay, “Multivariate Utilitarianism”, but it’s buried most of the way down, and it seems less than ideal to link to “Multivariate Utilitarianism” each time I want to make an off-hand allusion to the idea.

Here is how I would briefly summarize it, using the template of a mathematical paper (even though the content won’t be at all rigorous, I’m afraid).

Proposition. The fact that an agent X acts in a way that results in some event A which increases/decreases utility does not imply that X bears the moral responsibility attached to this change in utility.  In other words, agency does not imply moral responsibility.

Proof (sketch). One way to see that agency cannot imply moral responsibility in a situation where multiple agents are involved is through the following simple argument by contradiction.  Suppose there are at least two agents X and Y whose actions bring about some event that creates some change in utility.  If X had acted otherwise, then this change in utility wouldn’t have happened, so if we assume that agency implies moral responsibility, then X bears responsibility (credit or blame) proportional to the change in utility.  By symmetry, we see that Y also bears the same responsibility.  But both cannot be fully responsible for the same change in utility — or at least, that seems absurd.
One naïve approach to remedy this would be to divide the moral responsibility equally between all agents involved.  However, working with actual examples shows that this quickly breaks down into another absurd situation, mainly because the roles of all parties creating an event are not all equally significant.  We are forced to conclude that there is no canonical algorithm for assigning moral responsibility to each agent, which in particular implies the statement of the proposition.
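The double-counting that drives the contradiction can be made concrete with a toy calculation (the numbers and agent names below are made up purely for illustration):

```python
# Toy model: an outcome occurs only if BOTH agents act, and it costs
# 10 units of utility.  Naïve counterfactual reasoning ("if this agent
# had acted otherwise, the change wouldn't have happened") assigns each
# agent the FULL change -- so the assigned blame sums to twice the
# actual harm, which is the absurdity the proof sketch points at.

def counterfactual_blame(utility_change, necessary_agents):
    """Blame each agent whose action was necessary for the entire change."""
    return {agent: utility_change for agent in necessary_agents}

harm = -10  # hypothetical utility change brought about jointly by X and Y
blame = counterfactual_blame(harm, ["X", "Y"])

print(blame)                # {'X': -10, 'Y': -10}
print(sum(blame.values()))  # -20: double the actual change in utility
```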

Remark. (a) The above argument seems quite obvious (at least when stated in more everyday language) but is often obscured by the fact that in situations with multiple agents, usually only one agent is being discussed at a particular time.  That is, people say “If X had acted differently, A wouldn’t have happened; therefore, X bears moral responsibility for A” without ever mentioning Y.
(b) A lot of “is versus ought” type questions boil down to special cases of this concept.  To state “circumstances are this way, so one should do A” is not to state “circumstances should be this way, so one should have to do A”.

Example.  Here I quote a scenario I laid out in my longer post:

[There are] two drivers, Mr. X and Ms. W, who each choose to drive at a certain speed at a particular moment (let’s call Mr. X’s speed x and Ms. W’s speed w), such that if either one of them goes just a bit faster right now, then there will be a collision which will do a lot of damage, decreasing overall utility (with utility again denoted by y).  At least naïvely, from the point of view of Mr. X, it doesn’t make sense in the heat of the moment to compute the optimal change in w as well as the optimal change in x, since he has no direct control over w.  He can only determine how to best adjust x, his own speed (the answer, by the way, is perhaps to decrease it or at least definitely not to increase it!), and apart from that all he can do is hope that Ms. W likewise acts responsibly with her speed w… If y represents utility, then our agent Mr. X should increase x if and only if ∂y/∂x is positive.  After all, he has no idea what Ms. W might do with w and can’t really do anything about it, so he should proceed with his calculations as though w is staying at its current value.

That’s what each agent should do.  I’ve said nothing about how much either of them is deserving of praise or blame in the outcome of their actions.

The proposition states that in fact without knowing further details about exactly what the two drivers did, we have no information on how blameworthy Mr. X is for the accident.
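The sign-of-∂y/∂x decision rule in the quoted scenario can be sketched numerically.  The utility function below is a hypothetical stand-in invented for this illustration (a small benefit to speed plus a steep collision penalty), not anything from the original post:

```python
# Sketch of the decision rule: Mr. X treats Ms. W's speed w as fixed
# and adjusts his own speed x based only on the sign of the partial
# derivative dy/dx.  The utility function is a made-up toy example.

def utility(x, w, collision_threshold=120.0):
    """Toy utility: speed is mildly useful, but a combined speed past
    the collision threshold is heavily penalized."""
    benefit = 0.1 * (x + w)
    penalty = max(0.0, x + w - collision_threshold) ** 2
    return benefit - penalty

def dy_dx(x, w, h=1e-6):
    """Numerical partial derivative of utility with respect to x,
    holding w constant (Mr. X has no control over w)."""
    return (utility(x + h, w) - utility(x - h, w)) / (2 * h)

# Below the threshold a bit more speed helps; above it, it hurts.
print(dy_dx(50, 60) > 0)  # True: safe to increase x slightly
print(dy_dx(65, 60) < 0)  # True: definitely should not increase x
```

Note that the code says nothing about blame, which is the point of the proposition: the decision rule and the assignment of responsibility are separate questions.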


To state it (or perhaps overstate it) bluntly, I cite this “agency ⇏ responsibility” proposition in an attempt to remedy what I believe is a ubiquitous fallacy at the bottom of many if not most misunderstandings.  I wish everyone in the Hawks and Handsaws audience a Happy New Year and look forward to writing more here in 2017!


Confronting unavoidable gadflies

[Content note: An elaboration of something I’ve tried to describe before.  I didn’t even try to avoid serious political issues this time.  Welfare, death penalty, generational conflict, religion.]

This is a follow-up to “Speculations of my inner gadfly“.

In my earlier gadfly-related post, I tried to describe an idea that had been buzzing around in my head for some time (pun intended?  I’m not sure) which helps to describe how I view certain types of disagreements and bad arguments.  I think it turned out to be one of my better-written entries for this blog and by some measures seems to have been the most popular.  And yet, when I look back on it, I feel like I was mostly pointing out something already obvious to everyone (despite my repeated hedging of “I don’t mean only to point out the obvious here…”) and didn’t manage to really capture the essence of the common role of “gadfly speculations” as I see it.  This post will be in large part an attempt to clarify my ideas by taking the whole “gadfly” concept in a slightly different direction.  (By the way, most of the terminology and metaphors I’ve come up with so far for expressing my thoughts on this blog make me wince, but I think I actually like the general gadfly metaphor, so I’m going to run with it as long as it doesn’t wear out.)

I. The inevitable truth of grand-scale speculations

Before really getting into the meat-and-potatoes of this post, I need to clarify one important point.  In the other gadfly-related essay, I described inconvenient, perhaps ridiculous-sounding possibilities which may or may not turn out to be correct (and very often aren’t) but stressed that we have to face them anyway rather than brush them aside.  I pointed out that you can always evaluate their likelihood later, but it’s important to at least let them enter your conscious consideration first.  While this certainly wasn’t an invalid point for me to make, I’m afraid it may have been misleading in terms of conveying the way I usually think of “gadfly speculations”.

The fact is that most social controversies that we find ourselves considering involve large numbers of humans and their motivations, the effects that a certain course of action may have on them, and so on.  In these situations, practically every possibility that realistically occurs to us regarding the way some humans might act is correct, but perhaps only for a small minority of the humans involved.  As soon as such a speculation occurs to us, unless it’s completely bonkers at the level of lizardmen conspiracy theories, it must be true at least occasionally or at least for a few people.  In fact, it would seem very strange if it were never true.

For a real-world example, take the constant debate over government-provided welfare.  Fiscal conservatives tend to argue, or at least insinuate, that a number of citizens on welfare are using these government programs to game the system in some way.  And regardless of our political affiliations, when we stop to objectively consider this, we have to agree that in a certain literal sense this is correct.  The key phrase in the proposition mentioned above is “a number of”.  It’s not clear exactly how many people are gaming the welfare system.  Maybe they are so few as to be irrelevant when the benefits of having a social safety net are taken into account.  But if we have a country where millions of citizens are on welfare, and the welfare system is pretty complicated, then it stands to reason (or at least common sense) that there is a feasible way to abuse it and that some of those citizens are in fact abusing it.  It would really be astounding if nobody were abusing it.
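The “it would really be astounding if nobody were abusing it” intuition is just arithmetic: if each of n people independently has even a tiny probability p of abusing a system, the chance that nobody does is (1 − p)^n, which collapses toward zero as n grows.  A quick sketch with made-up numbers:

```python
# Both figures here are hypothetical, chosen only to show the shape of
# the argument: even a one-in-ten-thousand per-person chance of abuse,
# spread over millions of people, makes at least one case a near
# certainty.

p = 1e-4          # assumed per-person probability of gaming the system
n = 10_000_000    # assumed number of people enrolled

prob_no_abuse = (1 - p) ** n        # vanishingly small for large n
prob_some_abuse = 1 - prob_no_abuse

print(f"P(at least one abuser) = {prob_some_abuse:.10f}")
```

Of course this says nothing about *how many* people are abusing the system, which is exactly the question the rest of this section argues we should be debating.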

Similarly, if we all assume for the sake of argument that certain sufficiently heinous criminals “deserve” the death penalty (I put “deserve” in quotes because I don’t really know what that means, but that’s a topic for another post), then we all have to admit, regardless of our stances on the death penalty, that the proposition “Some defendants will be wrongly convicted” is correct.  The key word is “some”.  This is a weaker example than the last one, since far fewer humans have been sentenced to death in modern history than are on welfare, but I still suspect that the forensic science involved is complex enough, and even today imperfect enough, that there must be wrongful convictions at least occasionally.  I would be astonished to find out that there have been zero wrongful convictions in the last several decades.

Now I realize that there are far more outlandish suggestions out there regarding every controversy that affects so many people’s lives, and maybe it’s plausible that some of the most extreme ones don’t hold for any of the humans involved.  For instance, I seriously doubt that a single one of the millions of individuals on welfare is secretly trying to aid a band of extraterrestrials bent on taking over the earth through weapons which can be powered only by government-signed welfare checks.  However, most speculations this far out in left field aren’t pervasive in the common discourse and generally don’t enter our minds (even subconsciously) in the first place.

So these uncomfortable thoughts that gadflies persistently whisper to us generally don’t have a chance of being completely false.  In fact, as soon as we hear them, we are obliged to admit that it would be quite shocking for them to be entirely false.  Evaluating them becomes a question of to what degree and on how great a scale they are true.

I reiterate what I said in the other post: we tend to dismiss these inconvenient ideas out of hand because acknowledging them means more work for us in our assessment of any situation, and our brains are lazy.  If we acknowledge that at least a few folks will abuse the welfare system, then that obligates us to go through a tricky cost-benefit analysis when arguing in favor of it, which is considerably more difficult than emphasizing more and more stridently that welfare provides necessary aid to many citizens.  And yet, if we at least attempt to argue that abuse of the welfare system is sufficiently rare, then that obligates our opponents to rebut that with an attempt to show that such abuse is unacceptably frequent (rather than argue against welfare simply by complaining that it can be abused), and a potentially productive discussion ensues.

There is an analog of this notion in the context of small-scale conflicts — say, drama between two individuals — as well: many of the possibilities that try to latch themselves to our minds are almost certainly true on some level.  For instance, if it occurs to you that the reason your friend didn’t show up to your party has something to do with an unintentionally rude remark you made to her the week before, then that is probably playing some role (however small) in her behavior, even if the primary reason for her absence turns out to be an unusually high level of work-related stress.  But this doesn’t apply in nearly as absolute a way as it does for issues involving more people.  And for the purposes of this post, it’s mostly large-scale debates that I’m interested in.

II. The inevitable use of grand-scale debate tactics

Now let’s kick it up a level: in debates which involve a large number of humans, pretty much any speculation about how the opposing side will argue must be correct.

A. The Boomer-Millennial Conflict for Dummies

Here’s a good exercise for considering how a given position might be argued: pretend that you’re an alien with no knowledge whatsoever about human history or problems but who wants to argue a particular side of a human controversy of which you know only the basic definitions of the relevant terms, with the minimum possible extra research.

Take, for instance, the constant rhetorical warfare between the baby boomer and millennial generations.  Suppose you were an alien knowing nothing about American culture, generational subcultures, or any of the dynamics involved.  You only know the definition of “baby boomer”: it’s a human born during the “baby boom” from the mid-40’s to the mid-60’s, which is so called because of a marked increase in the birth rate.  How would you go about attacking baby boomers?  Well, let’s see, the first thing that comes to mind is that because by definition there are a lot of them, they are to blame for what in some people’s minds might be a dangerously high population.  But you can’t go far with this criticism, because nobody can be reasonably held to blame for having been born.  So what occurs to you next?  Well, again, tautologically there are a lot of baby boomers; they make up a disproportionately large portion of the human population.  So if there’s any fault that baby boomers are likely to be prone to, it might be… that they have an over-inflated sense of self-importance, or they behave as though everything is about them, or something.

And sure enough, it’s not hard to find articles like this one, or books like this (see Chapter 7).  I also distinctly remember the preachy right-leaning political comic strip Mallard Fillmore characterizing baby boomers this way (clumsily paraphrasing from memory: “This just in: baby boomers have finally realized that society doesn’t revolve around them!  Unfortunately, they now think it revolves around the federal government.”), but after half an hour of searching for old Mallard Fillmore strips with roughly those words, I can’t find it.  And yes, if I google “baby boomers”, the first attack articles I find are ones which accuse baby boomers of ruining the economy for millennials, since a lack of jobs for young people is the biggest specific issue at play in the inter-generational war right now.  But one has to admit that the hypothetical alien who knew nothing about our current economic woes did a pretty good job at coming up with an anti-baby-boomer talking point which is actually used substantially in the real world, given a bare minimum of knowledge regarding the baby boomer generation.  The “think everything revolves around them” allegation isn’t the primary criticism nowadays, but it is still relevant in the discourse.  That talking point may not usually be backed up by explicitly claiming that the source of their perceived self-importance is that there are disproportionately many of them.  But the fact that baby boomers comprise a prominent demographic certainly strengthens the credibility of the “think everything revolves around them” criticism.

So if one who is looking to defend baby boomers goes through the above exercise, the result is a gadfly speculation on opposing debate tactics rather than the facts of the generation-war issue itself: “But the opposition might try to frame things in terms of baby boomers thinking everything’s about them!”  And this turns out to be true, to some extent.  For any controversial issue about which many people are arguing in public from all different sides — or even when only two people are debating, but both are passionate and knowledgeable about many aspects of it — any hypothetical talking point that comes to mind in this way will play at least a minor role.

I like the baby boomer example because one can already come up with a possible criticism by considering only the definition of “baby boomer”.  Usually it requires knowing more than basic definitions, but only a little more.  For instance, if you want instead to attack millennials, and imagine yourself as an alien searching for a good anti-millennial talking point based on a minimal amount of research, you only have to learn about one of the main issues involving millennials today: they complain about a dearth of jobs and general broke-ness.  Now forget the specifics of what they’re complaining about, and ask yourself, what’s the easiest route to discrediting someone who complains?  By claiming that they feel entitled, of course (see below).  Or how does one go about lampooning someone who has trouble finding a job or just generally falls into some kind of bad fortune?  By portraying them as lazy, or irresponsible, or lacking in judgment or initiative, etc.

B. General examples

Here are some broad examples of opposing rhetorical tactics which are bound to show up, each of which applies to a variety of real-life debates.

  • “This media outlet / group has a pro-X bias!” vs. “Reality has a pro-X bias!”: I’m starting with this one because I think it might be the most pervasive of all of my examples.  If one party complains that the media or a particular outlet of it is biased in some way, then regardless of specifics, the most obvious strategy for rebuttal is to claim that its portrayal of the situation reflects how things really are.  This is particularly visible in conservative criticisms of the media (or particular news outlets) as having liberal bias, which instigates the response that “reality has a liberal bias”.  It is also a prominent feature of the evolution vs. creation debate, as well as other disputes between skeptics and defenders of academic consensus.  When one party makes an accusation of bias, their opposition is pretty much guaranteed to counter that the source isn’t biased but right.  The flip side of this is, of course, “This high-profile source says X is true!” vs. “That source must be biased then!”
  • “We have a legitimate grievance!” vs. “You’re just a bunch of whiners!”: This is the hallmark of debates that hinge on reverting to deterministic or free-choice explanations for a current unfortunate situation.  Closely related is the inevitable attack of “your bad fortune is your own fault” aimed at the aggrieved.  There are too many real-world controversies involving this for me to name here, and in fact I’ve tried to argue before that this is a component of all Left-vs.-Right political issues in America.  Nowadays the concept of “privilege” and related terminology usually shows up throughout these disputes.
  • “We got here by hard work!” vs. “You got there by unfair advantage!”: The flip side of the above rhetorical template.  Also frequently seen in disputes over privilege and free choice vs. determinism.
  • “We deserve better!” vs. “You’re just entitled!”: Also closely related to the grievance/whiners exchange.  If one isn’t up for countering that the other party’s bad fortune is manufactured because they’re looking to complain or just their own fault anyway, then one can take this route.  Whatever “entitled” even means.
  • “Our lived experiences have made us wiser!” vs. “Your lived experiences have made you paranoid / naïve!”: I’ve seen this show up in a lot of more personal conflicts — by claiming experience as evidence of wisdom, one opens oneself up to suggestions that experience can distort one’s perceptions to one’s disadvantage as well.
  • “Person/group X sounds overconfident / refuses to admit mistakes!” vs. “Person/group X is just really smart / hasn’t made a mistake!”: This is a variant of the example above.  I remember it being a major theme of the discourse last decade during the Bush administration.  A further variant is “Person/group X is closed-minded!” vs. “Person/group X just won’t put up with nonsense!”  These stances are often taken by the “teach the controversy” anti-evolutionists versus the “creationism isn’t science” defenders of Darwin’s theory… although interestingly the roles were pretty much reversed back at the time of the Scopes Trial.
  • “You’re afraid to debate!” vs. “We won’t descend to your level by engaging with you!”: Closely related to the above.  Another major component of the creation/evolution conflict (yes, creation/evolution provides many good examples).  Epitomized by Richard Dawkins’ refusal to debate the “ignorant fool” Ray Comfort.  However, I’ve seen it show up in the context of many other topics where one side sees itself as far more educated than the other.

C. Debating debate tactics: the “motte-and-bailey” debacle

Some of the common recurring themes mentioned above come close to describing not only potentially fallacious tactics used to debate an issue but even the debates over potentially fallacious debating tactics themselves.  It seems not uncommon in discussions between rationalists for one party to accuse the other of committing a particular fallacy — say confirmation bias, or assuming a strawman — only for the other to point out that sometimes what looks like confirmation bias or a strawman happens to reflect the truth anyway.  To show that I don’t always fail at finding cartoons posted online that I remember reading once, here is a relevant Calvin and Hobbes panel (apologies to Bill Watterson).


If someone argues using language that sounds overly-broad, it’s almost certain that their opposition will accuse them of the fallacy of black-and-white thinking.  But in some way or another, the first party will very likely retort, like Calvin in the panel above, that sometimes that’s just the way things are.  (By the way, Watterson has stated that this cartoon was inspired by his own struggles in a legal dispute in which he was accused of black-and-white thinking.)

To give a more interesting example of something that caused some disagreements within the rationalist community, in one of his more popular posts, Scott Alexander characterized certain types of rhetoric as relying on a fallacy that he calls “motte-and-bailey”, which refers to equivocation between one very convenient sense of a term (assumed most of the time) and a different but much more defensible sense of that term (adopted whenever challenged).  The “motte-and-bailey” terminology was actually coined in an academic paper written years earlier, but Alexander’s article popularized it within the online rationalist movement.

Some months later, his fellow rationalist essayist Ozy banned the use of this concept on their blog Thing of Things, later writing this to further elucidate the potential pitfalls of using “motte-and-bailey”.  Evidently the term was being abused a lot in Thing of Things comments sections.  But here’s the conundrum: any new concept can be abused in some way.  When introducing a new concept, even the concept of a certain logical fallacy to an audience comprised of rationalists, one should always be able to imagine the ways it will be abused and recognize that given a large enough audience, it will be abused in that way.  In the case of “motte-and-bailey”, it is a good exercise to ask ourselves what might be the most convenient way to use it to attack any position one doesn’t like.  Well, the substance of the concept is that a “motte” is a defensible definition of a term which can be quickly adopted when one’s ideas are challenged (“God is the feeling of purpose we perceive in the universe”), while a “bailey” is a convenient definition tacitly assumed otherwise (“God is the petty, vengeful main character of the Old Testament”).  The point is to criticize one’s opponent for defending their ideas by using a defensible (“motte”) definition which they don’t assume the rest of the time.  So it seems all too tempting to… criticize one’s opponent for using a defensible definition even when they do consistently assume it all the time.  (Maybe you’re arguing against a very liberal theist who really does believe only in the “vague purpose” kind of God, and Old Testament fundamentalism is a strawman of their belief system.)  So in other words, exactly the abuse that Ozy described having seen.

If you introduce a new rhetorical concept to a bunch of rationalists, there’s a pretty good chance of somebody invoking it unfairly to attack arguments they don’t like; then there’s also a pretty good chance that someone else will anticipate the possibility of this abuse and unfairly invoke that to attack arguments they don’t like; and the recursion goes on ad infinitum.  Maybe “motte-and-bailey” also happens to be easily abusable to begin with.

But all that doesn’t mean that useful concepts like “motte-and-bailey” shouldn’t be popularized in the first place.  And I guess that brings me to my usual “proposed solution” section of this essay.

III. How to oppose opposing gadflies

I’ve tried first to make the point that when participating in discourse on certain types of broad issues (particularly social), almost any statement inconvenient for our position that might occur to us is probably true to some degree and moreover will occur to at least some people on other sides who will use it against us.  This makes my view of success at discourse, or even being sure what one believes in the first place, sound pessimistic.  And it is, somewhat.  Becoming reasonably sure of something and being able to actually convince others of it in an intellectually honest way is (at least for me) very, very hard.  But there are still ways of dealing with those gadflies that almost surely oppose us.

First of all, there’s one of the oldest debating guidelines in the book: anticipate opposing arguments.  I spent a lot of time illustrating certain very general types of claims that are sure to be encountered (“your grievance is your own fault”, “so-and-so sounds confident because they in fact are always right”) because, despite the fact that they sound completely obvious when written down in this context, many people in the heat of argument often don’t see them coming because they’re not thinking enough from their opponent’s point of view.  So anticipate them.

The second, and probably more difficult, tactic is to realize that these inevitable counterclaims are probably at least a little bit true and to readily acknowledge this.  That’s not to say that constantly bending over backwards to agree that every criticism and accusation is kinda-sorta valid is an effective way to win anyone over to one’s position (I err in this direction a lot, so I would know).  But flatly denying the offensive thing one’s opponent was bound to suggest is almost certain to make things worse.

So the best strategy is probably to admit that our opponent’s suggestion is probably correct for a few people, or just a little bit, and claim (and then make an honest effort to back up the claim) that our position is right anyway.  “Yeah, any welfare system opens itself to the possibility of abuse by a few people, and that’s awful.  But it’s far more important for honest people in need to be able to have a safety net of this kind, because X, Y, and Z.”  Or, “yeah, that group sometimes whines a little more than justified, but they have a legitimate complaint even so because Y and Z.”  Or even, “Yeah, I know that I can moan and be a little melodramatic at times, but that doesn’t mean that my feelings are invalid in this case, because X.”

This is particularly worthwhile, but particularly tough, when one is confronted (or anticipates being confronted) with a personal attack.  There’s a common reaction, which I’ve observed in people close to me, of “On top of being completely wrong about [issue on the table], he has the nerve to keep bringing up such-and-such personal flaw of mine.  He’s lost all credibility with me about [issue], so the personal attack is obvious nonsense.”  (Here the personal fault in question is often something that many have criticized the speaker about and which maybe even the speaker has acknowledged in calmer moments.)  In my opinion, this is almost always the wrong way to look at the situation.  If I’m arguing with someone in my life about Big Important Issue on which I believe they’re totally mistaken and out of line, and they keep shoving in my face some criticism of me that others have made in some way or another, and which I’ve previously acknowledged is somewhat true, then… I try to recognize that they’re probably right in their criticism.  They wouldn’t be using the criticism as a weapon to argue their side of the Big Important Issue if it weren’t somehow readily available to them, and it wouldn’t be so available to them if it weren’t somewhat true.  So my response should be to acknowledge immediately that “yeah, I sometimes can be that way” but argue that my faults still don’t imply their side of the Issue, or (in some cases) that they’re completely irrelevant and being used easily but unjustly as a weapon against me.  Of course I still fail at this from time to time, but my successes have gradually made admitting my own faults in this way much easier.

The thing is that no matter how small a gadfly is staring us down, our adversary can still hide behind it as long as we dismiss it even while it tells just a tiny bit of truth.  Engaging with the gadfly actually exposes our adversary and leads to a more productive outcome for everyone involved.  And that is a bit more of my take on why it’s important to welcome gadflies into our minds.

Obligatory election-day post on the rationality of voting

[Content note: Again, the title pretty much says it all.  Minor discussion of religion-inspired ethics.]

There are a number of rhetorical situations where I see recurring patterns of what feels like obviously fallacious reasoning and have learned that trying to convince someone who doesn’t instinctively sense that same pattern will lead only to frustration on the part of both parties.  But in many cases, I have discovered through the rationalist community a group of people who all seem to acknowledge the same underlying issues, even if there’s plenty of healthy disagreement on exactly where and to what extent those fallacies are being committed and as to what antidote should be applied.  Some of these things I’ve even tried writing about in my own words, such as the mistake of confusing causal agency with moral responsibility in multivariate situations or the subconscious tendency to not acknowledge inconvenient hypotheses.  I can’t exactly take a poll of how everyone reacts to these rationalist topics that I bring up, but it certainly appears that most people who are interested in rationality and have the patience to engage in discussions of them are in rough agreement despite perhaps disagreeing with how I describe or apply things.  It hasn’t proven controversial to claim things like “There’s a fundamental problem with how people assign moral blame in situations where more than one party created a disaster” or “One shouldn’t shun inconvenient thoughts before they have a chance to fully form” or even more philosophically contentious positions like “By debating the degree of ‘free-ness’ of certain actions rather than what our reaction to them should be, we are asking the wrong question.”

I have recently discovered that such is not the case when it comes to my rationality-motivated objections to how many people think of voting.

A few months ago, on a Slate Star Codex open thread, I brought up my contention that people often seem to abandon consequentialist utilitarianism when it comes time to vote.  I posted the following comment:

I’d like to put in a request for a post (preferably sometime between now and the election) on the motives behind abandoning consequentialist utilitarianism when it comes to voting. It seems like most people accept consequentialist utilitarianism as a matter of course for most choices, but then treat voting almost as a mode of self-expression.

In case it’s not clear, I was alluding here to my long-time frustration with those who say they’ll vote only for candidates they positively like, rather than for candidates who actually stand a chance of winning, the lesser of two evils, etc.

At the time, I was assuming that everyone would basically agree with me but point me towards a good explanation or at least a better way of phrasing the problem.  To my surprise, I found that my assumptions were completely mistaken regarding the general rationalist community sentiment when it comes to voting, or even when it comes to consequentialist utilitarianism.  As one commenter said,

If you think that people are “abandoning consequentialist utilitarianism when it comes to voting”, then that doesn’t just mean you’re completely confident you’re right about the consequentialist utilitarian consequences of voting, it also means you think that reasoning is so obvious that you expect everyone else to think the same way. This is absurd. Even in this thread there is a broad range of opinions on this matter.

I learned a lot from the responses I got to the above-linked comment, and other online discussions on optimal voting strategies that I’ve witnessed since have further opened my eyes to the variety of viewpoints rationalists hold on this general topic.

A lot of the crux of our differences can seemingly be traced back to different takes on variants of Newcomb’s problem.  I decided after the aforementioned discussion on Slate Star Codex that I would research Newcomb-like problems and try to cement some sort of opinion on them, along with solid justification, so that I could write an incisive, well-argued, polished blog post on the rationality of voting in time for the presidential election.  However, I failed to do my homework here and have not made much progress on understanding the different points of view on these topics.  Therefore, once again I don’t quite have the incisive, well-argued, polished blog post that I wanted and have decided instead to make do with an attempt to succinctly write down my current thoughts, maybe from a more personal angle.  Maybe this is for the best, because sometimes I suspect that delaying indefinitely in an effort to do the ideal amount of research and thinking would still leave me writing something that falls short of feeling ideally incisive, well-argued, and polished, while I often wind up happier with my more personal, thoughts-in-progress writing anyway.

So here are the main issues which seem to play into the question of what it means to vote rationally, along with my and other people’s thoughts on them.

I. The assumption of utilitarianism

I’ve embraced utilitarianism as the only reasonable source of ethics since I was old enough to ask myself what my source of ethics was (which I guess was around high school or so).  I realized pretty quickly on discovering the rationalist community that utilitarianism, specifically consequentialist utilitarianism, seems to be the dominant belief within it.  Results from surveys such as this one seem to bolster this impression, but note that this survey shows 60% of the participants as being consequentialists, which leaves a lot of room for other views to be influential.

In the aforementioned comment thread alone, there was plenty of argument against my assumed consequentialism, which if nothing else convinced me that there are many more people with a commitment to rational thinking who don’t find it obvious than I had imagined.  Unfortunately I don’t quite understand most of these people’s points as arguments for a different, coherently-stated system of ethics.  It seems that many want to point out that humans do not in reality make most of their decisions according to consequentialism.  Most decisions, they claim, are impulsive and depend mainly on what “feels better” at the spur of the moment.  Maybe the reason why a lot of people vote is simply that it gives them a vague feeling of power in having a voice in their democracy.  In other words, they believe in the advice of journalist Bob Schieffer’s late mother.

My first reaction to this is that here, by claiming that consequentialism isn’t valid because it’s not how people actually make decisions, these commenters seem to be advocating a purely descriptive definition of morality.  For me, the obvious problem with this is that it ultimately leads to confusion between moral behavior and the way people actually behave on average.  Here I’ll leave it to the reader to insert whichever go-to example they prefer of crimes against humanity committed at a particular place during a particular time period in order to show that this notion is absurd.

But maybe nobody is claiming that common human decision-making behavior actually determines which ethical framework is valid.  Maybe their point is that the tendency of folks to act according to (non-utilitarianism-based) impulse in most aspects of their lives shows that the way they think about voting doesn’t contradict their ethical worldviews in the way I brought up in the open thread comment.  After all, if humans don’t in fact generally rely on consequentialism to make their decisions, then there’s no apparent contradiction when they say they’ll vote in whichever way makes them feel better or for whichever candidate better reflects their values.

To respond to this, I have to go back to the ultimate reason why I identify as a utilitarian, which I’ll do my best to explain briefly even though I can’t give an ironclad argument in its favor.  (Although, one shouldn’t expect a complete “proof” of any ethical system, since concepts of “rightness” and “wrongness” can’t be introduced without some axioms.)

The best personal explanation I can come up with is that utilitarianism seems like the only system for deriving ethical statements that has a completely coherent and self-contained definition, modulo the somewhat open-ended concept of “well-being”, or utility.  Therefore, when we humans consciously justify our decisions, we tend to imply in our explanations that we made the choice which led to a net increase in utility.  When we argue about whether our decisions were right or wrong, it boils down to conflicting opinions about which outcomes actually increase/decrease utility, even as the assumption that we all want to maximize utility is taken for granted.  So even impulsive decisions like choosing to stay in bed an extra twenty minutes after one was supposed to get up are either not justified at all (“I shouldn’t have stayed in bed late, but my tiredness just sort of took over”) or justified as having increased utility (“I stayed in bed late because it felt better for me, and it was worth it because of X, Y, and Z”).  I’m not saying that such decisions are made in the first place according to utilitarianism.  I’m saying that if they are consciously justified afterwards, they will be implicitly justified as actions which were likely to result in the greatest net change in well-being.  In my opinion, this is because such justifications form the only chains of reasoning which remain completely meaningful.

Yes, some people very deliberately take a non-utilitarian stance.  For instance, many believe in a god or gods as the source of all morality, and hold that “God forbids it” is reason enough not to do a particular thing.  But when pressed on exactly why God would forbid that particular thing, either the chain of reasoning must stop at “He/She/They has mysterious ways” or some sort of argument which appeals to something apart from the divine (“God says that stealing is wrong!  Why does He forbid it?  Well, how would you like to be robbed of things which you worked hard to get?  [etc.]”).

So yeah, I do think that most people, when they are calmly thinking over their own choices and not in the midst of acting impulsively, instinctively rationalize what they do in utilitarian terms.  They choose not to steal because it would do harm to the person stolen from, as well as contribute to societal instability where private ownership is concerned.  They choose to recycle because it’s better for the planet which in turn benefits every living thing on it in the long run.  They might even prefer a certain political candidate because their policies would be better for the economy and therefore increase the well-being of people within their constituency.  So my initial concern still stands: why do so many seem to back away from this sort of rationalization when considering their voting behavior?

(I’m happy to admit by the way that I see certain limitations in utilitarian reasoning, especially when it comes to issues involving creation or suppression of life.  Therefore, I don’t believe that this system of ethics provides good answers to questions relating to, for instance, abortion, or population control.  I’m not sure whether that means that I’m not fully a utilitarian, or whether one could derive some enhanced set of utilitarian axioms which would solve these problems.)

II. The assumption of one-boxer-ism

A lot of the rationalists I’ve been hearing from do seem to be on the same page as I am with regard to consequentialist utilitarianism, but still disagree with me on the purpose of voting.  They say that if the only reason for voting were to directly influence a current election, then there wouldn’t be much reason to vote from a utilitarian standpoint, since your one vote has an astronomically low chance of single-handedly swinging an election.  “All right,” one may ask them, “so why do you think so many people do take the trouble to vote, and do you feel that they are being reasonable in doing so?”  One plausible answer to this may be that voting still serves a practical purpose apart from directly determining winners: elections also serve the function of polling the desires of the people.  If you vote for the candidate whose values you truly agree with, even if they are not one of the main candidates, that helps to send a message to the community of politicians which will surely do some good in the long run.
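The “astronomically low chance” claim is easy to make concrete with a back-of-the-envelope sketch (my own illustration, not anything from the linked thread): model an electorate of n voters as independent fair coin flips, in which case your ballot matters only in the event of an exact tie.

```python
from math import comb, sqrt, pi

def tie_probability(n_voters: int) -> float:
    """Chance that n_voters (even) split exactly 50/50 when each
    independently votes for either candidate with probability 0.5 --
    the only scenario in which one extra ballot is decisive."""
    half = n_voters // 2
    return comb(n_voters, half) / 2 ** n_voters

# By Stirling's approximation, P(tie) is about sqrt(2 / (pi * n)),
# so it shrinks like 1/sqrt(n) even in this most favorable model.
for n in (1_000, 100_000):
    exact = tie_probability(n)
    approx = sqrt(2 / (pi * n))
    print(f"{n:>7} voters: P(tie) = {exact:.2e} (approx {approx:.2e})")
```

Even this is generous: if the electorate leans predictably toward one side rather than sitting at an exact 50/50 coin flip, the tie probability collapses far below these figures, which is precisely why the purely causal case for voting looks so weak.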

While I agree that voting does serve this purpose, and it might even be my main consideration if for instance I lived in a solidly non-swing state of the US, I still hold that a lot of the time it is trumped by the purpose of directly swinging current elections for the reason which I articulated in the afore-linked comment thread:

[P]eople mostly seem to understand the whole Prisoner’s Dilemma idea that if you decide to do something for a reason, then you should assume that many other people are making that same decision for that same reason, and that en masse voting is extremely effective.

In other words, I strongly believe, or at least some instinct inside of me compels me to strongly feel, that I should act in such a way that the best outcome might be brought about if all other like-minded people also act in that way.

It turns out that attempting to justify this strange conviction that one should act as one would like all like-minded people to act is tricky and runs into potential paradoxes.  This conundrum is encapsulated in Newcomb’s Paradox (of which the famed Prisoner’s Dilemma is a variant).  Like I said above, I haven’t gotten around to researching any of the volumes of argument on both sides of this problem.  I have read Eliezer Yudkowsky’s introduction, and someday I hope to take a look at his lengthy paper on it.  I would worry that only having read Yudkowsky’s analysis might have biased me towards his one-boxer position, except that it’s sort of clear that deep down inside I’ve been a one-boxer all along.  This is because the one-boxer position is the one corresponding to the “cooperate” choice in the Prisoner’s Dilemma, or the “vote so that like-minded people also voting that way would achieve the best outcome” choice in our Voter’s Dilemma.  And even though on close inspection it seems very non-trivial to justify, I see now that my whole life I not only felt convinced of it down to my bones but had been assuming that all reasonable people felt the same way.  In other words, it never occurred to me that anyone would argue against the notion that voting is good on the individual level because there are positive consequences when large groups of people vote a certain way, just as littering is bad on the individual level because there are negative consequences when large groups of people litter.
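For readers unfamiliar with the setup, here is a minimal sketch of the expected-value arithmetic in the standard Newcomb payoffs ($1,000,000 in the opaque box if and only if the predictor foresaw one-boxing; $1,000 always in the transparent box).  The dollar amounts and the `accuracy` parameter are the conventional illustration, not anything specific to the discussion above.

```python
def expected_payoff(one_box: bool, accuracy: float) -> float:
    """Expected dollars for each strategy, treating the predictor
    as correct with probability `accuracy`."""
    if one_box:
        # Predictor right -> opaque box is full; wrong -> it's empty.
        return accuracy * 1_000_000
    # Predictor right -> opaque box is empty, keep only the $1,000;
    # predictor wrong -> both boxes pay out.
    return accuracy * 1_000 + (1 - accuracy) * 1_001_000

for acc in (0.5, 0.9, 0.99):
    print(f"accuracy {acc}: one-box {expected_payoff(True, acc):,.0f}, "
          f"two-box {expected_payoff(False, acc):,.0f}")
```

Setting the two expressions equal gives a crossover at accuracy 1,001,000 / 2,000,000 = 50.05%: a predictor barely better than chance already makes one-boxing the higher expected-value choice, which is part of why the intuitions on both sides feel so strong.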

Currently the topic of Newcomb-like problems occupies roughly the same position for me personally as the topic of free will did about 8 or 10 years ago: it’s a problem for which I feel some strong intuition, but whose implications I haven’t yet managed to fully wrap my mind around or distill into a clear position, and which I firmly believe has highly relevant real-life applications.  How to vote rationally is an obvious example of them.  See, for instance, this article which more or less argues a more sophisticated version of my position.

But yeah, I feel this way on an instinctual level, so deeply that I’ve been willing to put in significant time and effort in figuring out how to vote from abroad and why my faxed-in ballot apparently wasn’t legible on the first take and so on… all out of this weird faith that my willingness will somehow “make” other people currently in my situation find the same willpower.

But intelligent people don’t all think the same way in Newcomb-like situations.  This fact helps to explain a lot of attitudes about voting which appear irrational to me, and thus does give a partial answer to my original query.  Of course it does not help me to truly understand how such attitudes aren’t still, well, irrational.  Understanding that may require me to change my strongly-felt-but-vague positions on things like Newcomb’s paradox.  I don’t know whether this is an impossible feat or whether a clever enough argument (along with my becoming a clever enough person) would be enough to accomplish it.

III. “Immoral” voting

There is another small aspect of the “vote only for candidates you actually like” attitude where I think I can offer a little more insight.  I have noticed that some people go beyond just saying they don’t want to vote for any candidate that doesn’t meet their moral standards; they claim in fact that it’s downright wrong to vote for someone you don’t genuinely like.  I’ve heard language like “going against my morals” used to describe holding one’s nose and casting a ballot for the lesser of two evils, sometimes by those who choose to do it anyway.

I first want to be a little on the pedantic side and fault those who think that lesser-of-two-evils voting is immoral but wind up doing it anyway for being inconsistent.  Technically, I don’t see actions as being absolutely ethical or unethical in and of themselves; it is choices of certain actions over other actions or inaction that can be labeled as “right” or “wrong”.  If something is immoral, then that means that one shouldn’t make the choice to do it, period.  Or, to state the contrapositive: if one chooses to do X, then that means that X is more moral than other available actions or inaction, and therefore one’s choice was moral.  And although this criticism doesn’t directly apply to those who believe that voting for the lesser of two evils is immoral and then don’t do it, I think it still underscores some of the fuzzy thinking behind a lot of the sentiment against lesser-of-two-evils voting.

Secondly, in trying to put myself in the mind of someone who thinks that voting for a detestable candidate in order to oppose someone even worse is “going against their morals”, it occurred to me that there’s some sneaky variant of the “causal agency implies blameworthiness” (related to “is-versus-ought”) fallacy going on here which I made a point of in my post on “multivariate utilitarianism” (you have to scroll all the way down to subsection III(D), sorry).  It’s tempting to feel that if you voted for a bad presidential candidate, then you share some portion (however tiny) of the blame for them winning.  After all, you made a free choice which contributed to an unpleasant result which would not have occurred if you and other like-minded people hadn’t made that choice.  But that’s ignoring the fact that a decision between two undesirable options was foisted on you by circumstances, circumstances which were caused by other parties.  And so the brunt of the blame shouldn’t necessarily fall on you.  In fact — and this is one key difference between this situation and the ones I discussed in the post linked to above — you had no better options, so really none of the blame should fall on you.  Still I suspect that the idea that it’s inherently immoral merely to vote for an unattractive candidate has some of the same misconceptions underpinning it as the whole “causal agency implies blameworthiness” thing has.

IV. My endorsement on how to vote in 2016 (and in general)

It’s finally time to stop beating around the bush.  I chose the words of this section heading carefully: I want to describe how I think one should vote in elections in general (at least in countries like America which have a strong two-party system), not whom to vote for.

Here at Hawks and Handsaws, we are firmly against imposing our own personal political convictions on readers.  Therefore, I will illustrate an example application through a purely hypothetical situation.  Let’s say that we have a presidential election in which one candidate, whom we will denote by H, is a shrewd and very able politician mired in a corrupt political establishment who has a lot of potential skeletons in their closet and who is somewhat hawkish and not especially idealistic, in contrast to another politician we will call B who was their main opposition in their party’s primary election.  Let’s say that the opposing candidate in the general election is someone whom we will call D, who has never been a politician and generally proves themself to be a complete buffoon by repeating mostly-nonsensical platitudes with almost no actual substance behind them which yield not the slightest evidence that they understand anything about the challenges faced by their countrymen, who might be more hawkish than their opponent but you can’t really tell because their platform seems to be all over the place, and who on top of that has risen to popularity within a certain subset of the electorate by repeatedly producing outlandish bluster seemingly calculated to fan the flames of anger and bigotry.  Let’s say that you dislike both candidates H and D, but have to admit that D would be a considerably worse president than H would, although you would have strongly preferred B.  Then I recommend the following:

  1. Rewind back to the primary election that took place in your state between H and B.  You should vote for B in that primary if and only if they seem like the best choice after taking several things into consideration, including B’s likelihood of beating whomever the opposing party nominates, as well as B’s probable effectiveness as president.  You should not base your choice purely on the fact that B seems like a better person with better values.
  2. In the general election, no matter how much you may hate H, as long as you’re convinced that D is substantially worse, you should vote for H unreservedly and with a clear conscience.  No voting for third-party candidates even if their values align with yours much better than H’s do.  And no avoiding the polls altogether.  As a general rule, whenever you perceive a significant difference in attractiveness between the candidates in an election, then from the one-boxer utilitarian standpoint, voting is imperative.
    (Note: this general idea is often articulated as “remember, a vote for a third-party candidate is a vote for D”, which is incorrect not only literally but also in the sense that really, a vote for third-party is equivalent to half a vote for D or to throwing away one’s vote altogether.  By symmetry, members of the pro-D camps will often claim that “a vote for third-party is a vote for H” when again it makes more sense to consider it as half a vote for H.  The fact that both can’t be true simultaneously is itself proof that neither should be taken quite at face value.  But obviously I agree with the underlying sentiment.  (Further note: of course I’m making the simplifying assumption in all of this that all we care about is directly affecting the current election; as I’ve acknowledged above, there are times when it makes good sense to vote third-party.))
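The “half a vote” bookkeeping in the note above can be spelled out with a toy margin calculation (my own illustration of the arithmetic, with hypothetical labels):

```python
def margin_change(choice: str) -> int:
    """Change in H's margin over D (H votes minus D votes),
    relative to abstaining, produced by a single ballot."""
    return {"H": +1, "D": -1, "third_party": 0, "abstain": 0}[choice]

# Relative to voting for H, a third-party ballot moves the H-D
# margin by 1 toward D, while an outright D ballot moves it by 2.
# Hence a third-party vote is "half a vote for D" (and, by the
# same symmetry, half a vote for H from the other camp's view).
assert margin_change("H") - margin_change("third_party") == 1
assert margin_change("H") - margin_change("D") == 2
```

Since the slogan “a vote for third-party is a vote for D” would require the first difference to equal the second, the margin arithmetic shows why neither camp’s version of it is literally true.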

The purpose of voting is not to serve as a form of self-expression, or of cheering for the team that you like.  It is not (in America, at least) even primarily a way to communicate to the political world what your ideal candidate or platform would be, except in certain circumstances where the overall result is a foregone conclusion.  The purpose of voting is to influence which individual out of a very small group of finalists will be elected to a position of significant power.  Yeah, I know that what I’m preaching is based on convictions which I haven’t been able to fully justify.  But even in the absence of solid argumentation, I’m still allowing myself to stand on my soapbox and proclaim how I feel about voting, on the eve of what looks to me like a pretty crucial election for America and for the world.

And with that, I leave you with a variation on the wisdom of Bob Schieffer’s mom: go vote; it’ll make you feel like a good one-boxer consequentialist.