The good, the bad, and the emergent neutral

[Content note: I have had an especially hard time expressing myself clearly in this post; I feel as though I’m currently lacking in the appropriate vocabulary for explaining what I’m groping towards, and I have the nagging feeling that this exact same concept is already treated properly in many sources.  If anyone can point me towards the correct way to talk about this, I may well choose to rewrite parts of this post in clearer language; I might even be tempted to change the title.]

During parts of college and early graduate school, I got mildly involved with some atheist/agnostic/some-general-category-of-nonbeliever clubs at my universities, primarily out of an appetite for hanging out with other students who wanted to ponder the kind of controversies that most interested me at the time.  At around the tail end of this period, when I was preparing to extricate myself from my second university’s atheist society for good, I was befriended by a young evangelical Christian who was about to leave the university and start a full-time career in missionary work.  His recent approach had been to reach across the aisle and engage with his ideological opponents: the campus atheists.  He had attended several round-table discussions of the atheist club (which took some guts, I think; it’s not the easiest thing in the world to be the lone voice of Christianity surrounded by a bunch of college students who are very enthusiastic about criticizing it), and he had offered to take several members out to lunch for further one-on-one discussion.  I was one of the people chosen, despite (or perhaps partly because of) the fact that I identified as somewhat of an outsider and not a regular member.  Several separate arrangements were made; the atheist club registered mild curiosity about what conversion tactics our evangelical friend might employ on these lunches; and the campus Christian organization prayed for a good outcome to these meetings.

I probably wouldn’t have agreed to sit at a table one-on-one with a man with such a sincere passion for trying to save nonbelievers if it weren’t for the fact that I knew this particular guy to treat everyone with a very relaxed and laid-back demeanor.  And I wasn’t disappointed: we spent most of the lunch casually chatting about life in general and the atheist club in particular, with me registering a lot of complaints about how I didn’t feel I really fit in anymore with the general ethos and social dynamic of that space and him assuring me that he often felt similarly about the Christian community on campus.  When we were getting near the end of our food, however, he leaned forward and asked me a question, much more basic and straightforward than I’d really been expecting:

“As you know, Bertrand Russell wrote an essay called ‘Why I Am Not a Christian’.  Leaving aside Russell’s views, I’m curious as to how you would answer that question.  Why are you not a Christian?”

I was slightly taken aback and quickly ran through several tacks I could take in responding.  Just a few years earlier in college, arguing over religious views was a regular hobby of mine, but by then I was feeling a bit like I had passed my peak in that department; I was burned out from trying to cut to the root of the drastically different premises that tend to guide skeptics and believers, and I had become overwhelmed with the limits of my own philosophical knowledge and debating abilities.  So while at one time I might have embarked on a defense of my methodological naturalism and how it made the hypothetical existence of a deity inaccessible and resurrection miracles improbable and so on, I decided instead to plead incompatible gut feelings.  I stated that from every Christian apologist, including him, I would inevitably hear some specific belief professed whose reasoning went so completely against my intuition that I couldn’t help but conclude that Christianity was just not for me.

He asked me, well, what specific belief had he professed to make me feel this way?  For this I had an immediate reply.  He had recently been answering a series of challenges at an atheist round-table discussion on Original Sin and the Garden of Eden, mainly in the vein of: how exactly did the world change from being completely free of evil to containing suffering and death?  Our local evangelist had informed us that the fundamental rules of physics must have been different somehow before the Fall of Man, so that the old rules didn’t allow anything unpleasant whatsoever, but the new rules (in particular, the Second Law of Thermodynamics) are the ones we know today which make things like pain and death sadly inevitable.

As I told my unlikely lunch partner now, I didn’t exactly have a logical rebuttal to this claim, but when somebody says something that feels absurd on such an intuitive level for me, I feel like I have no choice but to get off the train.  Interpreting physical laws in this way, in terms of “positive” and “negative” not with a precise mathematical meaning like for electric charge but in the sense of humankind’s feelings, somehow goes directly against my core intuitions about how to model the universe.  And maybe there are other arguments out there for the concept of Original Sin that would be slightly more convincing to me.  But if this is generally the best that Christian apologetics can come up with, then most efforts to help me come around to that flavor of religious belief system are probably futile.

How exactly does one divide up physical or biological phenomena into “good” and “evil”?  Is pain inherently “bad” even though without it we would have a much harder time noticing physical harm to our bodies?  Or is the physical harm itself a product of the Fall of Man?  Maybe before the Fall, God had constructed all bodies to be impervious to sharp objects and severe force?  Maybe things like infections are a part of “evil” as well, even though they can be described by modern science as the flourishing of bacteria?  Evidently, a universe “free of evil” would have been free of bad things only for humans and cuter-looking non-predatory animals and certainly not bacteria or the vermin that feed on the deceased.  And of course, if death is inherently “bad”, so that there was no such thing as death or the Second Law of Thermodynamics before the Fall, then it’s incredibly difficult to fathom the specific mechanisms that govern that universe without simply describing it as “magical”.  One can cry “Lack of imagination!”, but doesn’t it kind of seem like the onus is on the one who claims that our current physical laws are a direct result of Mankind’s collective choice of going against God to offer a description of the original laws?

None of the above can really be judged as a watertight argument.  Some sort of model can be constructed that answers all those questions, together with a justification for assuming initially that the universe was created by an omnibenevolent deity and then setting out to explain the way the world works today.  The above plea is essentially an appeal to a different set of axioms from which I feel really unable to divorce myself.

Again, it may seem unfair to home in on the claim of this particular Christian as a reason to reject Christianity altogether.  But I saw it as a low-hanging example of a much more general premise behind a lot of religious thinking (not just in Christianity) which is profoundly incompatible with my intuition.  I’m a methodological reductionist: for me the only kind of approach that makes sense is one which explains the things we see by breaking them down in terms of the simplest, most basic possible elements.  For me, the simplest elementary component parts that underlie natural or social phenomena don’t come in flavors corresponding to human motives, or human emotions, or “good” or “bad” as humans perceive those qualities, or anything directly from the point of view of humans.  But for many of the defenders of theism, elementary component parts appear in exactly those forms — or at least, they are described directly in terms of motives, emotions, and “good” versus “bad” from the viewpoint of some conscious Being.

This whole thing is of course a particular case of a much more general kind of fallacy: the tendency to interpret things on what we may call a “human level”, in terms of emotions and conscious intentions, rather than as emergent phenomena arising from the simplest possible mechanics like interactions between elementary particles according to mindless laws.  (Note that the latter mindset doesn’t require our modern knowledge of particle physics, just a conception of some kind of indivisible particle or “atom” which can be used for current models.)  The clearest description of this I’ve encountered is in this essay on mentalism versus mechanism; it might be characterized as some form of mind projection fallacy as well.  As far as I’m concerned, it is responsible for many elements not only of modern conventional religion but of mythology, general superstition, and common fictional tropes (e.g. anthropomorphic animals and objects).  But anyway, that’s a digression from the more specific thing I want to talk about today.

By the time of my meeting with the Christian evangelical described above, I had come to realize that what bothered me about most organized religion was not the purely metaphysical disagreement over the existence of gods or an afterlife.  It was the common religious mindset of interpreting worldly events as happening “for a purpose” and “according to plan” and somehow ultimately for the greater good, where “good” is interpreted in a very human-centric sense.  I believe in objective Right and Wrong as abstract properties of conscious actions, but not as inherent qualities of material or as substances woven into the fabric of the cosmos, as they seem to be perceived under the theistic mindset.  So while my answer to my lunch partner was clumsy (rather clumsier than the way I’ve written it here), I guess I was earnestly grasping at the crux of my issues with religion in general and conservative Christianity in particular.

But my object today isn’t to criticize religion, even though I’m over 1,500 words in and that seems to be mostly all I’ve done so far.  As usual, my primary interest here is not in analyzing all-encompassing worldviews but in picking apart the judgments and arguments commonly used in everyday situations.  I brought up this whole “wanting to interpret neutral emergent phenomena in terms of ‘good’ or ‘evil’” thing because it’s a mentality I find myself complaining about in the midst of a lot of the discourse I see.  Mostly I see it appearing in the form of “X is clearly ‘good’ so it can’t be one of the byproducts of that thing we’re fighting against” or “Y is clearly ‘bad’ so we can’t possibly judge it as a side-effect of the idea we’re championing”.

I saw it when a friend of mine opined that maybe there are certain kinds of racial generalizations that shouldn’t count as “racism” because they’re positive.  Racism is clearly something we’ve agreed we should fight against, so if a racial stereotype seems too “nice” to drop in the “bad” category, it doesn’t make sense to brand it as racism.  My response is that stereotypes (racial or otherwise) actually aren’t inherently positive or negative as soon as we dig below the surface qualities of “nice” and “mean”.  For instance, consider the classic stereotype of Asians being good at math.  That may seem harmless for Asian people at first glance. But what happens when people start taking it seriously enough that non-Asians automatically expect the Asian person they just met to excel at math and feel inclined to tease them for being a “bad Asian” on finding out they don’t, or if an Asian person feels bad for not being especially good at math, or if non-Asians start routinely expecting Asians to help them with math problems, or if negative stereotypes that are already attached to “math people” become associated with Asians?  The fact is that all flavors of stereotyping are ultimately harmful in that they involve over-generalizations among certain categories of people and lead to invalid assumptions that are guaranteed to create pain in the long run.

I also see it in a lot of rhetoric against some abstract entity such as Capitalism, The State, or (most often in my circles) The Patriarchy, where everything that initially reads as positive (or positive for a particular group perceived as oppressed, or negative for the oppressor class) is inherently “good” and so can’t possibly be connected to one of those evil institutions.  Not that anyone is literally making “can’t possibly” claims. I’m just saying that when I notice people being reluctant to view some event or situation as a byproduct of one of these societal forces, seemingly because it doesn’t superficially fit the general pattern of oppression as they understand it, I connect their reluctance to this fallacy.

But this can also be an issue when people consider and analyze their own choices, and when suggestions are given to people about themselves.

Everyone wants to do the right things and believe positive assertions about their policies and choices.  When we choose values to live by, we think of those chosen values as “good”; it’s natural to assume that when we perform actions in the name of those values, those actions are also purely “good”.  If something “bad” about them is brought to our attention — some possible downside to our approach, or something detrimental that may indirectly fall out of it — it’s instinctual to either jump to denying it, or to accept it but conclude that we weren’t acting in accordance with that good value after all.  It’s easy to lose sight of the fact that being skilled at achieving many of the personal characteristics that seem positive (e.g. honesty, open-mindedness, generosity, self-reliance, etc.) actually involves a lot of give and take.  Actions done entirely in the name of value X a priori can be characterized only as “in accordance with value X”, not as “inherently good”.

For instance, suppose a teacher makes a point of being super, super clear about everything they explain, outlining the smallest steps of the problems their students are trying to solve.  After all, isn’t clarity a quality to strive for as a teacher?  Well of course, this is probably a net beneficial practice up to a certain point, but one can imagine that it might become net harmful if taken too far.  Someone may comment on that teacher’s style by remarking, “You really make every step of the students’ work very clear for them!”  The teacher would almost certainly react warmly: “Yes, that’s exactly what I was trying to do because I think clarity is important.”  But suppose someone instead commented, “You really spoonfeed your students.”  Now that comment sounds negative, and the teacher’s kneejerk reaction may well be to angrily deny it (“How can you say that? I’m just trying to make sure they understand every step!”).  Or instead, the teacher may react by accepting the criticism as a sign that their approach is entirely wrong (“I thought I was being helpful by trying to be clearer, but I guess that strategy only hurts the students, so I should abandon it”).  Obviously both reactions are misguided: the ideal response would be something more like “I still think it’s important to strive for clarity, but I guess it’s possible to take that too far and I should consider whether that’s what I’m doing right now.”  Still, I think a lot of us have some difficulty arriving at that response.

But notice that the two comments essentially point out the exact same thing, namely the fact that the teacher takes unusual pains to make things super clear; the only real difference between them is connotation!  This is possible because whether the teacher’s policy is good or bad depends on how one looks at it, and there’s more than one way to look at it.  The only really inherent property of the teacher’s behavior that we can judge immediately is that it’s pro-clarity, for the obvious reason that it’s performed by a conscious agent whose aim is to maximize clarity.  But even though we usually think of clarity as a positive thing, we can’t conclude directly from being pro-clarity that the policy is entirely right.  Or to put it another way, the more pro-clarity you are, the more likely it is that your students will understand what they’re supposed to be doing in your class (a positive thing), but also the more likely it is that they will get spoonfed to the point of not knowing how to figure things out for themselves (a negative thing)… and that’s just something you’re going to have to accept about pushing yourself in the direction of greater clarity as a teacher.

I have a hard time with finding the vocabulary necessary to express my point clearly, and I realize that the connections I’m trying to make here might seem poorly justified.  Moreover, the type of thinking I’m finding fault with in this post falls under such a basic fallacy that I imagine it’s listed as a red flag somewhere in most rulebooks for rational rhetoric, if only I knew where to find it.  Even so, I thought it would be worthwhile to explain my point of view on the “assuming inherent ‘good’-ness and ‘bad’-ness” issue here, in one place that I can refer to when it comes up later.  I don’t know whether my tendency to insist on writing things down in my own terms (even when they’re likely to be explained in much better terms somewhere else) can be classified straight-up as a positive or negative quality, but for the moment I’m not going to bother trying to suppress it.


My guide to assessing agency

[Content note: mostly personal musing I wanted to get out of my system before finally turning the page on this general topic.  Perhaps anticlimactic, but I hope this succeeds in tying some things together.]

My last three posts here have focused on the grand metaphysical debate over free will.  Well no, not really.  For the most part, my primary concern hasn’t been in directly tackling the question of the existence of free will, but in treating the practical consequences of interacting with people on the assumption that a certain degree of agency lies behind their choice-making.  And that’s not mentioning other essays I’ve put up here (also about degrees of agency, or which agents get moral responsibility, etc.) which generally relate to this.

As far as I’m concerned, it’s perfectly justified for me to keep harping on this one general area.  To me, the question of how to evaluate choices and in what way we could do better (or worse) necessarily needs to take front and center stage if we seek to arrive at normative truths (i.e. answers about the morally optimal paths), and that is in large part what I’m interested in nowadays.  However, I feel that before I can put the topic to bed (for now; I make no promises of not bringing it up again!), one final aspect of it still deserves discussion.

You may have noticed that in all the thousands of words of my last three blog posts, I managed to almost completely avoid the question of my own personal relationship to the contrasting approaches of assuming high agency (libertarian free will) or low agency (deterministic mechanisms behind choices), that is, whether I usually prefer the high-agency (free-will-ist) goggles or the low-agency (determinist) goggles.  Well, except of course that I’ve denounced the practice of always relying on either pair of goggles, but I haven’t really gone beyond that into whether one side is preferable (either to me personally or for everyone to aim for) or whether there is some practical approach to follow for evaluating the level of agency behind someone’s decision, etc.  That is what I want to write about today.  Although I’m afraid I’m not going to arrive at any really solid or satisfying answer in this essay, I feel that this sequence of posts would still be lacking something if I didn’t at least try to talk about it.

Let me start with my initial naïve solution, which goes back to the preferred framework I outlined here, where our evaluation of agency is for all intents and purposes equivalent to our evaluation of how we should react to someone’s decision.  The “algorithm” is stated in very rough terms as follows.

The degree of agency behind a given decision is proportional to the likelihood of such a decision being altered by treating it as coming from a position of high agency.

Or in other words, someone contemplating doing X should be assumed to hold a high level of agency if and only if telling them “X is bad, mkay?” (perhaps along with supporting arguments for X being bad) is likely to change their decision.

Note that by proposing this “answer”, I’ve added exactly nothing to the discussion, beyond what I said in the above-linked post when I characterized free-ness of choices that way in the first place.  How do we know we’re reliable at making assessments of the dependent variable [likelihood of decision being altered by reacting as though it comes from a given amount of agency]?  What should we expect the overall distribution of results to look like?  Should we expect to find that choices are fairly high-agency in general, or fairly low-agency?  I have failed to provide answers to any of this.

The above attempted solution goes hand-in-hand with another general and even more useless assertion which is almost its corollary: there is a significant range of degrees of agency behind human actions, and it takes objective intellectual honesty to assess them properly.  In other words, be careful not to assume one extreme or the other.  This pretty much follows tautologically from my recent rants about the potential traps coming from high-agency goggles and low-agency goggles, or even just from my much more general spiel about keeping one’s mind open to “inconvenient” possibilities.  Take off your favorite pair of goggles once in a while (or make sure to keep switching them), open your mind’s floodgates to discomforting gadflies at least a crack, etc.  Same old stuff.

None of this answers the real question of which pair of goggles yields a map of the world that is actually closer to the territory, or whether I personally am more of a high-agency goggler or a low-agency goggler.  That is for me a tricky one, and I’m obviously still doing plenty of self-reflecting over it, but I’ll attempt a mildly stream-of-consciousness account of my personal approach below.  Spoiler alert: I still won’t come to any conclusion which gives a concrete answer going beyond the tautological one I stated just now, but I hope maybe I’ll arrive at some small insight along the way.

I reached the level of maturity where I began to form my own full-blown independent belief system around high school age, which I think is fairly typical.  Starting with that time, I can roughly break up my life into three major segments excluding the one I’m in now (I try to avoid extensively analyzing periods of my life until they’re well over): high school, college, and graduate school.  Although I wouldn’t characterize these stages precisely in terms of my views on free will or which pair of goggles I was wearing at the time, this partition provides a useful frame of reference for describing my personal journey.

I started out leaning very strongly in the direction of low-agency.  Determinism was simply the correct way to model the universe.  It followed directly from naturalism and the scientific method, which ran counter to irrational things like religion and was the mode of rationality itself as far as I was concerned.  And viewing people’s behavior in terms of their circumstances was not only the most reasonable view, but also the most moral one — it was the very definition of compassion.  The more fortunate among us should always extend a hand to the less fortunate; it was as simple as that.  If I ran across someone who was evidently facing some struggle or challenge, I was inclined to think it was my obligation to help them or at least to show them mercy for whatever they might do to me, provided it stemmed directly from the battles they had to fight.  I was empathetic (or maybe it’s better to say, sympathetic) to an extreme, always trying to see other people’s hostile positions from their point of view, always playing devil’s advocate when I saw someone else lionized or demonized.  At times it drove some other people (e.g. my parents) crazy.

That was me by the time I got into college.  I’m massively over-simplifying things here, of course, and when it came to some issues, because of my relative immaturity and ignorance I was far more judgmental of others and unaware of my own privilege than even a few years later.  But I think it would be safe to call me a low-agency goggler at that time.

I consider my college years a period of profound personal change unrivaled by any other part of my life, and one of those changes can definitely be expressed in terms of this low-agency/high-agency thing.  I entered university as an unabashed champion of the determinism-leaning attitude and the flavor of sympathy and compassion that went with it, but left the undergraduate phase of my university life with a dramatically (though not entirely) different perspective.

The change came about as a result of a series of social experiences I had during my first years living away from my family.  Without going into any details, I’ll just say that I found myself at close quarters with some people who not only steadfastly clung to low-agency goggles themselves, but were struggling to get by under circumstances more difficult than my own.  Their low-agency-gogglesism didn’t seem to be providing them a net advantage; rather, it was further entrenching them in their ruts.  Even worse, this ideology, after a few tempting modifications that I described in my anti-determinist-goggles essay, seemed to provide these people with excuses to attack those nearest them who appeared to be more fortunate.  This led to some destructive fighting between themselves and certainly plenty of nastiness that went my way, as I was clearly perceived as more fortunate than they were (I would claim that there was indeed a disparity but that their view of it was exaggerated and distorted by the low-agency lenses they gazed through).  Moreover, my own tendency to wear low-agency goggles wasn’t doing me any favors and ultimately wasn’t even doing them any favors.  I came to recognize that my instinct to act as selflessly as possible to those near me who were struggling may have helped them in the short term, but it opened me up to manipulation and sometimes full-on bullying, and eventually some of them dragged me down further than I was able to pull them up.  By the end of this period, when I was switching schools to pursue my graduate degree, I had realized that I seriously needed to rethink this part of my creed.

My general belief system mostly hasn’t changed any further.  Starting from that point and continuing through the present day, I have kept up a conscious effort to be very guarded against indulging those people who both appear to be wearing the low-agency goggles and are likely to see me as just “free” enough to use to their advantage.  I have been mostly successful at this, although most of my closest friends have certainly been the type whose worldview leans in the determinist direction.  However, my social experience in graduate school put me close to a few people of the opposite mould to this, both in the sense of being high-agency leaning and of having more favorable circumstances to work with.  At first I remember just how refreshing it was to actually feel envy for another person again, never mind how destructive I knew that emotion could be.  But eventually I was reminded firsthand how it felt to be oppressed from the other side — by the high-agency gogglers, the ones who felt threatened on multiple levels by my struggles and ineptitudes and told themselves it was for my own good as well as theirs that they chose to be harsh with me.  A lot of the fodder for my anti-free-will-goggles essay was a product of knowing several such individuals closely enough to be able to see (at least I believe) parts of their reasoning process.

I’m sure it goes without saying that there has been a very obvious correlation between the whole high-agency/low-agency split and the backgrounds and circumstances of different people.  In general, those who have seemed better off than I am have treated me according to what they saw through high-agency goggles, and those who have seemed worse off through low-agency goggles.  I see this as another example of how each of our worldviews is influenced (even sometimes by self-modification) by the situations we’ve found ourselves in.  And probably I too have slipped into some of the failure modes of each side depending on how high on the ladder I sit relative to the people I’m dealing with.  But because of everything I’ve seen, I’ve made a very deliberate point of being as self-aware as possible about these failure modes, which is in large part what led to my writing extensively about them.

But I’ve drifted a bit off topic.  What does all of this mean for the question about levels of agency?  Okay, so I started out as a low-agency goggles-wearing adolescent, then took a sharp turn at the dawn of my adulthood and wound up trying to stick to a straight and narrow middle road or whatever.  My original intention was to say in this essay that at my core I’m obviously still on the low-agency side and the only difference is that nowadays I go out of my way to qualify it.  In some ways this interpretation seems to check out: for instance, I am passionate about certain generally liberal positions (such as prioritizing deterrence and rehabilitation over harsh punishment) while pretty much my only nods to traditional conservatism seem to be careful qualifications to liberal positions (maybe the welfare system could be abused or lead to perpetuation of poverty).  At the same time, one of my biggest concerns currently is with what appear to me as excesses in the low-agency mindset among younger generations of liberals, and I would say that this critical attitude has gone beyond the “careful qualification” level.  So the landscape is starting to look a little confusing and my place in it is hard to characterize as being by default on one side or another.

At the same time, there is nothing qualified about my commitment to empathy, which I wrote about a year ago and continue to stand by, perhaps with fewer reservations than I hold for anything else I’ve written here.  And isn’t empathy, almost by definition, a way of understanding, by viewing conscious behavior in terms of its underlying mechanisms?  (It can practically be seen as the human side to my pro-science, anti-supernaturalism beliefs which I also feel very close to my core.)  It’s hard to shake off the intuition that to be empathetic towards other people is to cling steadfastly to a low-agency model.  This seems to contradict the “middle of the road” approach.

Except I can now resolve the apparent inconsistency, I think, having realized that the “empathy = low-agency” idea is subtly wrong.  And really, I should have reread my own essay on “A Principle of Empathy”, and I would have known that!

A major part of my whole thesis when I made that post a year ago was that empathy is not the same as sympathy or charity; it does not imply that we excuse someone’s behavior just because we understand what causes it.  Understanding means in particular knowing just how high someone’s agency is — maybe it turns out to be quite high, and we can determine that treating it as quite high will have the desirable effect on them; maybe applying empathy allows us to conclude in this case that someone is worthy of praise or condemnation for what they do.  My “Principle of Empathy” is not low-agency-goggles-ism in disguise; if anything, it’s objective-rigor-in-determining-how-high-agency-our-model-should-be-ism in disguise!

In the last few days another framing of this occurred to me.  Someone commented on last weekend’s post by suggesting (among other things) that compatibilism, the philosophical position on the free will debate which I endorse, is really just hard determinism in essentials, the only real difference being an altered definition of “free” which allows for a conclusion that more people will find palatable.  And this is true.  Compatibilism doesn’t really lie at an equal distance between hard determinism and metaphysical libertarianism.  It agrees with hard determinism that the concept of “freedom” that the metaphysical libertarians are chasing after cannot exist.  But compatibilism, from my point of view, is essentially determinism plus a concept of freedom that does have the useful feature of allowing one to talk about different degrees of agency, moral responsibility, etc.  (Weird random thought that occurs to me while writing: this Determinism Plus can almost work as an analog to the Atheism Plus movement that was attempted a few years ago.  Hmm.)

It is easy to associate the empathetic worldview with the deterministic one, because both emphasize prediction of events by causation.  And this association is perfectly valid.  But let’s not overlook the fact that siding with determinism doesn’t necessitate giving up on a model that allows for agency.  If “determinist goggles” were really only about determinism, then I’d happily confess to having stubbornly clung to them pretty much all my life.  But my conception of that metaphor is meant to imply an aversion to recognizing genuine choice and responsibility, which is a very different matter.

In the end, even after all this rambling and meandering, I can’t propose anything beyond the “caution against being blinded by one side or the other” idea, except for the positive assertion that there really is no concrete formula beyond it.  Actually, I think maybe this is a general truth about any conundrum for which there are multiple narratives that each have (at least partial) validity: the only meta-level a priori rule that can safely be applied is to avoid becoming enamored with one of them while being blind to the rest.

We are not gods.  We have no way to peer into the inner workings of other sentient brains to see what is driving the decisions made in them, and there’s only so far we can go towards standing outside ourselves and understanding the inner workings of our own brains.  We just have to put our best effort towards being objective, honest, and open-minded in our quest to understand these things.

I can only think of one really concrete technique that can be used towards ascertaining the truth about someone’s agency, which is best employed between two people who trust each other to be operating in good faith — we’ll call them Alex and Beth.  In order to figure out how much control Beth has over something she does, they make the following agreement: Beth will, as objectively as possible, do a little soul-searching to get the most honest possible idea of how much agency she has.  And Alex, in return, will trust and accept whatever answer Beth returns with.  (I know I’m not the first to come up with such a scheme, because I clearly remember seeing it suggested on the rationalist internet somewhere, but I can’t remember to whose credit.  I had certainly already been considering something similar at the time, though.)

Apart from that, I can only lay out a few platitudes (i.e. pleasant-sounding but valid suggestions that follow pretty much tautologically from what has already been said).

We all feel compelled to do certain things, some of which appear from others’ points of view to be very free choices, and yet we each know we have something that at least feels like free will.  Our consciousness and self-awareness come with impressions of control and responsibility.  In order to determine how to live ethically and which choice is the Right Thing To Do, we have to consider what kind and degree of control and responsibility lie at the base of actions.

But there is no completely reliable, cut-and-dried way to do this.  We are each stuck inside our own brains.  So the very best we can do is to approach such questions in the spirit of intellectual honesty and objectivity, even knowing that pure objectivity is impossible.  We have to examine not only other people’s minds but our own, remembering that even those decisions which seem entirely conscious and free are heavily influenced by outside factors, and that in many of even the most uncontrolled actions, at least a little power over them can be found beneath the surface.

We should especially keep in mind that when each of us is dealing with fellow humans who look like they’re managing worse than we are, we are more biased towards assuming high agency.  Conversely, when each of us is dealing with fellow humans who look like they have it easier than we do, we are more biased towards assuming low agency.

Above all, we are struggling to wade through this together.  Let’s try to be understanding of ourselves and towards each other.  And as long as it’s warranted by what we’ve uncovered in our quest for understanding, let’s be kind to one another, not forgetting that it is no less important to be kind to ourselves as well.

A word on politics, free will, and God

[Content note: the topic of theism as it relates to free will is something I’m deeply interested in but on which I retired from debating long ago, and I’d much appreciate hearing from someone more knowledgeable.  Much discussion of conservative religion and abortion.]

I. Free will and the political spectrum, revisited

While I’m still on the subject of the whole social conflict between free-will and deterministic explanations for things, I now want to directly readdress one of my earliest posts, where I proposed (in the final section) that the two major sides of the American political spectrum are aligned with the determinist and free-will-ist mindsets.

My main thesis there was that the Left and the Right are generally aligned respectively according to the determinist-leaning and free-will-leaning perspectives, with regard to how they treat each issue.  My experience with holding and voicing this opinion has been interesting.  On the one hand, I feel more certainty and a stronger sense of having properly defined my position in this case than with a lot of the other theses I’ve tried to defend on this blog.  On the other hand, I get the impression that most of the other things I write here are fairly non-controversial — even mundane — once those engaging with me adopt the language I’m using, whereas this idea about the political spectrum has generally been received with dissent when engaged with at all.  I’m evidently in the minority here.  Over years of running into various thinkers’ attempts at characterizing what determines Right versus Left in the political arena, I’ve seen many creative ideas (“progressivism vs. conservatism” would seem to be the most obvious model, but much more unusual ones have been proposed) but none seem that close to my “determinism vs. free will” characterization.

I wrote that other post in early 2016, and little did I know then exactly how dramatically the political situation in America (and even the entire West) would evolve during the coming months.  I do still firmly hold to the model I proposed then, with a couple of caveats.  First of all, I do not consider the recent takeover of the Republican party following the presidential election to represent the Right in America.  Maybe it’s a bit disingenuous of me to say this, given that this may well become the “new Right” or something (I’m hoping not), and besides, I remember people saying back in the ’00s, the last time the Republican party was in control of everything, that the Republicans had abandoned True Conservatism, What Happened to the Party of Reagan, and so on.  But this new administration seems to espouse some sort of monstrous distortion of the right-wing views I grew up seeing so that they are now semi-unrecognizable.  I would not characterize this platform as being somehow correlated with a libertarian-free-will model of the world; instead, I’m tempted to say that it combines the worst of both sides of the whole free-will vs. determinism battle.  But for now, I’m not interested in discussing it any further.  It’s probably not a good idea to claim anything too confidently until the situation has further stabilized anyway.

Secondly, I think I need to clarify (and maybe subtly modify) what I argued before.  I wouldn’t want to say so much that the individual stances themselves taken by the Left and Right necessarily fall on the determinist or free-will-ist side.  Rather, I claim that the rhetoric most frequently used by the Left and Right in defending them is of a pro-determinism and a pro-free-will flavor respectively (again, at least in America — despite having lived abroad for a little while I haven’t witnessed much discourse on politics in other countries).  There are certain positions that lend themselves more easily to one flavor of rhetoric than the other, and the liberal and conservative positions often do seem to be aligned in that way, but sometimes it’s hard to defend the claim that a political policy stance, stripped of supporting arguments, has a particular flavor.

For instance, I’m not sure that the pro-gun-control position (traditionally held by those further on the Left) on its own really looks that determinism-oriented compared to the pro-gun mentality.  After all, one could argue in favor of allowing freer access to guns by painting a picture of most pro-gun citizens as innocent people who feel weak and vulnerable under the threat of bad guys with guns and deserve the chance to defend themselves with maximally lethal weapons; otherwise, there’s nothing they can do to feel safe.  This would sound pretty determinist-goggles flavored: personal weakness in the face of oppressors which can’t be remedied without outside help.  And some on the pro-gun side are talking this way.  But the dominant pro-gun arguments I keep hearing smack of “guns don’t kill people; people do” and “it’s our Second-Amendment right, full stop” and “we shouldn’t be punished just because some people abuse their rights; the only solution is to crack down on those who choose to do evil” with an implied underlying attitude of “remember back when men were men and Americans were encouraged to be tough and self-sufficient instead of whining pansies?”  Meanwhile, the anti-gun folks are going on about feeling vulnerable to the threat of deranged people with guns at the expense of focusing on the personal responsibility of the shooters.  The two main sides of the gun control debate may not inherently be aligned with assumed degrees of agency, but the main arguing voices do seem to be trying to force such an alignment.

All that said, there is one element of my thesis which I felt was a bit weak and which I’ve always intended to expand on since first writing about this over a year and a half ago.  It involves those political positions which are generally aligned with religiosity.  Unfortunately, I feel long past my heyday when it comes to debating religion and in particular how theism relates to metaphysical beliefs regarding the free will problem; I probably would have done this better in college.  Still, I’d like to take a crack at addressing exactly these philosophical questions as they relate to politics.

II. My problem with evil

Ever since the 1980’s or so, the Right Wing in America has generally been a haven for those who subscribe very strongly to conservative forms of organized religion (well, at least conservative Christianity).  Accordingly, the Right has added a number of issues to their overall platform which represent the religious beliefs of many of its constituents, such as promoting prayer in schools, restricting access to abortion and birth control, keeping marriage “between one man and one woman”, and opposition to stem cell research and euthanasia.  Near the bottom of that old post about free will and politics, I listed a couple of these issues with a brief explanation of why the liberal side corresponds to determinism and the conservative side corresponds to free-will-ism.  Of course, my writing carried the implication that other issues I didn’t specifically address in that list could be interpreted under a similar framework.

I could already see some problems with this at the time.  First of all, even when the flavors of rhetoric rather than the platform points themselves are considered, the correlation with the high-agency/low-agency split looks to me somewhat weaker on first glance than with the other things on the list.  Secondly, and more crucially, drawing parallels between the motives behind these positions and assumptions regarding the free will question seems to be reading an awful lot into things when a much more obvious mechanism is handy: God.  Okay, by “God” I mean belief in the God of organized Christianity and many of the concepts (of souls, sanctity of life, etc.) that come with it.  That very obviously ties into theism vs. atheism, which is a schism between metaphysical belief systems that rivals the whole free will controversy.  But here, most people do recognize and agree on the fact that some areas of the American political landscape, especially pertaining to social issues, correspond to different philosophical (in this case, theological) belief systems.  So why don’t I just stick with that model instead of trying to cram it into another one?

Well, my short (but inadequate) answer is that I do agree with that model, but that doesn’t mean I can’t also fit it into a broader model that more generally answers the question of how individuals cluster along opposing political ideologies.  My more thorough answer requires a consideration of how traditional theism relates to the question of free will.

One of the main challenges that theists are obliged to answer is the Problem of Evil.  The classic version of the Problem of Evil can be summarized as the following question: How can there possibly exist so much evil in a universe that’s being run by a deity who is both omnipotent and omnibenevolent?  In particular, such a deity should have the power to stop us humans from doing horrendous things and should have a strong desire for those horrendous things not to be done, so why doesn’t He (or She, or They) stop us?

Throughout the past couple of millennia, numerous defenses of theism against this criticism have been proposed by religious apologists.  Before going any further, I might as well state for the record that I don’t find any of the arguments that I’ve ever heard in this vein to be fully convincing, and therefore, the traditional concept of a theistic God doesn’t make much sense to me.  It seems that there are two main approaches one can take for these arguments: assuming determinism and adopting some strange notion of morality compatible with it; and assuming non-deterministic free will and adopting some strange notion of omnipotence compatible with it.  Both, in my opinion, suffer from the same confusion over just what it means for an act to be “free” that we see everywhere in the free will debate.

A. The deterministic rebuttal

The first approach, which assumes some kind of determinism at the outset, has been taken up by some fairly conservative branches of Christianity going back hundreds of years.  I believe that Calvinism is an example, but I’m afraid my knowledge of the relevant theological history ends pretty much with that.  Anyway, in such a model, God’s omniscience is emphasized — He can certainly see everything that we will do in our lives, including whether or not we will become “saved”.  And what’s more, this determinism has a sort of incompatibilist streak to it: each of us is “predestined” to be saved or to be damned; of course this in turn determines whether we are subject to eternal bliss or to eternal torture after we die.  I guess this is an answer to the Problem of Evil because… even as machines, we still should do the right thing and are somehow still “deserving” of punishment if we fail?

I knew someone who was influenced during her childhood by this form of organized religion.  She started out as a theist but by adulthood had become a staunch atheist.  As she explained it to me, this was largely because she eventually came to feel deeply disturbed about religion, at least the kind of religion she had been exposed to.  She felt that she had free will, and didn’t particularly appreciate her actions being predestined and already known by a non-interfering deity.  By the time I knew her, her conception of theism itself was as the basis of a belief system that denied free will to us lowly humans, and that eventually became simply irreconcilable with the reality she experienced.

For my part, I was largely unfamiliar with this variety of religious ideology until she described it to me, and anyway my own compatibilist view allows for a sort of predestination that doesn’t necessarily preclude free will, but I still see obvious problems in it.  If we are all essentially deterministic robots, then why would God program us in such a way that so many of us are doomed to fail at the main objective in our earthly life, which is evidently to be saved?  And how is it fair to damn us to Hell when we pre-programmed robots do fail?  Something doesn’t feel quite morally right about the deity in charge of this rather fatalistic-looking form of existence.

B. The free will rebuttal

The second approach is one which I’m far more familiar with.  It answers the Problem of Evil directly by saying that in order that God might have a more meaningful relationship with His creation, He had to assemble us with a little feature we like to call “free will” (after all, what worth is there in our existence if we are merely robots?).  An unfortunate consequence of endowing us with free will is that we often make the wrong choices.  Or at least, in fundamentalist Judaism and Christianity, the first two humans made a gravely immoral decision in following a talking snake’s suggestion over God’s orders.  The result is the “Fall of Man” (which I suppose is usually meant in the sense that from then on our species has been evil-by-default and not deserving of Heaven), which is the root cause not only of all the horrible acts we have been committing since, but also of the presence of what we may call “natural evils” (earthquakes, floods, disease, and so forth).  In other words, we, and not our all-knowing, infinitely righteous creator, carry moral responsibility for literally everything bad in the world.

The issues I take with this version of Judeo-Christian theism mainly revolve around what I view as an incoherent notion of free will itself.  I simply cannot comprehend the concept of freedom which is “free” and implies “responsibility” in any meaningful sense that can’t be described in terms of deterministic mechanisms.  Any kind of “free choice” must somehow fall into the category of chemical event that happens in the brain, and therefore the rightness or wrongness of any decision is determined (at least in large part) by the character of the agent (as well as other events and circumstances).  The free-will-ist God programmed us with certain characters, so surely He holds some responsibility over what we are led to do.  Maybe I can allow that God wanted to create some lifeform with the capacity to make a genuine choice to follow Him, but since the phenomenon of “choice” is still a chemical event, that doesn’t explain why He didn’t arrange our chemistry a little differently so that we would choose to do at least somewhat fewer awful things.  The model just doesn’t hold up.

When all is said and done, my objections to both rebuttals against the atheist “argument from evil” come down to the same sort of argument in favor of compatibilism.  Perhaps each of the forms of religious apology outlined above could be considered to be the same rough idea viewed through each of the opposing pairs of goggles I discussed at length in my previous two posts.  Then the answer to both is “Take off your blinders for a moment and allow that most decisions are something in-between what may be considered totally determined and totally free.”

III. Christian, conservative, and free-will-ist, in that order

So why do I contend that religious motives, in the context of politics, are aligned with free-will-ism rather than determinism, when the traditional theistic position (with its all-knowing God and morally responsible humans) seems to be bending over backwards to embrace each?  Because, as I implied in my discussion above, one of the aforementioned answers to the Problem of Evil seems a whole lot more prevalent in the present-day West than the other.

I wasn’t brought up religious, but I had a lot of exposure to religion in the area where I grew up.  As I was becoming an adult, I paid close attention to the culture war between the secular Left and the “moral values” Right (which was quite powerful in America at the time), and during college I had a habit of debating religion with random people in the free-speech zones of my university.  Throughout all of those experiences involving discussion of religion with religious people, I almost never heard about the “predestination” point of view apart from the one theist-turned-atheist that I described earlier.  Instead, a good 95 percent of the time, I was being exposed to the “God gave us free will” argument, and this seemed to be faithfully reflected in the politics of the religious conservatives that I encountered.  And although the Religious Right is considerably weaker in America today and I’ve had less and less interaction with Religious Rightists as the years have gone by, I see no evidence that their rhetoric has substantially changed.

The very idea of “moral values”, as it pertains to legislative policy, is that God expects us to follow the righteous path; we are each responsible for our own actions; and our laws must reflect the fact that sinful behavior deserves punishment.  Those amoral liberals want to follow a hedonistic path of following their baser impulses and just doing what “feels right”.  By their logic, one can do whatever one wants, and life has no real purpose.  The theory of evolution (which liberals promote at the expense of God’s word) is dangerous in large part because it implies that we are purposeless robots whose actions can’t be assigned moral value.  Sexual behavior is a choice, and things like natural urges (temptation) and sexual orientation (which according to many is a choice as well) are no excuse for going against God’s wishes.  Miscarriage is a tragedy but all part of God’s plan; abortion, however, is a thing you can control and is a sin regardless of circumstances.

The “sanctity of life” thing, which accounts for a lot of specific religious conservative positions, seems at first like a defense of the helpless (which I have consistently coded as pro-determinism in mindset).  After all, pro-life activists are saying all the time that they’re just trying to speak up for those who have no voice.  But peeling back the outer layer of this rhetoric will reveal that the crux of why fetuses (and stem cells) are so deserving of protection boils down to two things.  One, they have something called a “soul”, which came into independent existence the instant the sperm met the egg.  Apparently in traditional Western religion, humans have souls while animals (whose rights are defended mostly by activists on the Left) do not, and the implication appears to be that a “soul”, by definition, is whatever gives us the ability to act freely and make moral decisions.  And two, fetuses and stem cells (unlike death row inmates, whom liberals defend) are innocent in the very absolute sense that they quite literally haven’t done anything.  Claiming their innocence is not a matter of recognizing that circumstances have influenced them to do some questionable things, as liberals make a priority of doing.  There are no questionable choices made by fetuses and stem cells to explain through determinism, for the simple reason that they haven’t had the chance to make any choices at all.  Secular Leftists, meanwhile, are much more concerned on behalf of women who are forced to bear unwanted children, because their focus is on the unfortunate circumstances that typically bring such women to that position.

On a deeper and less concrete level (and I think I’ve at least hinted at this before), in the realm of philosophy of science, I’ve always seen the classic atheistic model of the universe as itself more determinism-oriented than the theistic model.  For atheists, the main way to discover explanations for phenomena is via science, which involves describing everything in terms of natural laws (which are generally deterministic, and no I don’t want to address the implications of quantum mechanics right now).  The theistic approach traditionally involves invoking a God of the Gaps, which as far as I’m concerned is rather analogous to an “Agency of the Gaps” that incompatibilists invoke when defending their notion of free will.  Note that conservative theism in the politics sphere has a history of trying to modify our conception of science so that supernatural explanations rather than only natural ones are allowed.

So as far as I’m concerned, debates between Right and Left over issues where the “sanctity of life” is invoked, as well as the whole general religion-versus-secularism-in-politics debate, fit quite naturally into the framework of the free-will-ist vs. determinist mentalities that I keep harping on.  I admit that the connection here may be a little weaker than it is for most other areas in the modern political scene, but I hold that at least these social debates relating directly to religion do not provide a counterexample to my general thesis.  And while I don’t know that the political movement from the 80’s to bring evangelical Americans over to the conservative side of the spectrum was consciously devised with all of these ideas in mind, maybe my argument helps it to make a little more sense that the Republican party was so successful in accomplishing this.

Failure modes of determinist goggles

[Content note: more free will / responsibility stuff, because apparently I never get sick of it.  Contains some qualifications on points I made last time which hopefully won’t be taken as defenses of what I criticized before.]

Now it’s the other side’s turn.

My last post was a diatribe of sorts against the societal values that I’m afraid a free-will-leaning outlook on life will lead to (see “Political ideology and perception of free will” for an explanation of what I mean by “free-will-leaning” and “determinism-leaning”).  But I don’t consider an unchecked determinism-leaning outlook to be a good alternative.  In my opinion, it leads to a situation which might be equally dangerous in its own way, and which is the focus of today’s post.

I. What do determinist goggles do, exactly?

Parallel to their counterpart discussed in the last post, “determinist goggles”* are a metaphor I made up to describe a certain way of viewing the world — this time, involving a tendency to see conscious actions as determined by external factors.  While a human who makes a decision may still be nominally considered as somehow a responsible actor by someone wearing determinist goggles, all credit or blame will tend to be focused away from them and towards circumstances outside of their control.

According to the determinist-goggler, human society is composed of individuals whose capacities for accomplishing things are in very large part dictated by their genetic conditions, their upbringing, temporary conditions that they’re subject to (e.g. sickness, treatment by others in their lives), and large-scale societal forces which buffet them to and fro as they struggle along the winding road of life.  Freedom on some level exists but mainly belongs to those for whom such factors are less unfairly restrictive; maybe they should be held at least somewhat accountable for their occasionally destructive choices.  The rest of the population should be treated with sympathy for having a difficult road to follow; their mistakes can mostly be explained in terms of unfortunate factors they have no choice over, and those who do manage to triumph in spite of their circumstances should be applauded.  However, most successes and failures are brought about by good and bad circumstances respectively, and neither the winners nor the losers should receive judgment that they don’t really earn.

In the determinist-goggles-colored world, most of us have a rather hard time succeeding at various things through no fault of our own; yet, we are constantly getting blamed for our failures by others who refuse to see everything that’s tying us down.  The refusal of others to recognize the struggles we have had to go through is easily explained by the fact that they are more fortunate and have probably never had to face those particular struggles (either that or they have experienced our struggles first-hand but are too self-centered to be able to empathize when they apply to other people).  How convenient for them, what with their advantage of never having to actually understand anything about our handicaps, that they can so easily dismiss our difficulties in order to give themselves credit for their relative success.

There’s one observation I want to get out of the way first.  It is surely the case that across cultures and throughout history, a number of ideologies or key components of ideologies have been inspired by the determinism-leaning mentality; Marxism comes to mind, for instance.  During my lifetime, a newer kind of determinist-goggles-ism has appeared to strengthen considerably and has materialized into an ideology which focuses on combating social inequalities brought about by privileges on various axes, nowadays often referred to as “Social Justice”.  This is nothing more or less than an independently living and breathing manifestation of the view from determinist goggles which has been fleshed out into a full-blown social and political belief system.

However, today I don’t intend to home in on directly discussing the flaws in today’s Social Justice movement and in how its objectives are argued.  That is a target that’s already getting beaten to death in this part of the internet in particular and seems to be part of a wider culture war online (and in the real world) in general.  I’d prefer to hold my focus on the purely determinist-goggles-colored view and the difficulties that arise from it, rather than getting bogged down in commenting on some vast body of cultural rhetoric which is being used to treat a wide array of concrete social ills.  I can’t promise that I’ve entirely succeeded at this, but I have tried to address the determinist-flavored mentality at the individual level as I did for its opposite, and I’ll leave it to the reader to connect the dots between the arguments I make and their applications to the current culture-wide discourse.

*It has been suggested to me that I replace the words “libertarian” and “determinist” in my “goggles” terminology with the expressions “high-agency” and “low-agency”.  This strikes me as most likely an improvement, but just for continuity’s sake, I’d rather make this post more consistent with the last one and save the change for next time.

II. Limitations on respect for limitations

As with the polar opposite pair of goggles which I treated in the last post, someone who puts their faith in determinist goggles will suffer from considering only one point of view while ignoring any and all hints of the other.  This is again the most direct and obvious (even tautological) fault to find with wearing one pair of goggles of any type and never taking them off.  The committed determinist-goggler will fail to allow the possibility that they or the person they are sympathizing with can possibly change how they act, react, or feel about things that are happening to them.  This is problematic already, but it lends itself almost inevitably to further fallacies.

To understand the trap that one is vulnerable to falling into, I think we have to go back to the philosophical definition of determinism, even as I continue to stress that in practice one’s choice of goggles probably isn’t all that correlated to one’s choice of metaphysical beliefs. The view from the determinist goggles is still rooted in abstract determinism.  I’m going to say we might as well assume hard determinism rather than soft determinism here, because we’re talking about a model which renders virtually null the power to make free choices in any meaningful sense.

What is hard determinism?  When you get down to it, it’s the belief that everything, in particular every human action, is completely determined by prior events and therefore cannot happen differently (for some intuitive and meaningful definition of “cannot”).  So nobody — neither you and your friends, nor the antagonists in your narrative — can actually help what they’re doing.

The main criticism of this comes from the apparent implication that nobody can be held morally accountable for anything.  This is a problem because it seems to contradict the very foundations of most systems of ethics, but also because in practice it utterly and profoundly goes against the way we human beings actually process what is happening to us.  When we like something we see in the world, we naturally want to praise those whose actions have put it that way; when we are upset with something we see in the world, we instead want to yell at those people and hold them accountable for fixing the problem (“those people” may occasionally include ourselves, when we recognize our potential to change the world for the better).  Without being able to do this, we are stuck in a rather nihilistic reality where we essentially have no real purpose, lacking the capacity to directly enact change or to shame someone else into enacting change.  Nobody actually lives this way, regardless of the conclusions they may arrive at by doing philosophy.

Now go back to the determinist goggles, which don’t exactly make one a hard determinist but have an effect which is roughly similar.  In principle, the goggles should cause the wearer to go easy not only on themselves for their failures but also on their oppressors for their failures.  After all, everyone (not just ourselves or the people we’re sympathizing with at the moment) acts according to what circumstances dictate, right?

But in practice, nobody knows how to function according to this assumption.  So instead, determinist-gogglers tend to follow a slightly adjusted premise that some circumstances (conveniently, the ones they and their friends find themselves under) dictate negative behaviors while other circumstances (ones which apply to their adversaries), well, they’re not really a valid excuse for anything.

One can see this play out in local settings where attempts to enforce a determinist-goggles-ism which applies equally to all parties leads to an unsustainable system of social rules.  I’ve certainly witnessed such strife firsthand.  One person in a social unit explains that they can’t stand Behavior Y, that unfortunately they have an “emotional need” for Y not to be done in their general direction, so it’s imperative to adhere to a rule of not doing Y.  That’s all fine and good until someone else who interacts with them expresses their own “emotional need” to do Y, stating that it’s impossible for them to cope without doing Y.  Maybe everything can be worked out serenely within a group where at most one person wears determinist goggles.  But I imagine most groups probably have more than one determinist-goggler in them, and then it quickly becomes infeasible to come up with social contracts agreeable to everyone without first fighting a war over whose “needs” are actually valid.

I’d like to point out another scenario in which this conundrum shows up, perhaps slightly in disguise.  Determinist-goggles-ism tends to imply that one should always lend a helping hand to the less fortunate, because their misfortune is probably not their own fault.  Yet curiously, I’ve noticed that a lot of my acquaintances, despite generally appearing to have determinist goggles on, in certain contexts manage to avoid doing this.  For instance, I almost never witness anyone give money to homeless people on the street, not even the least judgmental and most empathetic people I know.  (Lest I sound holier-than-thou here, I admit to walking past beggars on a daily basis and rarely stopping to give them my spare change; I do feel bad about this despite the excuses I come up with.)  As to how they justify this, the most likely explanation, it seems to me, is that they assure themselves that their own difficulties (specifically financial ones) effectively prevent them from “being able to” lend a hand to people in a blatantly more desperate situation, or at least that one should look towards others of greater means to perform such acts of generosity.  And I’m not claiming this rationalization is entirely wrong.  But it looks to me like a lot of determinist-gogglers have succeeded in maneuvering towards a model in which their difficulties somehow kinda-sorta trump others’ (obviously worse) difficulties.

In fact, this even applies to the personal story I told last time about choosing to take a particular city bus without a ticket during one year, which I used as an example of something morally questionable that I tried to justify to myself through libertarian-free-will-ist thinking.  After I wrote about that, I got to remembering that I also had a second, quite different, rationale for this cheating behavior.  Recent months had not been particularly kind to me, and I felt that I’d been having a rough time.  I was, for the first time in my life, trying to establish myself in a new country, and it hadn’t exactly been smooth sailing.  Not only had I found myself majorly inconvenienced by countless hours of wasted time trying to navigate an unfamiliar system in a new language; I had recently wound up paying hundreds of Euros to the government unnecessarily.  Sure, I was still doing fine financially despite that, but I told myself that after all I’d been through, I deserved to get a small break just this once, even if that required a bit of cheating.  Now at first glance, under determinist-goggles-ism (certainly hard determinism) the concept of “desert” is questionable if not utterly meaningless.  But I think I had fallen prey to the temptation to privilege my own unfortunate circumstances over any difficulties I might create for others, in a similar fashion to what I described above in the begging example.

To sum up the point of this section, any attempt at modeling people’s behavior in terms of circumstances and “needs” in a pure and even-handed way is practically guaranteed to break down and devolve into a contest over which kinds of excuses are most valid.

III. Empower failures

In my view, the most crucial failure mode of the determinist goggles can be summarized in one word: disempowerment.

[Image: still from South Park’s season 9 episode “Bloody Mary” (warning: the episode’s content takes its title far too literally)]


Everybody wants to feel empowered (at least in the front of their minds, most of the time).  That goes for wearers of determinist goggles as well.  So determinist-gogglers tend to go through all kinds of mental gymnastics (e.g. semantic manipulation like replacing the term “victim” with “survivor”) in order to emphasize what little control people have over their situations at the same time as granting them a sense of empowerment.  I contend that this is no more than a futile effort to have one’s cake and eat it too.  If one is going to subscribe to a belief system where someone’s action or state is determined by external forces, then an obvious immediate consequence is that someone has less power over their situation.  There’s no getting around it.

I don’t think there’s any need for me to delve too much into why powerlessness is bad.  Clearly it leads to hopelessness about bettering one’s situation, as well as a lack of credit for (or ability to be inspired by) those who have done so (because the improvement in their situation was “just luck”).  Also — and I think this point is underacknowledged, but I won’t dwell on it today — it opens one up to deliberate bullying, oftentimes by extreme free-will-gogglers.

I sometimes ponder how wimpy so many of us would seem to the multitudes who existed through the entirety of human history up to very recently (not to mention citizens of many developing countries today).  This may seem like a bit of a deviation from the main thrust above, but bear with me for a few moments and I hope the connection will become clear.  Throughout most of history, almost nobody was as privileged or financially comfortable as most citizens of developed countries are today.  Throughout most of history, many men had to do backbreaking labor for little pay; many women died in childbirth and most of those who didn’t still went through the process in incredible pain; disease was rampant; there was no “health insurance” as we know it and whatever healthcare was available was barbaric and terrifying by our standards and often did nothing to cure the recipient’s ailments anyway; although humankind was free of the hazards of modern technology, both institutions and human interactions were a lot less regulated and most people probably had far more reason to feel unsafe in their day-to-day lives; it was probably pretty common to remain for one’s lifetime within a radius of some 20 miles; and so on.  My presently-existing self for one would probably be pretty traumatized by some of what the average person had to go through even only a few centuries ago.

Today, most of us can’t imagine having to live the way our ancestors did.  Because of advances in science, medicine, law, and so on, we are able to enjoy an enormously higher quality of life, and accordingly, our tolerance for many things has become considerably weaker.  On the whole, this is something to be celebrated.  The kind of progress humankind has made and continues to make in preventing so much suffering and raising our standards and expectations is absolutely the noblest goal we as a species can strive for.  But every major step forward comes at some cost, and undoubtedly one such cost has been a collective decrease in hardiness and fortitude against what our still-chaotic world might throw at us.

Therefore, wherever we’re engaging in the fight for progress, as important as it is to highlight and foster a culture of sensitivity for the plights of those whose lives we want to improve, we should do so with an eye towards also empowering those individuals by allowing the possibility that they may yet pull themselves through adversity and come out stronger.  Ideally nobody should have to make the most out of unfair circumstances, but “ought” and “is” are two different concepts, and lack of fairness doesn’t imply lack of agency: the fact that someone is not to blame for their situation doesn’t necessarily mean that there isn’t something they could (and therefore should) do to better it, or at least to learn how to cope with it as long as it remains to be resolved.

This consideration helps me to understand where Richard Dawkins was coming from in his infamous “Dear Muslima” “open letter”, which touched off the internet war known as Elevatorgate — if you were anywhere near the atheist web around 2011 you may have heard of it.  Not that this puts me anywhere close to agreeing with it, mind you: Dr. Dawkins’ nastily sarcastic overreaction (and subsequent defenses) to what seems to me a mild and mostly reasonable request made by a feminist atheist was unjustified on multiple grounds.  (I’m not going to go easy on a man who holds himself up as a paragon of levelheadedness and rationality at every turn.)  In particular, Dawkins failed to acknowledge the validity of complaining about one particular problem given much worse problems in other places or during other historical periods.  He seemed not to recognize the fact that working hard to improve adverse conditions everywhere — not just at the place and time where they are worst — is nothing less than the face of progress itself and a part of what it means to have high standards.

Yet at the same time, I think I do understand a bit of the frustration behind Dr. Dawkins’ snarky words.  When progress has brought us to a point that our difficulties are tiny in comparison to the problems that were commonplace in recent memory or are rampant in other parts of the world, we should make at least some effort to respect that when we talk about them, to have perspective, to instill a conviction in our audience that although something is unacceptable, we will still be able to deal with it.  After all, look at what so many others have managed to endure.  We’re allowed and even encouraged to be upset about this thing, but talking about it like it’s The Worst Thing In The World when it’s clearly not might start to do more harm than good.  If that was Dawkins’ frustration, then I think he took it out wrongly on the particular activist he was attacking, but I do recognize where it may have been coming from.

On a more personal note, I have been extremely fortunate in the three decades of my life so far in just how little I have had to experience physical pain.  Nearly everyone I know by my age has had to go through a very unpleasant accident or a difficult recovery from some medical procedure, to say nothing of the agony that just about everyone suffered at some time or another in the days before modern medicine.  But I can’t help but worry that the statistical likelihood of facing severe pain is bound to catch up with me one day and that due to my lack of experience I might not handle it well.  I remember reading the story of a guy on Reddit whose traumatic accident led to the most painful scene imaginable (which I haven’t the slightest desire to elaborate on) who made a side-comment about how sometimes it was nice to be able to remind himself that no painful event for the rest of his life would be anywhere near as agonizing as what he’d already gone through.  In a weird way, I almost envy this kind of security.

Now imagine a mildly futuristic world in which some humanistic organization is pushing for the creation and distribution of bodily implants which immediately ease even the most minor twinges of pain.  Disregarding possible risks that come from the weakening of our bodies’ natural alarm systems, this would seem like a marvelous step forward from the humanistic point of view: what could be a more worthwhile goal than to lessen suffering?  Now as far as the proponents’ rhetoric goes, the most persuasive flavor will probably strongly emphasize how bad even the most minor physical pain is.  In fact, such conviction will probably be sincere on the part of the most passionate members.  I have a feeling that in this hypothetical world, after enough exposure to such rhetoric, I’d likely grow even more intolerant of pain than I already am, perhaps to the point that even stubbing my toe would seem like a mildly traumatic event.  And I’m afraid that in the event of the pain-reduction-implants initiative falling through, or even if it passes but there are flaws in the implementation (as there always are), I would wind up in a rather weaker position when it comes to dealing with pain than real-me is today.

In my hypothetical sci-fi scenario above, of course it’s still a great idea to go ahead with the pain-reduction technology, and maybe even the most extreme “stubbing one’s toe is unbearable!” rhetoric is worth the potential downside I suggested.  But it should be done in a calculated way that doesn’t disregard that potential downside, and I have a feeling that determinist goggles are pretty likely to blind the wearer to this consideration.

This discussion may appear to have strayed far from my initial characterization of the determinist goggles, but the theoretical connection and empirical correlation are clear as day to me: the more one focuses on external circumstances, the more one relies on external changes to maximize one’s state of well-being.  And while such an attitude has probably been the crucial force behind much of human progress which has succeeded in improving well-being all over the world, it can also have the effect of disempowerment on the individual level.  And it might be good to keep that in mind.

IV. Lack of might makes right

I’ve seen another common abuse of the determinist goggles which alarms me even more, though.  To motivate my perception of it, I’ll start by recalling something I read online a very long time ago — I must have been in high school still — on some right-wing site.  I don’t actually remember where this was or the exact wording, but it was a phrase about liberals that went something like

left-wing principles, where people are judged more according to their grievances than according to their deeds.

This was long before I started framing everything in terms of deterministic vs. free-will-libertarian positions, and I was (and still am) fairly liberal myself, but this snide throwaway line stuck with me and gave me serious food for thought.  I don’t think it’s a fair branding of liberalism in principle, but over the years I have come to suspect that the American Left is being guided by a giant pair of determinist goggles, and the “judge people according to their grievances” mentality does seem like an easy trap to slip into if one gets overly dependent on them.

The pure, idealized version of a moderately determinist-leaning viewpoint is the assumption that there are hidden external forces behind people’s behavior, so we should refrain from giving them all the credit for doing well and go easy on them when they do badly.  To the extent that it makes sense in the first place to assign praise and blame to people for their actions, possible causes outside of their control should always be entered into the equation.  In particular, if we see someone doing poorly at life, we should cut them some slack and lean towards believing that they really are putting in a laudable amount of effort despite the fact that on the outside they look like they’re doing poorly.  It only takes a short leap of logic to go from this to the belief that someone is laudable because on the outside they look like they’re doing poorly.  (Or conversely, the belief that someone is automatically deplorable because on the outside they look like they’re doing really well for themselves.)

And I do see evidence in many people’s rhetoric of this logical leap taking place subconsciously.  This worries me particularly in the case of someone openly exposing their own tendencies towards unproductive behavior or general difficulties in coping (e.g. severe anger) in a boastful tone, as though it’s somehow a virtuous trait in itself rather than an understandable reaction to something tough they’re going through.  In some extreme cases, this declaration seems to be performative rather than truthful, a thinly-veiled form of bragging and vying for status.

I hesitate to make this point, because I’m afraid it could be easy to misunderstand and get taken very badly, but I see a crucial difference between recognizing that someone is virtuous in spite of their weakened state and deciding that they are virtuous because of it.  It’s the difference between being able to receive empathy and understanding for one’s failings and being compelled to cling onto a righteous indifference towards overcoming them.

I can’t think of any popular quote which annoys me more profoundly than this one.  I blame it on determinist goggles.

Determinist goggles may seem like a much more enlightened and progressive alternative to their libertarian-free-will counterparts, instilling empathy and compassion in those who glimpse through them.  The world through determinist goggles appears at first glance to be one where everyone is just doing whatever they can just to muddle through and should be understood rather than morally judged for the situations they wind up in.  But on viewing reality this way long enough, one learns to lose hope for actual remedies to the variety of problems being faced by humankind.  It is a world where the only practical way to live is to assume that some people do have genuine agency and that the rest are powerless to do anything other than wring their hands and sit around waiting for those free agents to act.  Taken to an extreme, this reality will eventually devolve into a society where the weak are assumed to hold innate moral superiority over the strong even while the very categories of “weak” and “strong” can only be defined relative to each member’s point of view, and such a society cannot hope to function.

Failure modes of libertarian* goggles

*metaphysical (not political)

[Content note: more musings on cause and effect, free will, and moral responsibility, touching on the rationale behind sins ranging from general ableism and classism to greedy cookie-grabbing.]

Now it’s time for me to delve deeper into what I have called before the “libertarian-free-will mindset” and the “deterministic mindset”.  In this post, I explained what I mean by these competing concepts and emphasized that I see them as a major component of almost every disagreement and debate.  At the time I was using the clumsy terms “free-will-leaning” and “determinism-leaning” in referring to them, but more recently I’ve come up with “libertarian goggles” (with the understanding that we are using “libertarian” in the metaphysical rather than the political sense, though political libertarians are probably gazing through these a lot of the time too) and “determinist goggles”.  I think this is less awkward… or maybe it’s more awkward, but for now I’m going to stick with them anyway.  The word “goggles” in each case is intended to stress that I’m gesturing towards a way that people tend to perceive things: again, all of this has very little to do with anyone’s abstractly-held philosophical position (which they may not have developed anyway), but the assumptions they tend to make when sizing up a situation involving human behavior.  Someone can study philosophy and decide that the arguments for incompatibilist free will are the most valid, while in their day-to-day lives they tend to excuse people’s behavior as the function of their background and environment; someone else can behave likewise without having ever given a single thought to the academic philosophical issue of free will in the first place.

Today I want to explore the flaws in the view through libertarian goggles and exactly how it may affect a person’s judgment or the rationalizations they construct for it (I intend to get to my issues with determinist goggles in the next post).  This may sound like the premise for another dispassionate essay written in a cerebral, sitting-back-in-my-armchair-musing voice.  Actually, this essay feels to me like more of a personal rant than it might seem on the surface from my tone.  I’ve known people who rarely seem to take their libertarian goggles off, and they frustrate me.  In several cases, I feel that I’ve been directly victimized by them via this mode of thinking.  In my experience, they are prone to several logical fallacies which might not necessarily follow from the free-will libertarian premise but which I am going to speculate are at least strongly correlated with a tendency to view the world through nondeterministic-free-will-ish assumptions.  I don’t promise any kind of clinching argument to show that this is the case; I’m mainly just going to describe my observations of such a correlation.

Before I begin, I should of course make the obvious disclaimer that I realize humans are complex and it would be foolish to imply that they fit neatly into the categories of “libertarian gogglers” and “determinist gogglers”.  But I’ve known a number of people who appear pretty far in one direction or the other even while this doesn’t apply to anywhere near everyone, just as I’ve known a number of people whose political views align clearly with the right or the left while at the same time a lot of other people are centrist.

I. What do libertarian goggles do, exactly?

The “libertarian goggles” I speak of are a metaphor meant to describe a certain way of viewing the world.  Libertarian goggles cause the wearer to see most conscious actions as completely free choices which bring with them moral responsibility.

The world viewed through libertarian goggles looks like a bunch of people choosing to do bad things and then placing the blame for their behavior on the folks around them, or their genetics (in particular, sickness or disability), or Society, or The Government, or This Bad Economy, instead of on themselves for not trying harder.  Nobody is bound by the conditions they find themselves in; therefore, to pretend that they are amounts to excuse-making.  It’s all a matter of attitude, plain and simple.  And the allure of being able to explain their own failures away by putting them on other people or on supposedly extenuating circumstances is strong enough that it blinds them to all of the agency they really have.  This is the main reason why so many others (who lack libertarian goggles and the wisdom they bring) wind up learning to explain so many things with a deterministic model.  These people would be so much better off, so much more able in their own lives and in not damaging the lives of others, if only they would see this, because understanding one’s own freedom empowers one to overcome any difficulty.

People failing at things don’t make up the whole world, of course.  There are plenty of people who do really well, and they manage this mainly through pushing themselves and not falling into the rut of blaming the rest of the world for their difficulties.  Where they start out and just how much effort they have to exert to pull themselves up is essentially irrelevant when it comes to bestowing praise: someone from a less fortunate background has just as much ability to move upwards as anyone else, and the fact that they start out in a lower position only gives them more reason to work hard to improve their situation.


It follows from what we see through the libertarian goggles that doling out pep talks rather than sympathy to those who are struggling seems like a reasonable way to go.  And if someone asks for our help when they clearly could be pushing harder to better themselves instead of trying to leech off of other people, then naturally we will be reluctant to help them if in doing so we slow ourselves down on our own journey upwards (which tends to be a risk of sacrificing for someone else).  In fact, helping them would actually be doing them a disservice, more often than not, because it would instill a reliance on other people for support rather than on their own willpower.  Surely it’s much more virtuous to stand back and let those who are in trouble hit bottom so that they have no choice but to learn how to bounce back from adversity under their own steam.  Tough love, and all that.

Needless to say, I’ve seen that last idea held up as the rationale for any and all forms of bullying.  A society which places faith in libertarian goggles can be a pretty harsh place.  But might this framework still lead to a coherent model, however harsh, of the world in which we live?

Well, it’s clear enough to me that the model is fundamentally flawed and does not lead to a full picture of the territory.  The sort-of-tautological criticism that can be made against the libertarian-leaning view which stands out immediately in the context of other things I’ve written here is that the libertarian goggles act as a shield against incoming gadflies whispering deterministic heresy.  That guy over there seems to be acting irresponsibly for no apparent rational motive and claims that he’s trying to stop acting that way but has trouble controlling it for some reason.  His behavior doesn’t match the symptoms of any of the three or four disorders that we acknowledge exist.  Without libertarian goggles we might be receptive to at least the vague suggestion that there’s something wrong with the guy — something deep in his wiring and therefore not entirely under his free control — which doesn’t have a name yet (or that his behavior might be influenced by his upbringing or how society treats one or more demographic categories that he fits into etc., all aspects of his life we’re not in a position to understand right now).  The small handful of very well-known conditions which we do recognize weren’t always known; they had to be discovered through the study of medicine.  There are undoubtedly many other such conditions which still haven’t been discovered.  But once we’re wearing the libertarian goggles, we’re no longer open to considering such a notion; we’re sticking with our very conservative List of Known Conditions and it doesn’t occur to us to seek any explanation from outside of it.  We will instead throw up our hands and invoke a sort of mysterious Agency of the Gaps to explain away his behavior… which is just another way of saying that we’ll conclude that it’s his own damn fault, because somehow he chooses to have a weaker will or less moral fiber than we have.

This is obviously not the right approach for investigating empirical reality, for a number of reasons including those outlined in my post about “gadfly speculations”.  (The analogous criticism is equally valid, of course, for the determinist-goggles mindset, but I’m waiting until another day to pick on that.)  However, my issue with the libertarian-goggles approach extends beyond its inherently fallacious nature to the fact that in my experience it brings with it a bundle of other wrong-headed attitudes, which I attempt to describe below.

II. The strong link in the chain

I wanted to separate under several headings my criticisms of what I consider to be harmful ways of thinking that come from wearing the libertarian goggles for too long, but I couldn’t help but realize that they’re all just different ways of looking at the same core idea.

I remember once hearing a little parable that had been passed among the philosophy students at my university, which went like this.  In some fictional philosophy department (at least, obviously not ours), it is arranged for there to be a large plate of cookies laid out each day as free food for the poor undergrads and grad students in the department.  (Note that there are always many more students than cookies, but I suppose the point is that there should always be some food resource during the day, however small, for the most exhausted students to dip into to keep their sugar levels up.)  But many of the faculty members have a habit of grabbing cookies for themselves, even though they are obviously meant for the students.  One day there are ten cookies on the plate, and ten professors each take a cookie.  In this way, the cookies disappear one by one, so that by the late morning they are all gone and none of the philosophy majors who wanted a little sugar rush have had the opportunity to get any.  The students are understandably annoyed and looking for someone to blame.  The thing is, it wouldn’t have bothered anyone if only two or three of the cookies had disappeared.  It wouldn’t even have been such a problem for nine out of the ten cookies to be gone: the state of greater-than-zero cookie material being available for snacking emergencies is all that mattered.  So it’s tempting to blame the professor who took the last cookie.  But somehow it doesn’t seem fair to put all the blame on her.  The philosophy students spend so much of their energy arguing about it that soon they are all in dire need of a pick-me-up, which is a problem because meanwhile there are still no cookies and they’re no closer to agreeing on who should be held accountable.

I hypothesize (with all the weight of my authority as an armchair social psychologist) that the philosophy majors who are libertarian-gogglers are distinctly more likely than the rest to put the default blame on the professor who took the last cookie.  I call this “privileging the final link in the causal chain”.  In our cookie example, the causal tree looks pretty simple, with all of the choices leading up to the final result being independent.  It can be drawn as a directed graph looking something like this (time flows left to right; the yellow node represents the “no cookie material left on the plate” state).


In our real world, most causal trees look more like this, with many choices influencing other choices.


At each node (except the yellow one on the right, which represents a final state), some agent made a decision, for better or for worse.  In my opinion there’s actually a pretty straightforward way to quantify the rightness or wrongness of each of these decisions.  But that doesn’t tell us anything about which decision-maker ultimately deserves praise or blame for the final state, because as I’ve argued before, agency does not imply moral responsibility.  It follows that it is dubious to assume that the main responsibility lies with, say, one of the agents at the nodes directly pointing to the rightmost one above, just because those decisions happened after the other ones which influenced them.
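For readers who like to see the symmetry argument spelled out mechanically, here is a minimal sketch in Python of the simple cookie-parable case.  The node labels are my own hypothetical inventions; the point is just the counterfactual test it runs, which treats every professor’s decision identically:

```python
# Counterfactual sketch of the cookie parable as a tiny causal model.
# With ten cookies and ten takers, removing ANY single professor's
# decision averts the "no cookies left" outcome -- so each decision is
# equally a but-for cause of it, and none deserves special blame status.
# (The node names are my own hypothetical labels.)

COOKIES = 10
takers = [f"prof_{i}" for i in range(1, 11)]

def no_cookies_left(who_took):
    return len(who_took) >= COOKIES

assert no_cookies_left(takers)  # the actual outcome: all cookies gone

# Symmetry check: drop each professor's choice in turn and re-evaluate.
equally_necessary = all(
    not no_cookies_left([t for t in takers if t != p]) for p in takers
)
print(equally_necessary)  # True -- no grounds for privileging the "last" node
```

Each choice passes the same but-for test, which is all the “argument by symmetry” amounts to: if the structure can’t distinguish the nodes, neither should our blame.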

But I find myself in disagreements with certain people — and from my observations those people are the kind that are wearing libertarian goggles a lot of the time — who seem to assume without questioning that the responsibility lies entirely with whoever made the most recent decision (indicated by red nodes in the pictures above).  And I’ve found it hard to convince them otherwise.

This particular fallacy takes on many variants.  One of them is putting primary responsibility on someone (agent X) for choosing one out of the only two actions A and B allowed to them by someone else (agent Y).  Agent Y may have foisted upon X a choice between two equally unethical actions A and B, and it seems clear enough to me that Y deserves somewhat more of the blame than X does for whichever one of them X chooses.  But I remember once discussing such a philosophical thought experiment with a group of my colleagues over lunch, where Y is somehow forcing X to choose which of two people to kill, and I was surprised at how many of them thought it obvious that X was primarily guilty for whichever murder results, having been the one to actually pull the trigger.

Another variant, fairly common in political discourse I think, is the notion that it’s fair to judge agents equally for performing the same action with the same result, even if these choices were influenced by very different sets of circumstances.  This disagreement can take place even between people who agree that both agents should be judged negatively but differ on what should be the appropriate magnitude of punishment for each of them.  A typical dialogue between a Libertarian-Goggler (abbreviated LG) and a Devil’s Advocate (abbreviated DA, whom I consider the hero in this scenario for playing the gadfly role) might look something like this.

LG: I propose that the law against committing A should be applied equally to citizens X and Y, since they both did it to the same negative effect.

DA: But is that fair?

LG: What do you mean, “fair”?

DA: Citizen X was essentially manipulated into doing A, while citizen Y made a very conscious choice to do it.  They both need to be penalized, but surely it’s unfair to treat X as harshly as we treat Y, given X’s extenuating circumstances.

LG: It’s perfectly fair.  Both chose to do A when they could have made a different decision, do you deny that?

DA: No, but consider the fact that citizen X hadn’t been properly exposed to all the facts surrounding the ramifications of doing A, was in a more desperate situation that made A harder to avoid, and had less beneficial alternative choices at his disposal than citizen Y did.

LG: Could Mr. X have made himself less ignorant if he’d tried hard enough to learn the relevant information?

DA: Well, technically yes…

LG: And could Mr. X still have overcome the temptation and done the right thing, which would still mean not doing A, even if the other choices were less than ideal?

DA: Yes…

LG: Then both should be held equally responsible for what they did.

DA: Come off it!  You know the situations aren’t equal, so it’s unfair!

LG: Were their choice-making capabilities not equal?  Either you have free choice or you don’t… [And so on.]

My claim about where libertarian-gogglers tend to stand in such debates raises the question of why the libertarian goggles should influence one’s thinking in this way.  Before writing all this out I imagined introducing my answer to this in a big reveal that might sound clever.  But actually, since I opted for abstract descriptions without real examples, the way I’ve written it already renders the connection pretty obvious.

Libertarian goggles impede one’s ability to recognize the legitimacy of circumstantial factors in choice-making.  Since they highlight freedom in choice-making abilities, external influences (genetics, upbringing, physical/mental conditions, surrounding societal forces, etc.) fade into the background.  When someone decides to do something, an observer wearing libertarian goggles sees the event of that choice clearly without considering the backdrop of events leading up to it.  Such events include other nodes in the causal chain, or restrictions placed on the choice-maker, or aspects of the life of the choice-maker which have led them up to the point of making said choice.  The scenarios I laid out above are all variants of this kind of blind spot.

Now libertarian goggles don’t render the wearer completely unable to perceive the presence of extenuating circumstances surrounding a decision.  Libertarian-gogglers (or at least most of them) aren’t so delusional that they entirely refuse to acknowledge that certain conditions or prior events might make things easier or more difficult for the people they’re observing.  What they refuse to acknowledge is the notion that such factors affect in any way the essential freeness, and therefore the attached moral responsibility, associated with the choices themselves.  In other words, even if they view the factors as factors in some physical or psychological sense, they don’t fully recognize their influence in a metaphysical sense with ethical import.  I assume there’s some limit to the degree of distortion provided by even the strongest libertarian goggles out there — for instance, hopefully the wearer would recognize that the classic scenario of having a gun held to one’s head is a factor that sharply reduces autonomy and the weight of moral responsibility.  But I often suspect that in a number of cases the distortion can be severe enough to stretch the wearer’s perceptions to the edge of what “basic social common sense” allows.

The upshot is that the libertarian-goggler will survey an event that resulted from human choices and zoom in on exactly one of the choices that led to it.  This, of course, is exactly what I was arguing against here, via some sort of “argument by symmetry” showing that there are no grounds for arbitrarily privileging one node over the others.

III. Reaching the logical conclusion

I’ve already alluded to the obvious potential for certain malicious types of bullying that can arise from abusing the guidelines outlined above for navigating life via the libertarian-free-will route (which I would describe as a very narrow path with “tough love” on one side and “overt disgust for those doing worse than you do” on the other).  Let me now mention a related nasty behavior that just naturally appears at the end of the path the libertarian-goggler travels on.  It is often colloquially referred to as “victim-blaming”.

Victim-blaming occurs when a crime is committed against someone and that someone, the victim of the crime, is met with moral judgment for having failed to act more wisely in order to prevent that crime.  I claim that this unfair reaction is essentially the natural logical conclusion of libertarian-free-will-colored thinking.  It is precisely what can result from a habit of wrongly isolating particular agents in a complicated situation as the bearers of primary responsibility.

In my experience, true, explicit victim-blaming (as in actually placing the blame on the victim rather than merely pointing out that they would have been better off doing something differently) is relatively rare on the individual level, although many other types of rhetoric are easily mistaken for it.  However, I’ve definitely seen plenty of the more abstract variant of blaming a governing system for not managing to sufficiently enforce rules against things that will hurt others.  On a small scale, this can take the form of cheating and finding various minor shortcuts that go against the rules because of the unlikelihood of getting caught and/or the light punishments for those who are caught, and then defending one’s behavior on the grounds that “if they cared that much about us not cheating they would do a better job of enforcing the rules”.  Here someone is ignoring all the potential constraints on the rules-enforcement system which is running things, and ultimately putting the blame on the people running that system, rather than themselves, if things go badly (that is, if they cheat and this harms someone else).  And yes, a few years ago I began to notice that the people I knew who seemed to explicitly or implicitly endorse rationalizations of this kind were the ones whose views on current issues seemed the most influenced by libertarian goggles; in fact, I think that very observation is what started me down the train of thought that has led me to writing this post today.  And I’m willing to bet that there’s a strong correlation between full-on victim-blaming and libertarian goggles as well.

But lest I sound like I’m preaching from a high horse, I can definitely point to a blatant example of this behavior in myself.  During my first year in the city where I currently reside, I was living just outside of the city and was very dependent on public transportation.  A lot of my daily movements were within official city limits, but my home was just outside of them.  Very annoyingly, the monthly transportation pass costs twice as much when it includes the zone surrounding the main city.  Each month I paid to recharge my transportation card for within city limits only, with the intention of finding ways to avoid using the one last bus that went outside the limits to take me to my apartment.  But eventually I succumbed to laziness and developed the habit of taking that bus anyway, despite the fact that I had no valid ticket for it.

The bus system here is run in such a way that nobody ever checks for tickets except for controllers who only board buses very occasionally.  The fine for being caught without a ticket is some 30 Euros.  My decision to pay for only the cheaper monthly pass and ride dirty for that one route proved to be a rational one from the point of view of personal finances: visits by controllers are so rare that I was only caught once during that year, and the 30-Euro fine I paid was far less than the money I saved by not paying for extra-urban access.  But more interesting was the way I constantly tried to justify this choice of breaking the rules on an ethical (rather than rational-self-interest) level.  I kept pointing to the fact that obviously this city does a very lackadaisical job of enforcing payment for bus rides.  And somehow I convinced myself (and even still sort of halfway believe) that if I was doing harm by getting illicit rides on that bus, it was really the city’s fault for failing to deter me from it — this was often dressed up as “the fact that they clearly don’t try that hard to enforce it means it must not matter very much to many people”.  Never mind the fact that the city legislators probably have their hands tied in ways I can’t imagine, or that it’s those who are lower on the rungs of the economic ladder rather than the rule-makers who were likely to suffer indirectly from my actions, etc.  Sometimes selective blame-assigning can be so tempting.
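The personal-finance arithmetic can be made explicit.  Only three facts above are fixed by my story — the roughly 30-Euro fine, being caught about once in the year, and the extended-zone pass costing double — so the city-pass price in this sketch is a made-up placeholder; any positive price yields the same conclusion:

```python
# Back-of-the-envelope expected-cost comparison for skipping the
# extended-zone pass.  Only the ~30-Euro fine, the roughly one inspection
# caught per year, and "the wider zone costs double" come from the story;
# the city-pass price below is a made-up placeholder, and any positive
# price leads to the same conclusion.

city_pass = 40.0                 # hypothetical monthly price, city zone only
extended_pass = 2 * city_pass    # the extended-zone pass costs twice as much

fine = 30.0                      # Euros per time caught
times_caught_per_year = 1        # caught once during the year

yearly_savings = 12 * (extended_pass - city_pass)   # 480.0 under these numbers
yearly_fines = times_caught_per_year * fine         # 30.0

print(yearly_savings > yearly_fines)  # True: cheating "pays" financially
```

Which is exactly why the justification I needed had to be ethical rather than financial: the numbers themselves never argued against me.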

If you have read up to this point, you may be objecting that what I’m describing in this section doesn’t really follow as a natural conclusion of what I detailed previously with a node directly pointing to the rightmost one being singled out as the ultimate cause of some effect.  Surely when a crime is committed, the final decision involved in the causal chain is that of the criminal or rule-breaker, so the libertarian-goggler would blame the criminal rather than the victim?  My answer is that I’m not contending that the libertarian-leaning mentality dictates that one should necessarily point the finger at the one whose decision came temporally last.  My thesis is that the libertarian-leaning mentality disregards the difference between agency and ultimate responsibility and singles out one decision as carrying the moral weight, based on whichever one is most convenient to single out.  In a lot of contexts, this is one of the decisions which comes temporally last with an arrow pointing directly to the effect, because absent other deciding features of the situation this may seem like the canonical choice.  In other contexts of personal involvement, the most convenient agent on whom to load the blame is one which is definitely not you or your friends, and preferably one which is remote and faceless (e.g. “the city” for not disciplining bus-riders effectively).

The point is that the libertarian-goggler, wanting to focus on someone’s freedom of choice not bound by other forces, finds themself in the tricky position of selecting one node in the diagram as representing freedom in the truest sense, because the model of absolute freedom begins to break down when considering more than one agent in the same picture.  And sometimes this means they have to be a little bit arbitrary in their selection process.

The world through libertarian goggles can appear an exciting and beautiful place, where everyone has indefinite unfettered potential and is empowered to overcome any seeming obstacles in their way to achieve what they desire provided they desire it badly enough.  But one consequence of denying that difficulties can be legitimate hindrances is that we all feel entitled to withhold help from the less successful lest they drag us down instead of pulling themselves up as we want to keep doing for ourselves.  In the end we face the danger of finding ourselves in a world where blame is bestowed entirely on those who fail for their failures regardless of unfortunate circumstances; credit is doled out only to those who succeed regardless of luck and privilege; and those who climbed their way to the top through whatever means they could get away with feel justified in looking down upon those whose heads they stepped on.  In short, it is a world which legitimizes the domination of the weak by the strong.

Disagreements are like onions II

(or “Why we shouldn’t put all our arguments in one rhetorical basket”)

[Content note: Pulse shooting, homophobia, Islamophobia, gun issues, fundamentalist Christianity, and, sadly, more Donald Trump. A bit on the disjointed side, and perhaps best read as three separate sub-essays.]

As the title suggests, this is a direct follow-up to my last post, “Disagreements are like onions”.

I. Separation, period

…What was I saying? Oh yes, I think all of this can be generalized a little further. In the other post, I suggested that we should make a priority of separating the object level from the meta level, or different “degrees of meta”, when analyzing a given disagreement. One obvious challenge that could be raised against this thesis is whether for any two “layers” of an argument one is really more “meta” than the other in some obvious way. For instance, in the example I gave in the other post about separating the possibility of Trump not being the rightful president from the possibility that his executive orders were wrong, it doesn’t seem that clear whether “legitimacy of election” is the meta-level issue while “morality/legality of executive action” is the object-level issue or vice versa. And it doesn’t really matter — the arguments I was giving were for separating the two, without necessarily applying any particular asymmetric treatment to them.

So the moral of the story as I see it is even a little simpler: just try not to conflate different layers. And now, “layers” is not meant to imply hierarchy with respect to any axis. Considering this in terms of object/meta level distinctions was useful, because it seemed to me that an awful lot of this conflation was between layers that differed in levels of meta-ness, but this isn’t always so.

When we strip away all the talk of object and meta levels and just talk about “levels”, the primary reason for the fallacy becomes even more apparent. A person who is defending a position with many levels is often tempted to throw all of their eggs into the basket of their favorite one, which is often the one which feels easiest to defend.

Although this behavior seems extremely common and I’m sure I’ve been guilty of it plenty of times without realizing it, some of the most blatant (and kind of hilarious) examples of it which come most easily to my mind involve fundamentalist Christian apologetics of the most extreme and crackpotty kind. For instance, I remember hearing an open-air preacher on a university campus who was carrying on, in his slow, booming voice, by giving a rendition of what he considered to be the principal sinful behaviors of us students. It quickly became clear that homosexuality held a position of special status among this horde of evil lifestyle choices, because apparently every single other one was a special case of it. “Extramarital relations is what happens when you give in to your baser passions, so that is a form of homosexuality. Same with pot-smoking, so that is a form of homosexuality. Social Darwinism is also a form of homosexuality. Being a Democrat is a form of homosexuality. Mormonism is a form of homosexuality…” And so on and so on. Now the issue of same-sex attraction isn’t in any obvious way more or less “meta” than questions surrounding these other supposed evils. But it was certainly a hot-button issue at the time as well as evidently this preacher’s specialty, so it was convenient for him to frame absolutely every idea he wanted to attack in terms of homosexuality.

(On a purely comical note, I’m reminded of a Canadian friend who facetiously explained to me that where he grew up, not only do bears represent the epitome of danger, but every threatening thing up there is in fact, at least in some indirect way, a form of bear-ness. As far as I’m concerned, this assertion is really no less ridiculous than that of the evangelical preacher above.)

And while extreme fundamentalist Christians are on my mind, does anyone remember the young-earth creationist Kent “Dr. Dino” Hovind?  His “doctoral dissertation” is available in pdf format online and is another quintessential example of bundling all of one’s ideological opposition into one narrow category.  Apparently, every non-Christian idea that Hovind disliked was yet another face of the “religion of evolution”, throughout all 6,000 years of our world’s existence, from Cain and Abel to the ancient Greek philosophers to Galileo to the origins of Communism.

But atheists have been known to engage in this kind of thing as well.  Around 2012, there was an attempt made by part of the atheist community to splinter off into a group called Atheism Plus, made up of atheists who wanted to stand up for certain specific humanitarian values outside of the very basic brand of humanism that generally goes hand in hand with a positive lack of religious belief.  Although this new movement was advertised by luminaries such as Dr. Richard Carrier as being based simply upon the sentiment that as a group they should stand up against bad behavior on the part of members of the mainstream atheist community, it seemed clear pretty early on that the intent was to bind atheism together with the beliefs of the then-emerging online social justice movement. I can’t help but feel that by attempting to make such object-level beliefs an inherent part of what it meant to be an atheist, the advocates of Atheism Plus were muddying the distinction between the core of a skeptical belief system and adherence to the particular social and political ideas that they liked. I considered the attitude that an atheist committed to social justice shouldn’t be willing to march for secularist causes alongside other atheists who didn’t see exactly eye-to-eye with them on all social issues to be divisive, and I feared that it would weaken both the battle for freedom from religion and the battle for social justice. And it seemed clear that a lot of this arose from a desire (conscious or subconscious) to sneak in a lot of specific tricky, controversial views under the banner of general skepticism, which is a much more easily defensible value at least in a room of committed nonbelievers.

One Atheism-Plus-related essay that stuck in my mind was this manifesto (long, but altogether quite an insightful and relevant read for this discussion, although ultimately I disagree with it).  Here is a particular excerpt whose essence stayed with me years later:

I saw in skepticism a great deal of potential, too. It was a community that had until recently been very much based in the “hard” sciences and in addressing the more objectively falsifiable beliefs that people held, like cryptids, UFOs, alt-med and paranormal phenomena. But I saw absolutely no reason that skepticism couldn’t be compatible with the social justice issues I also cared about, like feminism. I saw in feminism a lot of repeated mistakes made due to a lack of critical inquiry and self-reflection, and rejection of the value of science and that kind of critical thought, and I also believed that a whole lot of what feminism, and other social justice movements, were trying to address was very similar kinds of irrational beliefs and assumptions, stemming from similar human needs and limitations as beliefs in the paranormal. Misogyny, sexism, cissexism, gender binarism, racism, able-ism… these things didn’t seem meaningfully different to me from pseudo-science, new age, woo, religious faith, occultism or the paranormal. All were human beings going for easy, intuitive conclusions based on what they most wanted or needed to believe, and on what most seemed to them to be true, without that moment of doubt, hesitation and humility that skepticism encourages.

What I felt skepticism could offer all of us, in enabling us to cope with our faulty perceptions and thought, was a certain kind of agency. An ability to make a choice about what we believe instead of just going with the comfortable and most apparent truthiness. And in allowing us that agency, in allowing us that choice… we could make the right choices. Instead of settling for what we are, how we tend to see, think and believe… we could try to be something better. We could look to what we could be, to how we could see, think and believe.

In other words, the writer, Natalie Reed, saw certain social justice stances as following from the same skeptical mindset from which atheism also follows and therefore as a necessary byproduct of performing atheism “the right way”. To me, this seemed in tension with what she said in the very next paragraph about freedom and ability to choose beliefs; clearly, Reed saw only one right answer to certain non-deity-related questions and was frustrated that the atheist community as a whole was failing to embrace it.  Here she didn’t come across to me as possessing the Theory of Mind to see that the skepticism that might lead others to non-belief in gods might not lead to non-belief in all of the other things she was skeptical of, or that other skeptics might even consider parts of her socially liberal ideology to be examples of “truthiness” which deserve more skepticism.

Anyway, to leave the arena of religion for more mainstream politics, I’ve also seen left-wing rhetoric along the lines of “being pro-gun is wrong because if you think about it, the presence of guns stifles free speech, which is one of the pillars of our democracy”.  To me this argument appears to be reaching pretty far by making a pretty indirect connection between gun control and a more popular and easier-to-defend American value.  I’m sure that this kind of argumentation is pervasive in right-wing spaces as well — probably lots of bending-over-backwards interpretations of various proposals as boiling down to “more government control” or something like that — but having had very little exposure to those spaces during the last decade, I don’t really know. I see no reason not to suppose that it is present in most ideological communities.

II. Another reason not to draft all arguments as soldiers

In this more general context of separating layers, my point (2) under section III of the last essay (“Upholding a principle that belongs to one ‘layer’ of the disagreement only on grounds of being in the right at another ‘layer’ isn’t upholding the principle at all”) reminds me a lot of something I wrote on my tumblelog (my Tumblr blog) last August.  I link to it here and insert a more up-to-date revision of it as follows.

One major thrust of the rationalist approach to winning arguments is to avoid the “arguments are soldiers” mentality — that is, the attitude that every argument for one’s side of a debate, whether good or bad, is an ideological weapon and all must be deployed if one is to win on the political battlefield.  The argument against using arguments as weapons is itself a call for separating the object from the meta, but I see another objection: namely, that the use of “arguments as soldiers” oftentimes implicitly weakens the good arguments for one’s own side.

To give an example of this, I’m afraid I’m going to dredge up a horrible event from last summer: the Pulse shooting (~50 people killed at an Orlando nightclub).  I was traveling at the time it happened and wasn’t able to research all the updates on what was or wasn’t known about the killer hour by hour, so for a few days I was relying on what was popping up on my Facebook newsfeed.  As tragedies go, this one was especially tricky to respond to rhetorically because in the immediate aftermath there were so many potential political elements of it pertaining to all sides: in particular, Islam, homophobia, and guns.

Within a day, my Facebook was blowing up with articles giving particular views of the very sparse information we had on the killer at that moment.  The main two groups contributing to the political discussion seemed to be liberals who wanted to play up his homophobia and conservatives (as well as a few anti-Islam liberals / libertarians) who wanted to play up his Muslim-ness.  At the time, judging from preliminary reports I saw trickling in, the levels of both of these traits were unclear.  There were rumors in the early hours of the aftermath that he himself was a regular at the club, and that he had a gay dating app on his phone.  Meanwhile, while it was clear that he was a Muslim raised in America, it wasn’t so clear exactly how strong his ties to ISIS and “radical Islam” were.

I’m going to focus now on the emphasis on the killer’s homophobia, mainly because the people pushing it were the ones on “my side” of most issues and vastly outnumbered the others anyway.  Now there’s nothing wrong in the fact that people were focusing on his homophobia.  After all, it’s extremely important to investigate exactly why someone would perform such an evil act, and it’s completely appropriate for us to feel outraged if part of the motive came from such vile bigotry.  And in fact, it looks like these people turned out to be right: he did choose a gay nightclub out of a desire to attack gays, and he certainly wasn’t a regular or openly gay, etc.  But suppose the evidence had come out differently: would it weaken the gay rights cause in any way?  It would not make gay rights one iota less valid if this guy had shot up a gay club out of pure sadism rather than directed bigotry.  I guess maybe it would make the gay rights cause seem an iota or two less worthwhile, because some of the practical value of a cause lies in how many lives will be affected by it (there’s some importance in demonstrating that homophobia kills).  But I’m going to suggest that even that is only affected a tiny bit, since those ~50 lives are still a pretty small fraction of all those who have been killed for being somewhere on the queer spectrum.  My point is not that I was bothered by so many people drawing attention to it (after all, as I have said, this was absolutely appropriate and essential), but that there was this almost-desperate underlying tone of “see, this is why homophobia is bad, and this is why gay people deserve equal rights”.
I know that wasn’t actually what anyone was saying or probably even thinking, but that tone does in my opinion sort of communicate an attitude that the validity of gay rights is conditional on exactly which tragedies have arisen from not acknowledging them: if new evidence were to come in showing that the killer wasn’t anti-gay, then where would that leave us?

This reminds me of the common tactic that atheists use in debate where they make a big point of how many lives have been destroyed in the name of religion, implying that this is why religion is incorrect.  I’ve actually seen Richard Dawkins open a debate on the existence of God with this strategy, then backtrack when he sees his debate opponent is formidable at rebutting that point, saying, “But counting up the number of lives lost due to a particular ideology doesn’t really matter anyway; all I care about is which belief system is true!”  (Unfortunately I can’t recall which debate this was, but I wouldn’t be surprised if it happened more than once.)  Well then, Dr. Dawkins, why didn’t you start by arguing that way in the first place?  In this failed rhetorical maneuver, Dawkins has actually damaged the argument against religion as being antithetical to the objective pursuit of truth by implicitly making this point of view seem delicate, as though it needed to be backed up by statistics on the number of deaths resulting from the failure to choose secularism.

Or, to give another example from the 2016 election campaign, I noticed that many people seemed very anxious to show that Donald Trump was never a competent businessman at all, as though that was the main factor relevant to his candidacy.  As far as I know, a lot of the memes supposedly demonstrating that he hasn’t actually done anything impressive with money were misleading, but I couldn’t actually care less either way because I saw much, much more crucial indications that he was not fit to be president.  I realized that there was some sense in trying to rebut the supporters of Trump who painted him as a savvy businessman, but displaying it front and center in the anti-Trump case seemed to me like a confusion of priorities and actually sort of validated the pro-Trump contention that being successful at business qualifies someone for the presidency.

To summarize, when arguments are used as soldiers in this way, it not only often leads to bad arguments being used, but it weakens other, extremely valid points on the same side.  Then if the bad arguments are eventually knocked down, there’s not quite as much left on display in support of our cause as there would have been if we had stuck to emphasizing the core reasoning behind it in the first place.

In other words, putting all one’s rhetorical eggs in a single basket (i.e. a particular aspect of one’s worldview) is a risky business.  At worst, the basket will break and the rhetorician will lose the whole debate despite the fact that some of their other stances were valid.  And at best, the single idea they’re classifying everything else under will come out looking correct, but sneaking all the other ideas in under it might come across as shady and underhanded, and those other ideas might not get the acknowledgment or credit they deserve.

III. A postscript on the March for Science

Tomorrow a lot of my American friends will be participating in a march which is purportedly a protest against the new presidential administration’s blatant disregard for some of the less popular findings of science in favor of pseudoscience and general “truthiness”.  While I am all for the original cause of this demonstration, I tend to have misgivings about protests in general.  A lot of these misgivings have something to do with what I’ve been discussing above: it seems that such protests are often billed as being about something at least sort of specific, but then a bunch of other statistically-correlated beliefs wind up getting lumped in with the original cause.  This appeared to be the case, for instance, with the American “Occupy Wall Street / 99 Percent” movement in the earlier part of this decade (inasmuch as that movement started out with any specific position in the first place).  It was also apparent at the Women’s March back in January (hello, intersectional feminism!).  I’m not saying that I was actually against any of these demonstrations, and in fact I think that at least some (such as the Women’s March) had wonderful effects.  But I’m bothered by the fact that such protests have a tendency to devolve into a shouting platform that enforces the clustering of a whole bundle of political positions rather than a unified, focused, and concretely-reasoned push for a particular goal.  I’m a member of a Facebook group dedicated to the March For Science, and I’ve certainly already seen a lot of posts there championing areas of science, or even tangential science-related causes like better representation of minorities, etc., which don’t seem directly relevant to the main crises at hand.

That said, the theme of this particular event, Science, is itself of interest when considering the issue of “separating layers”, because the spirit of Science seems in a certain sense to uphold the opposite value to the one I’ve been preaching here.  That is, the idea behind Science is that we are trying to explain empirical phenomena in terms of the most elegant possible models based on natural laws which apply universally.  In other words, Science is on some level all about not considering different questions independently.  For instance, it is often pointed out that to be consistent in one’s denial of biological evolution, one must also deny the validity of a wide range of scientific areas including geology and particle physics.  So I can’t really fault all the posts I see along the lines of “I march because without science we wouldn’t have the medical technology to treat my leukemia!”, even though it would be unfair to directly imply that support for the strains of pseudoscience peddled by the current administration automatically implies opposition to improving the lives of leukemia patients.  After all, the same respect for the scientific process that has led to so many widely celebrated inventions and breakthroughs ought to be applied when it comes to more politically controversial scientific findings as well.

Anyway, it will be interesting to see exactly how tomorrow’s event shapes up.  I guess that as far as my insistence on “separating layers” applies to this situation, I would say that it’s important to realize that it is possible for intellectually honest people to disagree with the scientific consensus on some (object-level) issues without necessarily opposing the (meta-level) values of the scientific process itself.  However, those of us who feel worried about what appears to be a pervasive disregard for science, who feel that people who hold to popular “truthy” beliefs not supported by scientists while otherwise tacitly supporting the scientific process are oftentimes operating on an inconsistent belief system, are certainly quite justified in wanting to engage in peaceful demonstrations against these worrisome modes of thinking.  Or at least as justified as I am in wanting to write long, rambling blog posts about what I consider to be worrisome modes of thinking.

[image] (credit to Kendra Hamilton on Facebook)

Disagreements are like onions

[Content note: this is another attempt to convey one of those fundamental ideas which I feel strongly about deep down but is still a little hard to communicate, so I once again erred on the side of long and dry.  Part 1, hopefully to be continued.  Some political examples, especially Trump-related; how can I resist?]

Finally I’ve gotten around to writing the remaining lengthy, cerebral post I’ve been wanting to get out of my system right from the get-go (really, it’s been in my system for a lot of my life).  I want to talk about object levels versus meta levels and Theory of Mind and everything that comes with it.  I’m worried that this post may become overly long and sprawling because it’s such a far-reaching topic in my view, but at least there’s one thing that makes life a lot easier here: a number of people whose blogs I follow have touched on this directly or indirectly in their writings many times.  By pointing attention to such things, they have done a lot of my work for me.  Also, I’m going to postpone a few of the ideas I have in mind to be put in a second post.

Here is a list (nowhere near exhaustive) of what I consider to be some of the more crucial posts of Alexander’s which address the general issue of Theory of Mind / Object-Meta Distinction in one way or another:

There are many, many more essays written by Alexander and others which apply these principles without quite so directly acknowledging them.  In particular, I’ve seen this from other prominent rationalist community members like Ozy (who runs the blog Thing of Things) as well as from Rob Bensinger, although off the top of my head I can’t produce any links since they both write prolifically in a lot of different places and I don’t have such a good memory for their individual articles and/or comments.  This post is my attempt to unify all of these points expressed by them and others into one concept.

But first, here is a series of example scenarios of a variety of flavors in order to motivate the idea.

I. A collection of very short stories

In recent years there have been a number of controversies surrounding high-profile individuals who hold views that are unsavory in some way or other and who were punished for voicing those views, by losing their jobs for example, or just by not being allowed a microphone.  “A Comment I Posted on ‘What Would JT Do?'” addresses one of these cases, where Duck Dynasty star Phil Robertson was suspended for voicing highly offensive views.  In it, Alexander expresses frustration with the network for suspending Robertson, arguing that regardless of what side we’re on, we should adhere to the norm of responding to views we don’t like with counterarguments rather than silencing.  Alexander later came to the defense of Brendan Eich when he was fired as CEO of Mozilla for similar reasons.  Much more recently, there has been a lot of discussion in the rationalist community about the forceful protests against the very presence of certain alt-right-ish speakers at universities.  Most seem to agree that regardless of how one feels about what we might call the “object-level situation” (Robertson’s or Eich’s or these speakers’ “object-level” positions that we don’t agree with), we should give priority to certain “meta-level” rules (e.g. allowing the opportunity for proponents of all beliefs to take the podium).  Although it’s clearly not quite that simple.  Because, waving aside the whole issue of the “free speech” defense being flawed when “freedom of speech” is understood in the most literal sense, there are some individuals, like possibly Milo Yiannopoulos, who have strayed beyond simply expressing their views into outright bullying.  There seems to be a fine line between speech that is offensive to some groups and actual threats to the safety of members of those groups.  So how exactly do we separate the “object level” from the “meta level” in situations like these?

There has been a particular theme in the debates I’ve (probably foolishly) gotten into with friends over a lot of things relating to the new presidential administration in America.  Many are arguing that we right-thinking Americans who are anti-Trump should refuse to acknowledge Mr. Trump as our president altogether.  They are more or less saying, as I understand it, that the horrid views he has Trumpeted were sufficient reason for various other authorities to have barred him from becoming president in the first place through some sort of brute force, to have refused to attend his inauguration, and to get him impeached as soon as possible.  It seems pretty revealing to me that in the midst of some of these “not my president” arguments, the fact that Trump has almost certainly done many highly illegal things is thrown right in with policy positions such as being anti-abortion or (allegedly) anti-gay-rights.  While I agree that he’s “not my president” in the sense of not representing anything I stand for, I vehemently oppose the calls for immediate impeachment, as long as they’re motivated by pure principle rather than objective legal reasoning.  My main argument has a lot to do with how the other side will view what would look like purely political strong-arming in the highly unlikely event that such efforts actually succeed.  I don’t think anyone could completely deny this concern, but apparently I hold unusually strong convictions about the particular importance of considering how other people’s minds will process our behavior.

A few weeks ago I was asked an interesting question by a friend, also pertaining to the American political situation.  We were talking about speculations that some Trump campaign officials engaged in illicit communications with Russian agents, thus swinging the election in his favor.  My friend put forth the idea that if it is ever proven beyond reasonable doubt that Trump won the election through illegal means, then his executive orders should be considered illegal purely by virtue of the fact that he isn’t the rightful president.  I replied that I disagreed with this proposal.  Trump’s actions as president should be evaluated purely on their own merits (legal, moral, etc.), given the fact that he somehow got into the position he’s in.  In other words, I want our judgments of his becoming president and each thing he does as president to be evaluated as independently as possible.  That way, if we mess up our evaluation of one, this doesn’t affect how we react to the others.  Besides, I believe that both the travel ban and the disastrous first attempt at executing it (these two aspects can be judged separately as well!) were despicable and deserving of harsh judgment quite independently of whether Trump’s presidency itself is legitimate, so it just doesn’t seem fitting somehow for Trump to face legal consequences for the travel ban purely on the grounds that something unlawful was done in his presidential campaign months earlier.  Besides, again, one should consider what his supporters would make of us punishing him for a multitude of actions using the singular strategy of somehow convincing enough people that he never really got elected.

Now let’s move to personal drama of a sort that I’ve seen play out more times than I can count.  Suppose that Alice and Bob are in some kind of close relationship, and Alice gets upset with Bob about something and, let’s say, starts berating him in a tone that somehow goes over the line or with a lot of vulgar language or just generally in a borderline-verbally-abusive way.  Bob disagrees with the reasons why Alice is upset but focuses his resentment around the unacceptable way she talks to him when she’s angry.  Alice’s rebuttal is to point out that Bob yells at her in an equally unpleasant way when he’s upset with her for any reason, and she gives some past examples to lend evidence to the point.  Bob replies that those times were different because for X, Y, and Z reasons, he was right in those arguments and therefore justified in his nasty tone and diction, whereas today she’s wrong in her arguments and thus has no right to talk to him that way.  They are — or at least Bob is — conflating two issues here which should be separate discussions: the specific things they get into arguments about, and the way they talk to each other when they get angry about such things.

I know someone who has insisted multiple times that the word “insult” refers not merely to saying nasty things about someone, but to saying nasty things about someone that are unwarranted.  I have looked up the definition of the verb “to insult” in multiple dictionaries and have asked several others what they consider it to mean, and all evidence points to this person being wrong about the definition of “insult”.  But setting aside explicitly agreed-upon uses of words and the confusion that results from going against them, let’s grant that we can define terms in whatever way we choose as long as we’re consistent about how we use them.  To define “insult” as a valid description of a certain unpleasant behavior only as long as it is unjustified given that particular situation weakens one’s ability to separate a personal dispute into two disagreements (the particulars of why they are arguing, and the way they talk to each other when angry) as in the case of Alice and Bob above.  Insisting on such a definition of “insult” betrays a certain mindset.

(Interestingly, I was corrected on my use of “flattery” several times when I was younger, because I understood it to mean, well, more or less the opposite of “insult” regardless of the sincerity or validity of the flatterer’s claim, while I was told that an effusive compliment doesn’t count as flattery if it’s actually obviously true.  This does seem more or less in keeping with dictionary definitions of “flattery”, although it looks slightly different from the “insult” situation since “to flatter” is meant to carry a connotation of insincerity.)

II. Separation of degrees

I believe that lying at the heart of all the situations described above there is a fundamental concept in common.  Sometimes we might talk about it in terms of “meta levels” and “object levels” (e.g. Alice and Bob have an object-level disagreement but also a meta-level problem with how they work through disagreements).  I’ve developed a habit of using this language quite a lot actually; I’m always telling myself that I’ll look back on this writing one day years from now and cringe, thinking it looks sort of rhetorically immature to refer to “object” and “meta” things so often, but right now it still often seems like the best way to make my point.

At other times, we might speak of Theory of Mind as explained in some of the links I gave above (e.g. we have to operate on some consideration of the minds of Trump supporters).  I claim — and I hope to argue here at least in an indirect way — that both of these ways of analyzing disagreements point to the same underlying fallacy.

Out of all the rationality-flavored topics that I care about and have been writing essays on, this one lies closest to my heart.  I remember, at around the age of 12, first becoming aware that I innately processed certain arguments in a seemingly very different way from the (equally intelligent and much more experienced) people around me.  These disagreements were all of the flavor of the scenarios described above, where my frustration was with those who didn’t seem to realize that there are certain general rules which we all must agree to follow regardless of who is right or wrong in a particular dispute, because all parties are equally convinced that they’re right.  And that it’s no good to criticize a person you’re disagreeing with for not following some general rule on the grounds that they’re wrong about specifics when they don’t agree that they’re wrong on the specifics; in fact, it’s bound to further irritate them and push them away.  By the start of my teenage years, being bothered by this was already starting to feel like a major hangup that I was almost alone in suffering from, and part of me hoped and expected to outgrow it.  Yet here I am.  I can’t explain precisely why I’ve always felt as intensely about this as I do, although it’s clearly related in some way to the Principle of Charity, as in Scott Alexander’s framing in some of the links above (or to my modified Principle of Empathy).

When I first ran into the rationalist community, perhaps the number one reason I started identifying with the individuals therein was that they all seem to intuitively grasp what I’m getting at here.  Sure, some might disagree with how I’m framing it in this essay (maybe because my framing is arguably not the most valid, but more likely due to lack of lucidity in expressing these concepts), but I never fail to feel assured that they get it.  Of course, “it” is rarely directly discussed in purely abstract terms rather than in the context of a particular concrete topic.  But like I said at the beginning, “it” exists as a thread running through the writing of Alexander, Ozy, and many others.

So is there a way of framing this in more definitive, purposeful language than “there’s some object- vs. meta-level thing or some Theory of Mind stuff going on here”?

Well, let’s start with Scott Alexander’s arguments on seeing issues in terms of object and meta levels in his writing which I linked to above, particularly in the “Slate Star Codex Political Spectrum Quiz”.  (Warning to anyone reading this who hasn’t gone to the link yet and is interested in taking the quiz: I’m about to “spoil” it.)  Here Alexander posits a series of questions, each of which describes a brief political conundrum and gives two choices as to how to proceed.  The catch is that he has cleverly paired the questions into couples which depict scenarios that are very similar on some “meta” level while (very roughly) the roles of “object” level political positions are switched (e.g. a question about a visit by the Dalai Lama being protested by a local Chinese minority is paired with a question about a memorial to southern Civil War veterans being protested by a local African-American minority).  The final score on the quiz is computed using a system that gives the quiz-taker one point for answering “the same way” on a pair of questions, thus displaying meta-level consistency.  The final evaluation is given as follows:

Score of 0 to 3: You are an Object-Level Thinker. You decide difficult cases by trying to find the solution that makes the side you like win and the side you dislike lose in that particular situation.

Score of 4 to 6: You are a Meta-Level Thinker. You decide difficult cases by trying to find general principles that can be applied evenhandedly regardless of which side you like or dislike.
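(As an aside for the programmatically inclined: the scoring rule is simple enough to sketch in a few lines of code.  This is purely my own illustrative reconstruction, not Alexander’s actual quiz data: the question ids, pairings, and answers below are invented, and I’m assuming the answer options are labeled such that the meta-consistent choice carries the same label in both members of a pair.)

```python
# Hypothetical sketch of the quiz's scoring rule: questions come in pairs
# mirroring each other's meta-level structure, and the taker earns one
# point per pair answered "the same way" (i.e. meta-consistently).
# Assumes options are labeled so the meta-consistent choice has the same
# label ('A' or 'B') in both questions of a pair.
def score_quiz(answers, pairs):
    """answers: dict mapping question id -> chosen option ('A' or 'B');
    pairs: list of (q1, q2) tuples of paired question ids."""
    return sum(1 for q1, q2 in pairs if answers[q1] == answers[q2])

# Invented example: six questions grouped into three pairs; this taker
# answers consistently on two of the three pairs.
answers = {1: 'A', 2: 'A', 3: 'B', 4: 'A', 5: 'B', 6: 'B'}
pairs = [(1, 2), (3, 4), (5, 6)]
print(score_quiz(answers, pairs))  # prints 2
```

On this toy scale, a score of 3 would mark a fully “Meta-Level Thinker” and 0 a fully “Object-Level Thinker”.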

Many have undoubtedly taken this, along with Alexander’s many other articles which seem to take the “meta-level side” (applying general principles across the board, including when he doesn’t like the side whose rights he’s supporting), to imply that he favors meta-level thinking over object-level thinking and that we’re all “supposed to” score a 6 on the quiz.  I think I myself interpreted Alexander’s tone this way for a while.  Then I realized that this isn’t necessarily the right lesson to take away from it.  I can’t speak for Scott Alexander’s exact position here, but I do distinctly recall Rob Bensinger remarking in a different comment section that the Slate Star Codex Political Spectrum Quiz serves as an eloquent rebuttal to the attitude that one should always operate on the meta level.  I guess it depends on how one feels about the particular questions asked in the quiz, but I do have to agree that the correct message shouldn’t be to only think on the meta level.  Sometimes there are exceptional object-level circumstances which change the meta-level rules slightly.  For instance, if our Alice and Bob from above are a married couple who have agreed to try never to let their voices rise above a certain volume when fighting with each other, then one of them might be justified in bending this meta-level rule just a bit in the fight that ensues after finding out that the other, for instance, just gambled away their entire joint life savings without asking, or has been cheating with seven other partners.

Also — this is a much more superficial objection that is easy to remedy — of course it doesn’t make sense to consider any conflict to have exactly two levels, the “object” one and the “meta” one, because real conflicts are often complicated enough to involve many degrees of “meta-ness”.  For instance, two nations which are run on competing political philosophies (e.g. communism versus capitalism, in this case an object-level disagreement) may try to avoid war with each other in the absence of a particular type of threat or provocation (avoiding force is a meta-level rule), but in the case that they do declare war, they may try to follow international laws pertaining to conduct in war (as in the Geneva Conventions, meta-meta-level rules).  And after all, Alexander talks about an indefinite number of “steps” in the above-linked post on an “n-step theory of mind”.

So we should view any disagreement as likely having many layers of meta-ness, like an onion.  (One may consider the more “meta” layers as being closer to the center of the onion, but I sort of prefer to think of going outward as one gets more “meta”, since meta-level considerations should be a bit more all-encompassing).  And there is no hard-and-fast rule as to some level which will always take precedence over all others in judging any disagreement.  Instead, I think the correct message boils down to something even simpler: we should be aware that these different layers of a disagreement exist; and we should address them all separately in our arguments (even if they aren’t entirely independent).  For a long time, to myself I’ve been referring to this as “separating levels” or “separating layers” or even “separating degrees of meta”.

Where does Theory of Mind come in?  Well, in my experience the general way to fail at the goal I set out above involves disregard for the fact that others’ minds work independently from one’s own.  After all, the most common way to conflate these layers is to insist to one’s opponents that what should be uniform meta-rules need only be applied selectively, depending entirely on the object-level situation.  And it seems to me that the best way to justify this to oneself is to forget that one’s opponents hold differing convictions on the object-level situation which feel just as genuine as one’s own.  That’s basically, by definition, displaying a lack of Theory of Mind.

III. What goes wrong?

When claiming something as a fallacy, I believe it’s always good form to explain why the fallacy leads one astray as well as why people persist in it despite the fact that it leads one astray.  (It’s also nice to suggest a positive solution, but in this case, I don’t have any bright ideas beyond the self-evident “that mode of thinking is wrong, so don’t do that thing”.)

When thinking over why I don’t like it when people “conflate layers” of disagreements, I can’t help treating “reasons why this conflation is logically invalid” and “reasons why this conflation is bad rhetoric which will push people away rather than win arguments” as interchangeable.  Here are a couple of points which may fit one or both criteria.

1) Defending one’s stance on a meta-level issue using one’s stance on object-level issues won’t actually convince anyone not already on board.  If two parties disagree on the object-level issues (which I usually take to be the matter of disagreement which started the conflict in the first place), then for one party to defend their behavior of breaking some meta rule on the grounds that they are right on the object-level issue is a waste of breath.  From what I’ve seen (and from what I feel when this is done to me), it only makes the other party more angry and frustrated.  A valid argument starts from premises that everyone involved agrees on and builds on them to convince one’s opponent of something they didn’t already accept.  An attempt at an argument based on a premise that one’s opponent never agreed on is bound to completely fail at accomplishing this.

2) Upholding a principle that belongs to one “layer” of the disagreement only on grounds of being in the right at another “layer” isn’t upholding the principle at all.  This can be seen in my second example with the Trump administration, where using the illegitimacy of Trump’s election to indict him for an executive order sort of implicitly excuses the illegality of the order itself.  Or, going back to our friends Bob and Alice, if Bob says, “I still think you’re wrong on the issue we were fighting about, but much worse than that, the names you called me are completely unacceptable!”, and Alice points out that Bob calls her similar names from time to time (perhaps even in that same fight), and Bob replies, “But I was justified in talking to you that way because you were wrong there!”… well then Bob is essentially implying that there’s nothing innately bad about calling someone those names at all.

Or to take a slightly more universal example, when a child lies to their parent about having done something wrong, the lesson handed to them is often something along the lines of “The naughty thing you did isn’t nearly as bad as the fact that you lied about it!”  But if the child soon afterwards catches their parents themselves lying to avoid getting into trouble for something they did, then justifying it on the grounds of not thinking their crime was actually bad, then there’s a risk of the child coming away very confused about the wrongness of lying.  And I’m not saying that there isn’t a circumstance where the parents’ words and actions might still be completely justified — there are some things that are against the (object-level) rules but which may still be morally right and okay to lie about (i.e. these “layers” do sometimes interfere with each other).  But a parent in this situation should at least be aware of the confusion that might result when laying down a blanket (meta-level) rule that lying is always wrong even when you’re trying to get out of trouble for doing something you feel was okay.

IV. Why do we go wrong?

I expect one could always cite the usual reason where people are prone to not thinking clearly, and to not having a strong Theory of Mind, especially when this allows for rhetoric which seems to work in their favor in the heat of the moment.  As for something more concrete, I think “conflating layers” mainly boils down to one major temptation.

Tying together two different issues in a disagreement allows one to justify oneself based on whichever one is easier to defend.  It’s easier to argue against homophobia itself than to argue purely on the meta-level that someone doesn’t deserve a public platform, so many don’t want to make the effort to separate the issue of the unsavory views of Robertson and Eich and their ilk from the issue of whether they have a right to keep their jobs despite their views.  If we obtain proof that the Trump campaign actually did clinch the election illegally, it will be easier to convince everyone that Trump isn’t the rightful president than to demonstrate that his travel ban was wrong, so a lot of us would feel inclined to use the illegitimacy of Trump’s presidency to condemn his attempt at the travel ban.  It may be easier during a particular argument to defend one’s object-level stance than to defend one’s use of nasty insults, so it’s tempting to define the term itself to depend on one’s rightness or wrongness on the object level.

In other words, while one can’t judge the layers of every argument completely independently, by treating them as all part of one singular issue of controversy it becomes way too easy to get away with all kinds of rhetorical shortcuts, so that one can defend one’s stance throughout the whole onion based only on the most easily justifiable layer.  It enables a bait-and-switch behavior which is similar to (or perhaps just a particular flavor of) the motte-and-bailey tactic.

…and actually, I believe all of this can be generalized slightly further, but I’ll save that for another post which (I hope) will appear here soon.