Disagreements are like onions II

(or “Why we shouldn’t put all our arguments in one rhetorical basket”)

[Content note: Pulse shooting, homophobia, Islamophobia, gun issues, fundamentalist Christianity, and, sadly, more Donald Trump. A bit on the disjointed side, and perhaps best read as three separate sub-essays.]

As the title suggests, this is a direct follow-up to my last post, “Disagreements are like onions”.

I. Separation, period

…What was I saying? Oh yes, I think all of this can be generalized a little further. In the other post, I suggested that we should make a priority of separating the object level from the meta level, or different “degrees of meta”, when analyzing a given disagreement. One obvious challenge that could be raised against this thesis is whether for any two “layers” of an argument one is really more “meta” than the other in some obvious way. For instance, in the example I gave in the other post about separating the possibility of Trump not being the rightful president from the possibility that his executive orders were wrong, it doesn’t seem that clear whether “legitimacy of election” is the meta-level issue while “morality/legality of executive action” is the object-level issue or vice versa. And it doesn’t really matter — the arguments I was giving were for separating the two, without necessarily applying any particular asymmetric treatment to them.

So the moral of the story as I see it is even a little simpler: just try not to conflate different layers. And here, “layers” is not meant to imply a hierarchy with respect to any axis. Considering this in terms of object/meta level distinctions was useful, because it seemed to me that an awful lot of this conflation was between layers that differed in levels of meta-ness, but this isn’t always so.

When we strip away all the talk of object and meta levels and just talk about “levels”, the primary reason for the fallacy becomes even more apparent. A person who is defending a position with many levels is often tempted to throw all of their eggs into the basket of their favorite one, which is often the one which feels easiest to defend.

Although this behavior seems extremely common and I’m sure I’ve been guilty of it plenty of times without realizing it, some of the most blatant (and kind of hilarious) examples of it which come most easily to my mind involve fundamentalist Christian apologetics of the most extreme and crackpotty kind. For instance, I remember hearing an open-air preacher on a university campus who was carrying on, in his slow, booming voice, by giving a rendition of what he considered to be the principal sinful behaviors of us students. It quickly became clear that homosexuality held a position of special status among this horde of evil lifestyle choices, because apparently every single other one was a special case of it. “Extramarital relations is what happens when you give in to your baser passions, so that is a form of homosexuality. Same with pot-smoking, so that is a form of homosexuality. Social Darwinism is also a form of homosexuality. Being a Democrat is a form of homosexuality. Mormonism is a form of homosexuality…” And so on and so on. Now the issue of same-sex attraction isn’t in any obvious way more or less “meta” than questions surrounding these other supposed evils. But it was certainly a hot-button issue at the time as well as evidently this preacher’s specialty, so it was convenient for him to frame absolutely every idea he wanted to attack in terms of homosexuality.

(On a purely comical note, I’m reminded of a Canadian friend who facetiously explained to me that where he grew up, not only do bears represent the epitome of danger, but every threatening thing up there is in fact, at least in some indirect way, a form of bear-ness. As far as I’m concerned, this assertion is really no less ridiculous than that of the evangelical preacher above.)

And while extreme fundamentalist Christians are on my mind, does anyone remember the young-earth creationist Kent “Dr. Dino” Hovind?  His “doctoral dissertation” is available in PDF format online and is another quintessential example of bundling all of one’s ideological opposition into one narrow category.  Apparently, every non-Christian idea that Hovind disliked was yet another face of the “religion of evolution”, throughout all 6,000 years of our world’s existence, from Cain and Abel to the ancient Greek philosophers to Galileo to the origins of Communism.

But atheists have been known to engage in this kind of thing as well.  Around 2012, there was an attempt made by part of the atheist community to splinter off into a group called Atheism Plus, made up of atheists who wanted to stand up for certain specific humanitarian values outside of the very basic brand of humanism that generally goes hand in hand with a positive lack of religious belief.  Although this new movement was advertised by luminaries such as Dr. Richard Carrier as being based simply upon the sentiment that as a group they should stand up against bad behavior on the part of members of the mainstream atheist community, it seemed clear pretty early on that the intent was to bind atheism together with the beliefs of the then-emerging online social justice movement. I can’t help but feel that by attempting to make such object-level beliefs an inherent part of what it meant to be an atheist, the advocates of Atheism Plus were muddying the distinction between the core of a skeptical belief system and adherence to the particular social and political ideas that they liked. I considered the attitude that an atheist committed to social justice shouldn’t be willing to march for secularist causes alongside other atheists who didn’t see exactly eye-to-eye with them on all social issues to be divisive, and I feared that it would weaken both the battle for freedom from religion and the battle for social justice. And it seemed clear that a lot of this arose from a desire (conscious or subconscious) to sneak in a lot of specific tricky, controversial views under the banner of general skepticism, which is a much more easily defensible value at least in a room of committed nonbelievers.

One Atheism-Plus-related essay that stuck in my mind was this manifesto (long, but altogether quite an insightful and relevant read for this discussion, although ultimately I disagree with it).  Here is a particular excerpt whose essence stayed with me years later:

I saw in skepticism a great deal of potential, too. It was a community that had until recently been very much based in the “hard” sciences and in addressing the more objectively falsifiable beliefs that people held, like cryptids, UFOs, alt-med and paranormal phenomena. But I saw absolutely no reason that skepticism couldn’t be compatible with the social justice issues I also cared about, like feminism. I saw in feminism a lot of repeated mistakes made due to a lack of critical inquiry and self-reflection, and rejection of the value of science and that kind of critical thought, and I also believed that a whole lot of what feminism, and other social justice movements, were trying to address was very similar kinds of irrational beliefs and assumptions, stemming from similar human needs and limitations as beliefs in the paranormal. Misogyny, sexism, cissexism, gender binarism, racism, able-ism… these things didn’t seem meaningfully different to me from pseudo-science, new age, woo, religious faith, occultism or the paranormal. All were human beings going for easy, intuitive conclusions based on what they most wanted or needed to believe, and on what most seemed to them to be true, without that moment of doubt, hesitation and humility that skepticism encourages.

What I felt skepticism could offer all of us, in enabling us to cope with our faulty perceptions and thought, was a certain kind of agency. An ability to make a choice about what we believe instead of just going with the comfortable and most apparent truthiness. And in allowing us that agency, in allowing us that choice… we could make the right choices. Instead of settling for what we are, how we tend to see, think and believe… we could try to be something better. We could look to what we could be, to how we could see, think and believe.

In other words, the writer, Natalie Reed, saw certain social justice stances as following from the same skeptical mindset from which atheism also follows and therefore as a necessary byproduct of performing atheism “the right way”. To me, this seemed in tension with what she said in the very next paragraph about freedom and ability to choose beliefs; clearly, Reed saw only one right answer to certain non-deity-related questions and was frustrated that the atheist community as a whole was failing to embrace it.  Here she didn’t come across to me as possessing the Theory of Mind to see that the skepticism that might lead others to non-belief in gods might not lead to non-belief in all of the other things she was skeptical of, or that other skeptics might even consider parts of her socially liberal ideology to be examples of “truthiness” which deserve more skepticism.

Anyway, to leave the arena of religion for more mainstream politics, I’ve also seen left-wing rhetoric along the lines of “being pro-gun is wrong because if you think about it, the presence of guns stifles free speech, which is one of the pillars of our democracy”.  To me this argument appears to be reaching pretty far, drawing a rather indirect connection between gun control and a more popular and easier-to-defend American value.  I’m sure that this kind of argumentation is pervasive in right-wing spaces as well — probably lots of bending-over-backwards interpretations of various proposals as boiling down to “more government control” or something like that — but having had very little exposure to those spaces during the last decade, I don’t really know. I see no reason not to suppose that it is present in most ideological communities.

II. Another reason not to draft all arguments as soldiers

In this more general context of separating layers, my point (2) under section III of the last essay (“Upholding a principle that belongs to one ‘layer’ of the disagreement only on grounds of being in the right at another ‘layer’ isn’t upholding the principle at all”) reminds me a lot of something I wrote on my tumblelog (my Tumblr blog) last August.  I link to it here and insert a more up-to-date revision of it as follows.

One major thrust of the rationalist approach to winning arguments is to avoid the “arguments are soldiers” mentality — that is, the attitude that every argument for one’s side of a debate, whether good or bad, is an ideological weapon and all must be deployed if one is to win on the political battlefield.  The argument against using arguments as weapons is itself a call for separating the object from the meta, but I see another objection: namely, that the use of “arguments as soldiers” oftentimes implicitly weakens the good arguments for one’s own side.

To give an example of this, I’m afraid I’m going to dredge up a horrible event from last summer: the Pulse shooting (~50 people killed at an Orlando nightclub).  I was traveling at the time it happened and wasn’t able to research all the updates on what was or wasn’t known about the killer hour by hour, so for a few days I was relying on what was popping up on my Facebook newsfeed.  As tragedies go, this one was especially tricky to respond to rhetorically because in the immediate aftermath there were so many potential political elements of it pertaining to all sides: in particular, Islam, homophobia, and guns.

Within a day, my Facebook was blowing up with articles giving particular views of the very sparse information we had on the killer at that moment.  The main two groups contributing to the political discussion seemed to be liberals who wanted to play up his homophobia and conservatives (as well as a few anti-Islam liberals / libertarians) who wanted to play up his Muslim-ness.  At the time, judging from preliminary reports I saw trickling in, the levels of both of these traits were unclear.  There were rumors in the early hours of the aftermath that he himself was a regular at the club, and that he had a gay dating app on his phone.  Meanwhile, while it was clear that he was a Muslim, he had been raised in America, and it wasn’t so clear exactly how strong his ties to ISIS and “radical Islam” were.

I’m going to focus now on the emphasis on the killer’s homophobia, mainly because the people pushing it were the ones on “my side” of most issues and vastly outnumbered the others anyway.  Now there’s nothing wrong in the fact that people were focusing on his homophobia.  After all, it’s extremely important to investigate exactly why someone would perform such an evil act, and it’s completely appropriate for us to feel outraged if part of the motive came from such vile bigotry.  And in fact, it looks like these people turned out to be right: he did choose a gay nightclub out of a desire to attack gays, and he certainly wasn’t a regular or openly gay, etc.  But suppose the evidence had come out differently: would it weaken the gay rights cause in any way?  It would not make gay rights one iota less valid if this guy had shot up a gay club out of pure sadism rather than directed bigotry.  I guess maybe it would make the gay rights cause seem an iota or two less worthwhile, because some of the practical value of a cause lies in how many lives will be affected by it (there’s some importance in demonstrating that homophobia kills).  But I’m going to suggest that even that is only affected a tiny bit, since those lost lives are still a pretty small fraction of all those who have been killed for being somewhere on the queer spectrum.  My point is not that I was bothered by so many people drawing attention to it (after all, as I have said, this was absolutely appropriate and essential), but that there was this almost-desperate underlying tone of “see, this is why homophobia is bad, and this is why gay people deserve equal rights”.  I know that wasn’t actually what anyone was saying or probably even thinking, but that tone does in my opinion sort of communicate an attitude that the validity of gay rights is conditional on exactly which tragedies have arisen from not acknowledging them: if new evidence were to come in showing that the killer wasn’t anti-gay, then where would that leave us?

This reminds me of the common tactic that atheists use in debate where they make a big point of how many lives have been destroyed in the name of religion, implying that this is why religion is incorrect.  I’ve actually seen Richard Dawkins open a debate on the existence of God with this strategy, then backtrack when he sees his debate opponent is formidable at rebutting that point, saying, “But counting up the number of lives lost due to a particular ideology doesn’t really matter anyway; all I care about is which belief system is true!”  (Unfortunately I can’t recall which debate this was, but I wouldn’t be surprised if it happened more than once.)  Well then, Dr. Dawkins, why didn’t you start by arguing that way in the first place?  In this failed rhetorical maneuver, Dawkins has actually damaged the case that religion is antithetical to the objective pursuit of truth by implicitly making this point of view seem delicate, as though it needed to be backed up by statistics on the number of deaths resulting from the failure to choose secularism.

Or, to give another example from the 2016 election campaign, I noticed that many people seemed very anxious to show that Donald Trump was never a competent businessman at all, as though that was the main factor relevant to his candidacy.  As far as I know, a lot of the memes supposedly demonstrating that he hasn’t actually done anything impressive with money were misleading, but I couldn’t actually care less either way because I saw much, much more crucial indications that he was not fit to be president.  I realized that there was some sense in trying to rebut the supporters of Trump who painted him as a savvy businessman, but displaying it front and center in the anti-Trump case seemed to me like a confusion of priorities and actually sort of validated the pro-Trump contention that being successful at business qualifies someone for the presidency.

To summarize, when arguments are used as soldiers in this way, it not only often leads to bad arguments being used, but it weakens other, extremely valid points on the same side.  Then if the bad arguments are eventually knocked down, there’s not quite as much left on display in support of our cause as there would have been if we had stuck to emphasizing the core reasoning behind it in the first place.

In other words, putting all one’s rhetorical eggs in a single basket (i.e. a particular aspect of one’s worldview) is a risky business.  At worst, the basket will break and the rhetorician will lose the whole debate despite the fact that some of their other stances were valid.  And at best, the single idea they’re classifying everything else under will come out looking correct, but sneaking all the other ideas in under it might come across as shady and underhanded, and those other ideas might not get the acknowledgment or credit they deserve.

III. A postscript on the March for Science

Tomorrow a lot of my American friends will be participating in a march which is purportedly a protest against the new presidential administration’s blatant disregard for some of the less popular findings of science in favor of pseudoscience and general “truthiness”.  While I am all for the original cause of this demonstration, I tend to have misgivings about protests in general.  A lot of these misgivings have something to do with what I’ve been discussing above: it seems that such protests are often billed as being about something at least sort of specific, but then a bunch of other statistically-correlated beliefs wind up getting lumped in with the original cause.  This appeared to be the case, for instance, with the American “Occupy Wall Street / 99 Percent” movement in the earlier part of this decade (inasmuch as that movement started out with any specific position in the first place).  It was also apparent at the Women’s March back in January (hello, intersectional feminism!).  I’m not saying that I was actually against any of these demonstrations, and in fact I think that at least some (such as the Women’s March) had wonderful effects.  But I’m bothered by the fact that such protests have a tendency to devolve into a shouting platform that enforces the clustering of a whole bundle of political positions rather than a unified, focused, and concretely-reasoned push for a particular goal.  I’m a member of a Facebook group dedicated to the March for Science, and I’ve certainly already seen a lot of posts there championing areas of science, or even tangential science-related causes like better representation of minorities, etc., which don’t seem directly relevant to the main crises at hand.

That said, the theme of this particular event, Science, is itself of interest when considering the issue of “separating layers”, because the spirit of Science seems in a certain sense to uphold the opposite value to the one I’ve been preaching here.  That is, the idea behind Science is that we are trying to explain empirical phenomena in terms of the most elegant possible models based on natural laws which apply universally.  In other words, Science is on some level all about not considering different questions independently.  For instance, it is often pointed out that to be consistent in one’s denial of biological evolution, one must also deny the validity of a wide range of scientific areas including geology and particle physics.  So I can’t really fault all the posts I see along the lines of “I march because without science we wouldn’t have the medical technology to treat my leukemia!”, even though it would be unfair to directly imply that support for the strains of pseudoscience peddled by the current administration automatically implies opposition to improving the lives of leukemia patients.  After all, the same respect for the scientific process that has led to so many widely celebrated inventions and breakthroughs ought to be applied when it comes to more politically controversial scientific findings as well.

Anyway, it will be interesting to see exactly how tomorrow’s event shapes up.  I guess that as far as my insistence on “separating layers” applies to this situation, I would say that it’s important to realize that it is possible for intellectually honest people to disagree with the scientific consensus on some (object-level) issues without necessarily opposing the (meta-level) values of the scientific process itself.  However, those of us who are worried by what appears to be a pervasive disregard for science (who feel that people holding popular “truthy” beliefs unsupported by scientists, while otherwise tacitly endorsing the scientific process, are oftentimes operating on an inconsistent belief system) are certainly quite justified in wanting to engage in peaceful demonstrations against these worrisome modes of thinking.  Or at least as justified as I am in wanting to write long, rambling blog posts about what I consider to be worrisome modes of thinking.

(Image credit: Kendra Hamilton on Facebook)

Disagreements are like onions

[Content note: this is another attempt to convey one of those fundamental ideas which I feel strongly about deep down but is still a little hard to communicate, so I once again erred on the side of long and dry.  Part 1, hopefully to be continued.  Some political examples, especially Trump-related; how can I resist?]

Finally I’ve gotten around to writing the remaining lengthy, cerebral post I’ve been wanting to get out of my system right from the get-go (really, it’s been in my system for a lot of my life).  I want to talk about object levels versus meta levels and Theory of Mind and everything that comes with it.  I’m worried that this post may become overly long and sprawling because it’s such a far-reaching topic in my view, but at least there’s one thing that makes life a lot easier here: a number of people whose blogs I follow have touched on this directly or indirectly in their writings many times.  By pointing attention to such things, they have done a lot of my work for me.  Also, I’m going to postpone a few of the ideas I have in mind to be put in a second post.

Here is a list (nowhere near exhaustive) of what I consider to be some of the more crucial posts of Alexander’s which address the general issue of Theory of Mind / Object-Meta Distinction in one way or another:

There are many, many more essays written by Alexander and others which apply these principles without quite so directly acknowledging them.  In particular, I’ve seen this from other prominent rationalist community members like Ozy (who runs the blog Thing of Things) as well as from Rob Bensinger, although off the top of my head I can’t produce any links since they both write prolifically in a lot of different places and I don’t have such a good memory for their individual articles and/or comments.  This post is my attempt to unify all of these points expressed by them and others into one concept.

But first, here is a series of example scenarios of a variety of flavors in order to motivate the idea.

I. A collection of very short stories

In recent years there have been a number of controversies surrounding high-profile individuals who hold views that are unsavory in some way or other and who were punished for saying those views, by losing their job for example, or just by not being allowed a microphone.  “A Comment I Posted on ‘What Would JT Do?'” addresses one of these cases, where Duck Dynasty star Phil Robertson was suspended for voicing highly offensive views.  In it, Alexander expresses frustration with the network for suspending Robertson, arguing that regardless of what side we’re on, we should adhere to the norm of responding to views we don’t like with counterarguments rather than silencing.  Alexander later came to the defense of Brendan Eich when he was pressured into resigning as CEO of Mozilla for similar reasons.  Much more recently, there has been a lot of discussion in the rationalist community about the forceful protests against the very presence of certain alt-right-ish speakers at universities.  Most seem to agree that regardless of how one feels about what we might call the “object-level situation” (Robertson or Eich or these speakers’ “object-level” positions that we don’t agree with), we should give priority to certain “meta-level” rules (e.g. allowing the opportunity for proponents of all beliefs to take the podium).  Although it’s clearly not quite that simple: waving aside the whole issue of the “free speech” defense being flawed when “freedom of speech” is understood in the most literal sense, there are some individuals, like possibly Milo Yiannopoulos, who have strayed beyond simply expressing their views into outright bullying.  There seems to be a fine line between speech that is offensive to some groups and actual threats to the safety of members of those groups.  So how exactly do we separate the “object level” from the “meta level” in situations like these?

There has been a particular theme in the debates I’ve (probably foolishly) gotten into with friends over a lot of things relating to the new presidential administration in America.  Many are arguing that we right-thinking Americans who are anti-Trump should refuse to acknowledge Mr. Trump as our president altogether.  They are more or less saying, as I understand it, that the horrid views he has Trumpeted were sufficient reason for various other authorities to have barred him from becoming president in the first place through some sort of brute force, to have refused to go to his inauguration, and to get him impeached as soon as possible.  It seems pretty revealing to me that in the midst of some of these “not my president” arguments, the fact that Trump has almost certainly done many highly illegal things is thrown right in with policy positions such as being anti-abortion or (allegedly) anti-gay-rights.  While I agree that he’s “not my president” in the sense of not representing anything I stand for, I vehemently oppose the calls for immediate impeachment, as long as they’re motivated by pure principle rather than objective legal reasoning.  My main argument has a lot to do with how the other side will view what would look like purely political strong-arming in the highly unlikely event that such efforts actually succeed.  I don’t think anyone could completely deny this concern, but apparently I hold unusually strong convictions about the particular importance of considering how other people’s minds will process our behavior.

A few weeks ago I was asked an interesting question by a friend, also pertaining to the American political situation.  We were talking about speculations that some Trump campaign officials engaged in illicit communications with Russian agents, thus swinging the election in his favor.  My friend put forth the idea that if it is ever proven beyond reasonable doubt that Trump won the election through illegal means, then his executive orders should be considered illegal purely by virtue of the fact that he isn’t the rightful president.  I replied that I disagree with this proposal.  Trump’s actions as president should be evaluated purely on their own merits (legal, moral, etc.), given the fact that he somehow got into the position he’s in.  In other words, I want his becoming president and each thing he does as president to be judged as independently as possible.  That way, if we mess up our evaluation of one, this doesn’t affect how we react to the others.  Besides, I believe that both the travel ban and the disastrous first attempt at executing it (these two aspects can be judged separately as well!) were despicable and deserving of harsh judgment quite independently of whether Trump’s presidency itself is legitimate, so it just doesn’t seem fitting somehow for Trump to face legal consequences for the travel ban purely on the grounds that something unlawful was done in his presidential campaign months earlier.  Besides, again, one should consider what his supporters would make of us punishing him for a multitude of actions using the singular strategy of somehow convincing enough people that he never really got elected.

Now let’s move to personal drama of a sort that I’ve seen play out more times than I can count.  Suppose that Alice and Bob are in some kind of close relationship, and Alice gets upset with Bob about something and, let’s say, starts berating him in a tone that somehow goes over the line or with a lot of vulgar language or just generally in a borderline-verbally-abusive way.  Bob disagrees with the reasons why Alice is upset but focuses his resentment around the unacceptable way she talks to him when she’s angry.  Alice’s rebuttal is to point out that Bob yells at her in an equally unpleasant way when he’s upset with her for any reason, and she gives some past examples to lend evidence to the point.  Bob replies that those times were different because for X, Y, and Z reasons, he was right in those arguments and therefore justified in his nasty tone and diction, whereas today she’s wrong in her arguments and thus has no right to talk to him that way.  They are — or at least Bob is — conflating two issues here which should be separate discussions: the specific things they get into arguments about, and the way they talk to each other when they get angry about such things.

I know someone who has insisted multiple times that the word “insult” refers not merely to saying nasty things about someone, but to saying nasty things about someone that are unwarranted.  I have looked up the definition of the verb “to insult” in multiple dictionaries and have asked several others what they consider it to mean, and all evidence points to this person being wrong about the definition of “insult”.  But setting aside explicitly agreed-upon uses of words and the confusion that results from going against them, let’s grant that we can define terms in whatever way we choose as long as we’re consistent about how we use them.  To define “insult” as a valid description of a certain unpleasant behavior only as long as it is unjustified given that particular situation weakens one’s ability to separate a personal dispute into two disagreements (the particulars of why they are arguing, and the way they talk to each other when angry) as in the case of Alice and Bob above.  Insisting on such a definition of “insult” betrays a certain mindset.

(Interestingly, I was corrected on my use of “flattery” several times when I was younger, because I understood it to mean, well, more or less the opposite of “insult” regardless of sincerity or validity of the claim of the flatterer, while I was told that an effusive compliment doesn’t count as flattery if it’s actually obviously true.  This does seem more or less in keeping with dictionary definitions of “flattery”, although it looks slightly different from the “insult” situation since “to flatter” is meant to carry a connotation of insincerity.)

II. Separation of degrees

I believe that a common fundamental concept lies at the heart of all the situations described above.  Sometimes we might talk about it in terms of “meta levels” and “object levels” (e.g. Alice and Bob have an object-level disagreement but also a meta-level problem with how they work through disagreements).  I’ve developed a habit of using this language quite a lot actually; I’m always telling myself that I’ll look back on this writing one day years from now and cringe thinking it looks sort of rhetorically immature to refer to “object” and “meta” things so often, but right now it still often seems like the best way to make my point.

At other times, we might speak of Theory of Mind as explained in some of the links I gave above (e.g. we have to operate on some consideration of the minds of Trump supporters).  I claim — and I hope to argue here at least in an indirect way — that both of these ways of analyzing disagreements point to the same underlying fallacy.

Out of all the rationality-flavored topics that I care about and have been writing essays on, this one lies closest to my heart.  I remember first becoming aware, at around the age of 12, that I innately processed certain arguments in seemingly a very different way from the (equally intelligent and much more experienced) people around me.  These disagreements were all of the flavor of the scenarios described above, where my frustration was with those who didn’t seem to realize that there are certain general rules which we all must agree to follow regardless of who is right or wrong in a particular dispute, because all parties are equally convinced that they’re right.  And that it’s no good to criticize a person you’re disagreeing with for not following some general rule on the grounds that they’re wrong about specifics when they don’t agree that they’re wrong on the specifics; in fact, it’s bound to further irritate them and push them away.  By the start of my teenage years, being bothered by this was already starting to feel like a major hangup that I was almost alone in suffering from, and part of me hoped and expected to outgrow it.  Yet here I am.  I can’t explain precisely why I’ve always felt as intensely about this as I do, although it’s clearly related in some way to the Principle of Charity, as in Scott Alexander’s framing in some of the links above (or to my modified Principle of Empathy).

When I first ran into the rationalist community, perhaps the number one reason I started identifying with the individuals therein was that they all seem to intuitively grasp what I’m getting at here.  Sure, some might disagree with how I’m framing it in this essay (maybe because my framing is arguably not the most valid, but more likely due to lack of lucidity in expressing these concepts), but I never fail to feel assured that they get it.  Of course, “it” is rarely directly discussed in purely abstract terms rather than in the context of a particular concrete topic.  But like I said at the beginning, “it” exists as a thread running through the writing of Alexander, Ozy, and many others.

So is there a way of framing this in more definitive, purposeful language than “there’s some object- vs. meta-level thing or some Theory of Mind stuff going on here”?

Well, let’s start with Scott Alexander’s arguments on seeing issues in terms of object and meta levels in his writing which I linked to above, particularly in the “Slate Star Codex Political Spectrum Quiz”.  (Warning to anyone reading this who hasn’t gone to the link yet and is interested in taking the quiz: I’m about to “spoil” it.)  Here Alexander posits a series of questions, each of which describes a brief political conundrum and gives two choices as to how to proceed.  The catch is that he has cleverly arranged the questions into pairs which depict scenarios that are very similar on some “meta” level while (very roughly) the roles of “object” level political positions are switched (e.g. a question about a visit by the Dalai Lama being protested by a local Chinese minority is paired with a question about a memorial to southern Civil War veterans being protested by a local African-American minority).  The final score on the quiz is computed using a system that gives the quiz-taker one point for answering “the same way” on a pair of questions, thus displaying meta-level consistency.  The final evaluation is given as follows:

Score of 0 to 3: You are an Object-Level Thinker. You decide difficult cases by trying to find the solution that makes the side you like win and the side you dislike lose in that particular situation.

Score of 4 to 6: You are a Meta-Level Thinker. You decide difficult cases by trying to find general principles that can be applied evenhandedly regardless of which side you like or dislike.
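(To make the scoring mechanics concrete, here is a minimal sketch of how such a tally could be computed.  This is purely my own illustration, not code associated with the quiz; the answer encoding and function names are invented, and only the 0–3 / 4–6 thresholds come from the quoted text.)

from typing import List, Tuple

def meta_consistency_score(paired_answers: List[Tuple[str, str]]) -> int:
    # One point per pair answered "the same way", i.e. with the same
    # general principle applied to both mirrored scenarios.
    return sum(1 for first, second in paired_answers if first == second)

def evaluate(score: int) -> str:
    # Thresholds taken from the quiz text quoted above.
    return "Object-Level Thinker" if score <= 3 else "Meta-Level Thinker"

# Hypothetical run: six pairs, four of them answered consistently.
answers = [("A", "A"), ("A", "B"), ("B", "B"), ("A", "A"), ("B", "A"), ("A", "A")]
print(meta_consistency_score(answers), evaluate(meta_consistency_score(answers)))  # 4 Meta-Level Thinker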

Many have undoubtedly taken this, along with Alexander’s many other articles which seem to take the “meta-level side” (applying general principles across the board including when he doesn’t like the side whose rights he’s supporting), to imply that he favors meta-level thinking over object-level thinking and that we’re all “supposed to” score a 6 on the quiz.  I think I myself interpreted Alexander’s tone this way for a while.  Then I realized that this isn’t necessarily the right lesson to take away from it.  I can’t speak for Scott Alexander’s exact position here, but I do distinctly recall Rob Bensinger remarking in a different comment section that the Slate Star Codex Political Spectrum Quiz serves as an eloquent rebuttal to the attitude that one should always operate on the meta level.  I guess it depends on how one feels about the particular questions asked in the quiz, but I do have to agree that the correct message shouldn’t be to only think on the meta level.  Sometimes there are exceptional object-level circumstances which change the meta-level rules slightly.  For instance, if our Alice and Bob from above are a married couple who have agreed to try never to let their voices rise above a certain volume when fighting with each other, then one of them might be justified in bending this meta-level rule just a bit in the fight that ensues after finding out that the other, for instance, just gambled away their entire joint life savings without asking, or has been cheating with seven other partners.

Also — this is a much more superficial objection that is easy to remedy — of course it doesn’t make sense to consider any conflict to have exactly two levels, the “object” one and the “meta” one, because real conflicts are often complicated enough to involve many degrees of “meta-ness”.  For instance, two nations which are run on competing political philosophies (e.g. communism versus capitalism, in this case an object-level disagreement) may try to avoid war with each other in the absence of a particular type of threat or provocation (avoiding force is a meta-level rule), but in the case that they do declare war, they may try to follow international laws pertaining to conduct in war (as in the Geneva Conventions, meta-meta-level rules).  And after all, Alexander talks about an indefinite number of “steps” in the above-linked post on an “n-step theory of mind”.

So we should view any disagreement as likely having many layers of meta-ness, like an onion.  (One may consider the more “meta” layers as being closer to the center of the onion, but I sort of prefer to think of going outward as one gets more “meta”, since meta-level considerations should be a bit more all-encompassing).  And there is no hard-and-fast rule as to some level which will always take precedence over all others in judging any disagreement.  Instead, I think the correct message boils down to something even simpler: we should be aware that these different layers of a disagreement exist; and we should address them all separately in our arguments (even if they aren’t entirely independent).  For a long time, to myself I’ve been referring to this as “separating levels” or “separating layers” or even “separating degrees of meta”.

Where does Theory of Mind come in?  Well, in my experience the general way to fail at the goal I set out above involves disregard for the fact that others’ minds work independently from one’s own.  After all, the most common way to conflate these layers is to insist to one’s opponents that what should be uniform meta-rules need only be applied selectively, depending entirely on the object-level situation.  And it seems to me that the best way to justify this to oneself is to forget that one’s opponents hold differing convictions on the object-level situation which feel just as genuine as one’s own.  That’s basically, by definition, displaying a lack of Theory of Mind.

III. What goes wrong?

When claiming something as a fallacy, I believe it’s always good form to explain why the fallacy leads one astray as well as why people persist in it despite the fact that it leads one astray.  (It’s also nice to suggest a positive solution, but in this case, I don’t have any bright ideas beyond the self-evident “that mode of thinking is wrong, so don’t do that thing”.)

When thinking over why I don’t like it when people “conflate layers” of disagreements, I can’t help treating “reasons why this conflation is logically invalid” and “reasons why this conflation is bad rhetoric which will push people away rather than win arguments” as interchangeable.  Here are a couple of points which may fit one or both criteria.

1) Defending one’s stance on a meta-level issue using one’s stance on object-level issues won’t actually convince anyone not already on board.  If two parties disagree on the object-level issues (which I usually take to be the matter of disagreement which started the conflict in the first place), then for one party to defend their behavior of breaking some meta rule on the grounds that they are right on the object-level issue is a waste of breath.  From what I’ve seen (and from what I feel when this is done to me), it only makes the other party more angry and frustrated.  A valid argument uses premises that everyone involved agrees on and then uses those to convince one’s opponent of something they didn’t agree about.  An attempt at an argument based on a premise that one’s opponent never agreed on is bound to completely fail at accomplishing this.

2) Upholding a principle that belongs to one “layer” of the disagreement only on grounds of being in the right at another “layer” isn’t upholding the principle at all.  This can be seen in my second example with the Trump administration, where using the illegitimacy of Trump’s election to indict him for an executive order sort of implicitly excuses the illegality of the order itself.  Or, going back to our friends Bob and Alice, if Bob says, “I still think you’re wrong on the issue we were fighting about, but much worse than that, the names you called me are completely unacceptable!”, and Alice points out that Bob calls her similar names from time to time (perhaps even in that same fight), and Bob replies, “But I was justified in talking to you that way because there you were wrong!”… well then Bob is essentially implying that there’s nothing innately bad about calling someone those names at all.

Or to take a slightly more universal example, when a child lies to their parent about having done something wrong, the lesson handed to them is often something along the lines of “The naughty thing you did isn’t nearly as bad as the fact that you lied about it!”  But if the child soon afterwards catches their parents themselves lying to avoid getting into trouble for something they did, and then justifying it on the grounds that they don’t consider their own offense to have actually been bad, there’s a risk of the child coming away very confused about the wrongness of lying.  And I’m not saying that there isn’t a circumstance where the parents’ words and actions might still be completely justified — there are some things that are against the (object-level) rules but which may still be morally right and okay to lie about (i.e. these “layers” do sometimes interfere with each other).  But a parent in this situation should at least be aware of the confusion that might result when laying down a blanket (meta-level) rule that lying is always wrong even when you’re trying to get out of trouble for doing something you feel was okay.

IV. Why do we go wrong?

I expect one could always cite the usual reason where people are prone to not thinking clearly, and to not having a strong Theory of Mind, especially when this allows for rhetoric which seems to work in their favor in the heat of the moment.  As for something more concrete, I think “conflating layers” mainly boils down to one major temptation.

Tying together two different issues in a disagreement allows one to justify oneself based on whichever one is easier to defend.  It’s easier to argue against homophobia itself than to argue purely on the meta-level that someone doesn’t deserve a public platform, so many don’t want to make the effort to separate the issue of the unsavory views of Robertson and Eich and their ilk from the issue of whether they have a right to keep their jobs despite their views.  If we obtain proof that the Trump campaign actually did clinch the election illegally, it will be easier to convince everyone that Trump isn’t the rightful president than to demonstrate that his travel ban was wrong, so a lot of us would feel inclined to use the illegitimacy of Trump’s presidency to condemn his attempt at the travel ban.  It may be easier during a particular argument to defend one’s object-level stance than to defend one’s use of nasty insults, so it’s tempting to define the term itself to depend on one’s rightness or wrongness on the object level.

In other words, while one can’t judge the layers of every argument completely independently, by treating them as all part of one singular issue of controversy it becomes way too easy to get away with all kinds of rhetorical shortcuts, so that one can defend one’s stance throughout the whole onion based only on the most easily justifiable layer.  It enables a bait-and-switch behavior which is similar to (or perhaps just a particular flavor of) the motte-and-bailey tactic.

…and actually, I believe all of this can be generalized slightly further, but I’ll save that for another post which (I hope) will appear here soon.

Agency does not imply moral responsibility [the brief version]

[Content note: uncharacteristically short and sweet.]

The object of this very short essay is to concisely state a proposition and brief argument which I refer to frequently but was lacking a suitable post to link to.  This is one of the central points of my longest essay, “Multivariate Utilitarianism“, but it’s buried most of the way down, and it seems less than ideal to link to “Multivariate Utilitarianism” each time I want to make an off-hand allusion to the idea.

Here is how I would briefly summarize it, using the template of a mathematical paper (even though the content won’t be at all rigorous, I’m afraid).

Proposition. The fact that an agent X acts in a way that results in some event A which increases/decreases utility does not imply that X bears the moral responsibility attached to this change in utility.  In other words, agency does not imply moral responsibility.

Proof (sketch). One way to see that agency cannot imply moral responsibility in a situation where multiple agents are involved is through the following simple argument by contradiction.  Suppose there are at least two agents X and Y whose actions bring about some event that creates some change in utility.  If X had acted otherwise, then this change in utility wouldn’t have happened, so if we assume that agency implies moral responsibility, then X bears responsibility (credit or blame) proportional to the change in utility.  By symmetry (supposing that the event depended on Y’s action in just the same way), we see that Y also bears the same responsibility.  But both cannot be fully responsible for the same change in utility — or at least, that seems absurd.
One naïve approach to remedy this would be to divide the moral responsibility equally between all agents involved.  However, working with actual examples shows that this quickly breaks down into another absurd situation, mainly because the roles of all parties creating an event are not all equally significant.  We are forced to conclude that there is no canonical algorithm for assigning moral responsibility to each agent, which in particular implies the statement of the proposition.
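(To restate the contradiction compactly, in my own notation, and with one assumption made explicit that the sketch above leaves implicit, namely that shares of responsibility for a single event should sum to the total: write R(X) for the moral responsibility assigned to agent X, and let Δu ≠ 0 be the change in utility the event produces.  If agency implied full moral responsibility, we would have

\[ R(X) = \Delta u \quad\text{and}\quad R(Y) = \Delta u, \qquad\text{while also}\qquad R(X) + R(Y) = \Delta u, \]

which forces 2Δu = Δu, i.e. Δu = 0, contradicting the assumption that the event changed utility at all.)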

Remark. (a) The above argument seems quite obvious (at least when stated in more everyday language) but is often obscured by the fact that in situations with multiple agents, usually only one agent is being discussed at a particular time.  That is, people say “If X had acted differently, A wouldn’t have happened; therefore, X bears moral responsibility for A” without ever mentioning Y.
(b) A lot of “is versus ought” type questions boil down to special cases of this concept.  To state “circumstances are this way, so one should do A” is not to state “circumstances should be this way, so one should have to do A”.

Example.  Here I quote a scenario I laid out in my longer post:

[There are] two drivers, Mr. X and Ms. W, who each choose to drive at a certain speed at a particular moment (let’s call Mr. X’s speed x and Ms. W’s speed w), such that if either one of them goes just a bit faster right now, then there will be a collision which will do a lot of damage to overall utility (which, as before, we call y).  At least naïvely, from the point of view of Mr. X, it doesn’t make sense in the heat of the moment to compute the optimal change in w as well as the optimal change in x, since he has no direct control over w.  He can only determine how to best adjust x, his own speed (the answer, by the way, is perhaps to decrease it or at least definitely not to increase it!), and apart from that all he can do is hope that Ms. W likewise acts responsibly with her speed w… If y represents utility, then our agent Mr. X should increase x if and only if ∂y/∂x is positive.  After all, he has no idea what Ms. W might do with w and can’t really do anything about it, so he should proceed with his calculations as though w is staying at its current value.
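(Spelled out in symbols, in my own notation and under the assumption, only implicit in the quote, that utility is a differentiable function of the two speeds: writing y = f(x, w), with (x₀, w₀) the current speeds, the quoted decision rule is

\[ \text{increase } x \iff \frac{\partial f}{\partial x}(x_0, w_0) > 0, \]

with w held fixed at w₀ precisely because Mr. X has no direct control over it.)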

That’s what each agent should do.  I’ve said nothing about how much either of them is deserving of praise or blame in the outcome of their actions.

The proposition states that in fact without knowing further details about exactly what the two drivers did, we have no information on how blameworthy Mr. X is for the accident.


 

To state it (or perhaps overstate it) bluntly, I cite this “agency ⇏ responsibility” proposition in an attempt to remedy what I believe is a ubiquitous fallacy at the bottom of many if not most misunderstandings.  I wish everyone in the Hawks and Handsaws audience a Happy New Year and look forward to writing more here in 2017!


Confronting unavoidable gadflies

[Content note: An elaboration of something I’ve tried to describe before.  I didn’t even try to avoid serious political issues this time.  Welfare, death penalty, generational conflict, religion.]

This is a follow-up to “Speculations of my inner gadfly”.

In my earlier gadfly-related post, I tried to describe an idea that had been buzzing around in my head for some time (pun intended?  I’m not sure) which helps to describe how I view certain types of disagreements and bad arguments.  I think it turned out to be one of my better-written entries for this blog and by some measures seems to have been the most popular.  And yet, when I look back on it, I feel like I was mostly pointing out something already obvious to everyone (despite my repeated hedging of “I don’t mean only to point out the obvious here…”) and didn’t manage to really capture the essence of the common role of “gadfly speculations” as I see it.  This post will be in large part an attempt to clarify my ideas by taking the whole “gadfly” concept in a slightly different direction.  (By the way, most of the terminology and metaphors I’ve come up with so far for expressing my thoughts on this blog make me wince, but I think I actually like the general gadfly metaphor, so I’m going to run with it as long as it doesn’t wear out.)

I. The inevitable truth of grand-scale speculations

Before really getting into the meat-and-potatoes of this post, I need to clarify one important point.  In the other gadfly-related essay, I described inconvenient, perhaps ridiculous-sounding possibilities which may or may not turn out to be correct (and very often aren’t) but stressed that we have to face them anyway rather than brush them aside.  I pointed out that you can always evaluate their likelihood later, but it’s important to at least let them enter your conscious consideration first.  While this certainly wasn’t an invalid point for me to make, I’m afraid it may have been misleading in terms of conveying the way I usually think of “gadfly speculations”.

The fact is that most social controversies that we find ourselves considering involve large numbers of humans and their motivations, the effects that a certain course of action may have on them, and so on.  In these situations, practically every possibility that realistically occurs to us regarding the way some humans might act is correct, but perhaps only for a small minority of the humans involved.  In fact, as soon as such a speculation occurs to us, unless it’s completely bonkers at the level of lizardmen conspiracy theories, it must be true at least occasionally or at least for a few people.  Indeed, it would seem very strange if it were never true.

For a real-world example, take the constant debate over government-provided welfare.  Fiscal conservatives tend to argue, or at least insinuate, that a number of citizens on welfare are using these government programs to game the system in some way.  And regardless of our political affiliations, when we stop to objectively consider this, we have to agree that in a certain literal sense this is correct.  The key phrase in the proposition mentioned above is “a number of”.  It’s not clear exactly how many people are gaming the welfare system.  Maybe they are so few as to be irrelevant when the benefits of having a social safety net are taken into account.  But if we have a country where millions of citizens are on welfare, and the welfare system is pretty complicated, then it stands to reason (or at least common sense) that there is a feasible way to abuse it and that some of those citizens are in fact abusing it.  It would really be astounding if nobody were abusing it.
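(A back-of-the-envelope calculation, with numbers invented purely for illustration, shows why.  If each of N recipients independently abuses the system with even a tiny probability q, then

\[ P(\text{nobody abuses it}) = (1 - q)^N \approx e^{-qN}, \]

so with, say, N = 10⁷ recipients and q = 10⁻⁵, the chance that literally nobody is gaming the system comes out to about e⁻¹⁰⁰, which is zero for all practical purposes.)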

Similarly, if we all assume for the sake of argument that certain sufficiently heinous criminals “deserve” the death penalty (I put “deserve” in quotes because I don’t really know what that means, but that’s a topic for another post), then we all have to admit, regardless of our stances on the death penalty, that the proposition “Some defendants will be wrongly convicted” is correct.  The key word is “some”.  This is a weaker example than the last one, since far fewer humans have been sentenced to death in modern history than are on welfare, but I still suspect that the forensic science involved is so complex and still imperfect enough even today that there must be wrongful convictions at least occasionally.  I would be astonished to find out that there have been zero wrongful convictions in the last several decades.

Now I realize that there are far more outlandish suggestions out there regarding every controversy that affects so many people’s lives, and maybe it’s plausible that some of the most extreme ones don’t hold for any of the humans involved.  For instance, I seriously doubt that a single one of the millions of individuals on welfare is secretly trying to aid a band of extraterrestrials bent on taking over the earth through weapons which can be powered only by government-signed welfare checks.  However, most speculations this far out in left field aren’t pervasive in the common discourse and generally don’t enter our minds (even subconsciously) in the first place.

So these uncomfortable thoughts that gadflies persistently whisper to us generally don’t have a chance of being completely false.  In fact, as soon as we hear them, we are obliged to admit that it would be quite shocking for them to be entirely false.  Evaluating them becomes a question of to what degree and on how great a scale they are true.

I reiterate what I said in the other post: we tend to dismiss these inconvenient ideas out of hand because acknowledging them means more work for us in our assessment of any situation, and our brains are lazy.  If we acknowledge that at least a few folks will abuse the welfare system, then that obligates us to go through a tricky cost-benefit analysis when arguing in favor of it, which is considerably more difficult than emphasizing more and more stridently that welfare provides necessary aid to many citizens.  And yet, if we at least attempt to argue that abuse of the welfare system is sufficiently rare, then that obligates our opponents to rebut that with an attempt to show that such abuse is unacceptably frequent (rather than argue against welfare simply by complaining that it can be abused), and a potentially productive discussion ensues.
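(As a toy model of what that cost-benefit analysis might look like in its crudest form, with variables entirely of my own invention rather than anything resembling a serious policy calculation: suppose a fraction p of recipients abuse the system, each honest recipient receives an average net social benefit B, and each abuser imposes an average net cost C.  Then the program is net-positive exactly when

\[ (1 - p)B - pC > 0 \quad\Longleftrightarrow\quad p < \frac{B}{B + C}, \]

so the productive disagreement becomes an empirical one about the actual size of p relative to that threshold, rather than about whether abuse exists at all.)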

There is an analog of this notion in the context of small-scale conflicts — say, drama between two individuals — as well: many of the possibilities that try to latch themselves to our minds are almost certainly true on some level.  For instance, if it occurs to you that the reason your friend didn’t show up to your party has something to do with an unintentionally rude remark you made to her the week before, then that is probably playing some role (however small) in her behavior, even if the primary reason for her absence turns out to be an unusually high level of work-related stress.  But this doesn’t apply in nearly as absolute a way as it does for issues involving more people.  And for the purposes of this post, it’s mostly large-scale debates that I’m interested in.

II. The inevitable use of grand-scale debate tactics

Now let’s kick it up a level: in debates which involve a large number of humans, pretty much any speculation about how the opposing side will argue must be correct.

A. The Boomer-Millennial Conflict for Dummies

Here’s a good exercise for considering how a given position might be argued: pretend that you’re an alien with no knowledge whatsoever about human history or problems but who wants to argue a particular side of a human controversy of which you know only the basic definitions of the relevant terms, with the minimum possible extra research.

Take, for instance, the constant rhetorical warfare between the baby boomer and millennial generations.  Suppose you were an alien knowing nothing about American culture, generational subcultures, or any of the dynamics involved.  You only know the definition of “baby boomer”: it’s a human born during the “baby boom” from the mid-’40s to the mid-’60s, which is so called because of a marked increase in the birth rate.  How would you go about attacking baby boomers?  Well, let’s see, the first thing that comes to mind is that because by definition there are a lot of them, they are to blame for what in some people’s minds might be a dangerously high population.  But you can’t go far with this criticism, because nobody can be reasonably held to blame for having been born.  So what occurs to you next?  Well, again, tautologically there are a lot of baby boomers; they make up a disproportionately large portion of the human population.  So if there’s any fault that baby boomers are likely to be prone to, it might be… that they have an over-inflated sense of self-importance, or they behave as though everything is about them, or something.

And sure enough, it’s not hard to find articles like this one, or books like this (see Chapter 7).  I also distinctly remember the preachy right-leaning political comic strip Mallard Fillmore characterizing baby boomers this way (clumsily paraphrasing from memory: “This just in: baby boomers have finally realized that society doesn’t revolve around them!  Unfortunately, they now think it revolves around the federal government.”), but after half an hour of searching for old Mallard Fillmore strips with roughly those words, I can’t find it.  And yes, if I google “baby boomers”, the first attack articles I find are ones which accuse baby boomers of ruining the economy for millennials, since a lack of jobs for young people is the biggest specific issue at play in the inter-generational war right now.  But one has to admit that the hypothetical alien who knew nothing about our current economic woes did a pretty good job at coming up with an anti-baby-boomer talking point which is actually used substantially in the real world, given a bare minimum of knowledge regarding the baby boomer generation.  The “think everything revolves around them” allegation isn’t the primary criticism nowadays, but it is still relevant in the discourse.  That talking point may not usually be backed up by explicitly claiming that the source of their perceived self-importance is that there are disproportionately many of them.  But the fact that baby boomers comprise a prominent demographic certainly strengthens the credibility of the “think everything revolves around them” criticism.

So if one who is looking to defend baby boomers goes through the above exercise, the result is a gadfly speculation on opposing debate tactics rather than on the facts of the generation-war issue itself: “But the opposition might try to frame things in terms of baby boomers thinking everything’s about them!”  And this turns out to be true, to some extent.  For any controversial issue about which many people are arguing in public from all different sides — or even when only two people are debating, but both are passionate and knowledgeable about many aspects of it — any hypothetical talking point that comes to mind in this way will play at least a minor role.

I like the baby boomer example because one can already come up with a possible criticism by considering only the definition of “baby boomer”.  Usually it requires knowing more than basic definitions, but only a little more.  For instance, if you want instead to attack millennials, and imagine yourself as an alien searching for a good anti-millennial talking point based on a minimal amount of research, you only have to learn about one of the main issues involving millennials today: they complain about a dearth of jobs and general broke-ness.  Now forget the specifics of what they’re complaining about, and ask yourself, what’s the easiest route to discrediting someone who complains?  By claiming that they feel entitled, of course (see below).  Or how does one go about lampooning someone who has trouble finding a job or just generally falls into some kind of bad fortune?  By portraying them as lazy, or irresponsible, or lacking in judgment or initiative, etc.

B. General examples

Here are some broad examples of opposing rhetorical tactics which are bound to show up, each of which applies to a variety of real-life debates.

  • “This media outlet / group has a pro-X bias!” vs. “Reality has a pro-X bias!”: I’m starting with this one because I think it might be the most pervasive of all of my examples.  If one party complains that the media or a particular outlet of it is biased in some way, then regardless of specifics, the most obvious strategy for rebuttal is to claim that its portrayal of the situation reflects how things really are.  This is particularly visible in conservative criticisms of the media (or particular news outlets) as having liberal bias, which instigates the response that “reality has a liberal bias”.  It is also a prominent feature of the evolution vs. creation debate, as well as other disputes between skeptics and defenders of academic consensus.  When one party makes an accusation of bias, their opposition is pretty much guaranteed to counter that the source isn’t biased but right.  The flip side of this is, of course, “This high-profile source says X is true!” vs. “That source must be biased then!”
  • “We have a legitimate grievance!” vs. “You’re just a bunch of whiners!”: This is the hallmark of debates that hinge on choosing between deterministic and free-choice explanations for a current unfortunate situation.  Closely related is the inevitable attack of “your bad fortune is your own fault” aimed at the aggrieved.  There are too many real-world controversies involving this for me to name here, and in fact I’ve tried to argue before that this is a component of all Left-vs.-Right political issues in America.  Nowadays the concept of “privilege” and related terminology usually show up throughout these disputes.
  • “We got here by hard work!” vs. “You got there by unfair advantage!”: The flip side of the above rhetorical template.  Also frequently seen in disputes over privilege and free choice vs. determinism.
  • “We deserve better!” vs. “You’re just entitled!”: Also closely related to the grievance/whiners exchange.  If one isn’t up for countering that the other party’s bad fortune is manufactured because they’re looking to complain, or is just their own fault anyway, then one can take this route.  Whatever “entitled” even means.
  • “Our lived experiences have made us wiser!” vs. “Your lived experiences have made you paranoid / naïve!”: I’ve seen this show up in a lot of more personal conflicts — by claiming experience as evidence of wisdom, one opens oneself up to suggestions that experience can distort one’s perceptions to one’s disadvantage as well.
  • “Person/group X sounds overconfident / refuses to admit mistakes!” vs. “Person/group X is just really smart / hasn’t made a mistake!”: This is a variant of the example above.  I remember it being a major theme of the discourse last decade during the Bush administration.  A further variant is “Person/group X is closed-minded!” vs. “Person/group X just won’t put up with nonsense!”  These stances are often taken by the “teach the controversy” anti-evolutionists versus the “creationism isn’t science” defenders of Darwin’s theory… although interestingly the roles were pretty much reversed back at the time of the Scopes Trial.
  • “You’re afraid to debate!” vs. “We won’t descend to your level by engaging with you!”: Closely related to the above.  Another major component of the creation/evolution conflict (yes, creation/evolution provides many good examples).  Epitomized by Richard Dawkins’ refusal to debate the “ignorant fool” Ray Comfort.  However, I’ve seen it show up in the context of many other topics where one side sees itself as far more educated than the other.

C. Debating debate tactics: the “motte-and-bailey” debacle

Some of the common recurring themes mentioned above come close to describing not only potentially fallacious tactics used to debate an issue but even debates over potentially fallacious debating tactics.  It seems not uncommon in discussions between rationalists for one party to accuse the other of committing a particular fallacy — say, confirmation bias, or assuming a strawman — only for the other to point out that sometimes what looks like confirmation bias or a strawman happens to reflect the truth anyway.  To show that I don’t always fail at finding cartoons posted online that I remember reading once, here is a relevant Calvin and Hobbes panel (apologies to Bill Watterson).

[Calvin and Hobbes panel]

If someone argues using language that sounds overly broad, it’s almost certain that their opposition will accuse them of the fallacy of black-and-white thinking.  But in some way or another, the first party will very likely retort, like Calvin in the panel above, that sometimes that’s just the way things are.  (By the way, Watterson has stated that this cartoon was inspired by his own struggles in a legal dispute in which he was accused of black-and-white thinking.)

To give a more interesting example of something that caused some disagreements within the rationalist community, in one of his more popular posts, Scott Alexander characterized certain types of rhetoric as relying on a fallacy that he calls “motte-and-bailey”, which refers to equivocation between one very convenient sense of a term (assumed most of the time) and a different but much more defensible sense of that term (adopted whenever challenged).  The “motte-and-bailey” terminology was actually coined in an academic paper written years earlier, but Alexander’s article popularized it within the online rationalist movement.

Some months later, his fellow rationalist essayist Ozy banned the use of this concept on their blog Thing of Things, later writing this to further elucidate the potential pitfalls of using “motte-and-bailey”.  Evidently the term was being abused a lot in Thing of Things comments sections.  But here’s the conundrum: any new concept can be abused in some way.  When introducing a new concept (even the concept of a certain logical fallacy, to an audience composed of rationalists), one should always be able to imagine the ways it will be abused and recognize that, given a large enough audience, it will be abused in that way.  In the case of “motte-and-bailey”, it is a good exercise to ask ourselves what might be the most convenient way to use it to attack any position one doesn’t like.  Well, the substance of the concept is that a “motte” is a defensible definition of a term which can be quickly adopted when one’s ideas are challenged (“God is the feeling of purpose we perceive in the universe”), while a “bailey” is a convenient definition tacitly assumed otherwise (“God is the petty, vengeful main character of the Old Testament”).  The point is to criticize one’s opponent for defending their ideas by using a defensible (“motte”) definition which they don’t assume the rest of the time.  So it seems all too tempting to… criticize one’s opponent for using a defensible definition even when they do consistently assume it all the time.  (Maybe you’re arguing against a very liberal theist who really does believe only in the “vague purpose” kind of God, and Old Testament fundamentalism is a strawman of their belief system.)  So in other words, exactly the abuse that Ozy described having seen.

If you introduce a new rhetorical concept to a bunch of rationalists, there’s a pretty good chance of somebody invoking it unfairly to attack arguments they don’t like; then there’s also a pretty good chance that someone else will anticipate the possibility of this abuse and unfairly invoke that to attack arguments they don’t like; and the recursion goes on ad infinitum.  Maybe “motte-and-bailey” also happens to be easily abusable to begin with.

But all that doesn’t mean that useful concepts like “motte-and-bailey” shouldn’t be popularized in the first place.  And I guess that brings me to my usual “proposed solution” section of this essay.

III. How to oppose opposing gadflies

I’ve tried first to make the point that when participating in discourse on certain types of broad issues (particularly social), almost any statement inconvenient for our position that might occur to us is probably true to some degree and moreover will occur to at least some people on other sides who will use it against us.  This makes my view of success at discourse, or even being sure what one believes in the first place, sound pessimistic.  And it is, somewhat.  Becoming reasonably sure of something and being able to actually convince others of it in an intellectually honest way is (at least for me) very, very hard.  But there are still ways of dealing with those gadflies that almost surely oppose us.

First of all, there’s one of the oldest debating guidelines in the book: anticipate opposing arguments.  I spent a lot of time illustrating certain very general types of claims that are sure to be encountered (“your grievance is your own fault”, “so-and-so sounds confident because they in fact are always right”) because, despite the fact that they sound completely obvious when written down in this context, many people in the heat of argument often don’t see them coming because they’re not thinking enough from their opponent’s point of view.  So anticipate them.

The second, and probably more difficult, tactic is to realize that these inevitable counterclaims are probably at least a little bit true and to readily acknowledge this.  That’s not to say that constantly bending over backwards to agree that every criticism and accusation is kinda-sorta valid is an effective way to win anyone over to one’s position (I err in this direction a lot, so I would know).  But flatly denying the offensive thing one’s opponent was bound to suggest is almost certain to make things worse.

So the best strategy is probably to admit that our opponent’s suggestion is probably correct for a few people, or just a little bit, and claim (and then make an honest effort to back up the claim) that our position is right anyway.  “Yeah, any welfare system opens itself to the possibility of abuse by a few people, and that’s awful.  But it’s far more important for honest people in need to be able to have a safety net of this kind, because X, Y, and Z.”  Or, “yeah, that group sometimes whines a little more than justified, but they have a legitimate complaint even so because Y and Z.”  Or even, “Yeah, I know that I can moan and be a little melodramatic at times, but that doesn’t mean that my feelings are invalid in this case, because X.”

This is particularly worthwhile, but particularly tough, when one is confronted (or anticipates being confronted) with a personal attack.  There’s a common reaction, which I’ve observed in people close to me, of “On top of being completely wrong about [issue on the table], he has the nerve to keep bringing up such-and-such personal flaw of mine.  He’s lost all credibility with me about [issue], so the personal attack is obvious nonsense.”  (Here the personal fault in question is often something that many have criticized the speaker about and which maybe even the speaker has acknowledged in calmer moments.)  In my opinion, this is almost always the wrong way to look at the situation.  If I’m arguing with someone in my life about Big Important Issue on which I believe they’re totally mistaken and out of line, and they keep shoving in my face some criticism of me that others have made in some way or another, and which I’ve previously acknowledged is somewhat true, then… I try to recognize that they’re probably right in their criticism.  They wouldn’t be using the criticism as a weapon to argue their side of the Big Important Issue if it weren’t somehow readily available to them, and it wouldn’t be so available to them if it weren’t somewhat true.  So my response should be to acknowledge immediately that “yeah, I sometimes can be that way” but argue that my faults still don’t imply their side of the Issue, or (in some cases) that they’re completely irrelevant and being used easily but unjustly as a weapon against me.  Of course I still fail at this from time to time, but my successes have gradually made admitting my own faults in this way much easier.

The thing is that no matter how small of a gadfly is staring us down, our adversary can still hide behind it as long as we dismiss it, even while it tells just a tiny bit of truth.  Engaging with the gadfly actually exposes our adversary and leads to a more productive outcome for everyone involved.  And that is a bit more of my take on why it’s important to welcome gadflies into our minds.

Obligatory election-day post on the rationality of voting

[Content note: Again, the title pretty much says it all.  Minor discussion of religion-inspired ethics.]

There are a number of rhetorical situations where I see recurring patterns of what feels like obviously fallacious reasoning and have learned that trying to convince someone who doesn’t instinctively sense that same pattern will lead only to frustration on the part of both parties.  But in many cases, I have discovered through the rationalist community a group of people who all seem to acknowledge the same underlying issues, even if there’s plenty of healthy disagreement on exactly where and to what extent those fallacies are being committed and as to what antidote should be applied.  Some of these things I’ve even tried writing about in my own words, such as the mistake of confusing causal agency with moral responsibility in multivariate situations or the subconscious tendency to not acknowledge inconvenient hypotheses.  I can’t exactly take a poll of how everyone reacts to these rationalist topics that I bring up, but it certainly appears that most people who are interested in rationality and have the patience to engage in discussions of them are in rough agreement despite perhaps disagreeing with how I describe or apply things.  It hasn’t proven controversial to claim things like “There’s a fundamental problem with how people assign moral blame in situations where more than one party created a disaster” or “One shouldn’t shun inconvenient thoughts before they have a chance to fully form” or even more philosophically contentious positions like “By debating the degree of ‘free-ness’ of certain actions rather than what our reaction to them should be, we are asking the wrong question.”

I have recently discovered that such is not the case when it comes to my rationality-motivated objections to how many people think of voting.

A few months ago, on a Slate Star Codex open thread, I brought up my contention that people often seem to abandon consequentialist utilitarianism when it comes time to vote.  I posted the following comment:

I’d like to put in a request for a post (preferably sometime between now and the election) on the motives behind abandoning consequentialist utilitarianism when it comes to voting. It seems like most people accept consequentialist utilitarianism as a matter of course for most choices, but then treat voting almost as a mode of self-expression.

In case it’s not clear, I was alluding here to my long-time frustration with those who say they’ll vote only for candidates they positively like, rather than for candidates who are able to win or the lesser of two evils, etc.

At the time, I was assuming that everyone would basically agree with me but point me towards a good explanation or at least a better way of phrasing the problem.  To my surprise, I found that my assumptions were completely mistaken regarding the general rationalist community sentiment when it comes to voting, or even when it comes to consequentialist utilitarianism.  As one commenter said,

If you think that people are “abandoning consequentialist utilitarianism when it comes to voting”, then that doesn’t just mean you’re completely confident you’re right about the consequentialist utilitarian consequences of voting, it also means you think that reasoning is so obvious that you expect everyone else to think the same way. This is absurd. Even in this thread there is a broad range of opinions on this matter.

I learned a lot from the responses I got to the above-linked comment, and other online discussions on optimal voting strategies that I’ve witnessed since have further opened my eyes to the variety of viewpoints rationalists hold on this general topic.

A lot of the crux of our differences can seemingly be traced back to different takes on variants of Newcomb’s problem.  I decided after the aforementioned discussion on Slate Star Codex that I would research Newcomb-like problems and try to further cement some sort of opinion on them along with solid justification, in time to write an incisive, well-argued, polished blog post on the rationality of voting before the presidential election.  However, I failed to do my homework here and have not made much progress on understanding the different points of view on these topics.  Therefore, once again I don’t quite have the incisive, well-argued, polished blog post that I wanted and have decided instead to make do with an attempt to succinctly write down my current thoughts, maybe from a more personal angle.  Maybe this is for the best, because sometimes I suspect that indefinitely delaying in an effort to do the ideal amount of research and thinking would lead to me writing something that still falls short of feeling ideally incisive, well-argued, and polished, while I often wind up happier with my more personal, thoughts-in-progress writing anyway.

So here are the main issues which seem to play into the question of what it means to vote rationally, along with my and other people’s thoughts on them.

I. The assumption of utilitarianism

I’ve embraced utilitarianism as the only reasonable source of ethics since I was old enough to ask myself what my source of ethics was (which I guess was around high school or so).  I realized pretty quickly on discovering the rationalist community that utilitarianism, specifically consequentialist utilitarianism, seems to be the dominant belief within it.  Results from surveys such as this one seem to bolster this impression, but note that this survey shows 60% of the participants as being consequentialists, which leaves a lot of room for other views to be influential.

In the aforementioned comment thread alone, there was plenty of argument against my assumed consequentialism, which if nothing else convinced me that there are many more people with a commitment to rational thinking who don’t find it obvious than I had imagined.  Unfortunately I don’t quite understand most of these people’s points as arguments for a different, coherently-stated system of ethics.  It seems that many want to point out that humans do not in reality make most of their decisions according to consequentialism.  Most decisions, they claim, are impulsive and depend mainly on what “feels better” at the spur of the moment.  Maybe the reason why a lot of people vote is simply that it gives them a vague feeling of power in having a voice in their democracy.  In other words, they believe in the advice of journalist Bob Schieffer’s late mother.

[image: the voting advice of Bob Schieffer’s late mother]

My first reaction to this is that here, by claiming that consequentialism isn’t valid because it’s not how people actually make decisions, these commenters seem to be advocating a purely descriptive definition of morality.  For me, the obvious problem with this is that it ultimately leads to confusion between moral behavior and the way people actually behave on average.  Here I’ll leave it to the reader to insert whichever go-to example they prefer of crimes against humanity committed at a particular place during a particular time period in order to show that this notion is absurd.

But maybe nobody is claiming that common human decision-making behavior actually determines which ethical framework is valid.  Maybe their point is that the tendency of folks to act according to (non-utilitarianism-based) impulse in most aspects of their lives shows that the way they think about voting doesn’t contradict their ethical worldviews in the way I brought up in the open thread comment.  After all, if humans don’t in fact generally rely on consequentialism to make their decisions, then there’s no apparent contradiction when they say they’ll vote in whichever way makes them feel better or for whichever candidate better reflects their values.

To respond to this, I have to go back to the ultimate reason why I identify as a utilitarian, which I’ll do my best to explain briefly even though I can’t give an ironclad argument in its favor.  (Although, one shouldn’t expect a complete “proof” of any ethical system, since concepts of “rightness” and “wrongness” can’t be introduced without some axioms.)

The best personal explanation I can come up with is that utilitarianism seems like the only system for deriving ethical statements that has a completely coherent and self-contained definition, modulo the somewhat open-ended concept of “well-being”, or utility.  Therefore, when we humans consciously justify our decisions, we tend to imply in our explanations that we made the choice which led to a net increase in utility.  When we argue about whether our decisions were right or wrong, it boils down to conflicting opinions about which outcomes actually increase/decrease utility, even as the assumption that we all want to maximize utility is taken for granted.  So even impulsive decisions like choosing to stay in bed an extra twenty minutes after one was supposed to get up are either not justified at all (“I shouldn’t have stayed in bed late, but my tiredness just sort of took over”) or justified as having increased utility (“I stayed in bed late because it felt better for me, and it was worth it because of X, Y, and Z”).  I’m not saying that such decisions are made in the first place according to utilitarianism.  I’m saying that if they are consciously justified afterwards, they will be implicitly justified as actions which were likely to result in the greatest net change in well-being.  In my opinion, this is because such justifications form the only chains of reasoning which remain completely meaningful.

Yes, some people very deliberately take a non-utilitarian stance.  For instance, many believe in a god or gods as the source of all morality, and hold that “God forbids it” is reason enough not to do a particular thing.  But when pressed on exactly why God would forbid that particular thing, either the chain of reasoning must stop at “He/She/They has mysterious ways” or some sort of argument which appeals to something apart from the divine (“God says that stealing is wrong!  Why does He forbid it?  Well, how would you like to be robbed of things which you worked hard to get?  [etc.]”).

So yeah, I do think that most people, when they are calmly thinking over their own choices and not in the midst of acting impulsively, instinctively rationalize what they do in utilitarian terms.  They choose not to steal because it would do harm to the person stolen from, as well as contribute to societal instability where private ownership is concerned.  They choose to recycle because it’s better for the planet which in turn benefits every living thing on it in the long run.  They might even prefer a certain political candidate because their policies would be better for the economy and therefore increase the well-being of people within their constituency.  So my initial concern still stands: why do so many seem to back away from this sort of rationalization when considering their voting behavior?

(I’m happy to admit by the way that I see certain limitations in utilitarian reasoning, especially when it comes to issues involving creation or suppression of life.  Therefore, I don’t believe that this system of ethics provides good answers to questions relating to, for instance, abortion, or population control.  I’m not sure whether that means that I’m not fully a utilitarian, or whether one could derive some enhanced set of utilitarian axioms which would solve these problems.)

II. The assumption of one-boxer-ism

A lot of the rationalists I’ve been hearing from do seem to be on the same page as I am with regard to consequentialist utilitarianism, but still disagree with me on the purpose of voting.  They say that if the only reason for voting were to directly influence a current election, then there wouldn’t be much reason to vote from a utilitarian standpoint, since your one vote has an astronomically low chance of single-handedly swinging an election.  “All right,” one may ask them, “so why do you think so many people do take the trouble to vote, and do you feel that they are being reasonable in doing so?”  One plausible answer to this may be that voting still serves a practical purpose apart from directly determining elections, since elections also serve the function of polling the desires of the people.  If you vote for the candidate whose values you truly agree with, even if they are not one of the main candidates, that helps to send a message to the community of politicians which will surely do some good in the long run.

While I agree that voting does serve this purpose, and it might even be my main consideration if for instance I lived in a solidly non-swing state of the US, I still hold that a lot of the time it is trumped by the purpose of directly swinging current elections for the reason which I articulated in the afore-linked comment thread:

[P]eople mostly seem to understand the whole Prisoner’s Dilemma idea that if you decide to do something for a reason, then you should assume that many other people are making that same decision for that same reason, and that en masse voting is extremely effective.

In other words, I strongly believe, or at least some instinct inside of me compels me to strongly feel, that I should act in such a way that the best outcome might be brought about if all other like-minded people also act in that way.

It turns out that attempting to justify this strange conviction that one should act as one would like all like-minded people to act is tricky and runs into potential paradoxes.  This conundrum is encapsulated in Newcomb’s Paradox (of which the famed Prisoner’s Dilemma is a variant).  Like I said above, I haven’t gotten around to researching any of the volumes of argument on both sides of this problem.  I have read Eliezer Yudkowsky’s introduction, and someday I hope to take a look at his lengthy paper on it.  I would worry that only having read Yudkowsky’s analysis might have biased me towards his one-boxer position, except that it’s sort of clear that deep down inside I’ve been a one-boxer all along.  This is because the one-boxer position is the one corresponding to the “cooperate” choice in the Prisoner’s Dilemma, or the “vote so that like-minded people also voting that way would achieve the best outcome” choice in our Voter’s Dilemma.  And even though on close inspection it seems very non-trivial to justify, I see now that my whole life I not only felt convinced of it down to my bones but had been assuming that all reasonable people felt the same way.  In other words, it never occurred to me that anyone would argue against the notion that voting is good on the individual level because there are positive consequences when large groups of people vote a certain way, just as littering is bad on the individual level because there are negative consequences when large groups of people litter.
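
For what it’s worth, the one-boxer intuition can be turned into a toy expected-utility calculation.  The sketch below (in Python) uses entirely invented numbers — the turnout rates, the per-vote swing probability, the utility of the better candidate winning, and the cost of voting are all assumptions of mine, not anything from the discussions linked above.  It is only meant to show the shape of the disagreement: under the evidential (one-boxer) reading, my decision is treated as evidence about what like-minded people decide; under the causal (two-boxer) reading, my single ballot has to justify itself alone.

```python
# A toy Voter's Dilemma.  Every constant here is invented for illustration.
N_LIKEMINDED = 1_000_000   # hypothetical people reasoning like me
SWING_PER_VOTE = 1e-7      # hypothetical win-probability shift per vote
VALUE_OF_WIN = 1_000.0     # hypothetical utility of the better outcome
COST_OF_VOTING = 1.0       # hypothetical hassle of casting a ballot

def expected_utility(i_vote: bool, one_boxer: bool) -> float:
    if one_boxer:
        # Evidential view: my choice is evidence about what like-minded
        # people choose, so expected turnout moves with my decision.
        turnout = N_LIKEMINDED * (0.6 if i_vote else 0.3)
    else:
        # Causal view: everyone else votes the same regardless of me;
        # my ballot adds exactly one vote.
        turnout = N_LIKEMINDED * 0.45 + (1 if i_vote else 0)
    p_win_shift = min(1.0, turnout * SWING_PER_VOTE)
    return p_win_shift * VALUE_OF_WIN - (COST_OF_VOTING if i_vote else 0.0)

for one_boxer in (True, False):
    gain = expected_utility(True, one_boxer) - expected_utility(False, one_boxer)
    label = "one-boxer" if one_boxer else "two-boxer"
    print(f"{label}: voting changes my expected utility by {gain:+.4f}")
```

With these made-up numbers, the one-boxer accounting says voting is worth about +29 units of utility, while the two-boxer accounting says it’s a net loss of about 1.  Which accounting is the right one is, of course, exactly the question I haven’t resolved.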

Currently the topic of Newcomb-like problems occupies roughly the same position for me personally as the topic of free will did about 8 or 10 years ago: it’s a problem for which I feel some strong intuition but haven’t yet managed to wrap my mind around all the implications or formulate a clear position, and which I firmly believe has highly relevant real-life implications.  Applications to how to vote rationally are an obvious example.  See, for instance, this article which more or less argues a more sophisticated version of my position.

But yeah, I feel this way on an instinctual level, so deeply that I’ve been willing to put in significant time and effort in figuring out how to vote from abroad and why my faxed-in ballot apparently wasn’t legible on the first take and so on… all out of this weird faith that my willingness will somehow “make” other people currently in my situation find the same willpower.

But intelligent people don’t all think the same way in Newcomb-like situations.  This fact helps to explain a lot of attitudes about voting which appear irrational to me, and thus does give a partial answer to my original query.  Of course it does not help me to truly understand how such attitudes aren’t still, well, irrational.  Understanding that may require me to change my strongly-felt-but-vague positions on things like Newcomb’s paradox.  I don’t know whether this is an impossible feat or whether a clever enough argument (along with my becoming a clever enough person) would be enough to accomplish it.

III. “Immoral” voting

There is another small aspect of the “vote only for candidates you actually like” attitude where I think I can offer a little more insight.  I have noticed that some people go beyond just saying they don’t want to vote for any candidate that doesn’t meet their moral standards; they claim in fact that it’s downright wrong to vote for someone you don’t genuinely like.  I’ve heard language like “going against my morals” used to describe holding one’s nose and casting a ballot for the lesser of two evils, sometimes by those who choose to do it anyway.

I first want to be a little on the pedantic side and fault, for inconsistency, those who think that lesser-of-two-evils voting is immoral but wind up doing it anyway.  Technically, I don’t see actions as being absolutely ethical or unethical in and of themselves; it is choices of certain actions over other actions or inaction that can be labeled as “right” or “wrong”.  If something is immoral, then that means that one shouldn’t make the choice to do it, period.  Or, to state the contrapositive: if one chooses to do X, then that means that X is more moral than other available actions or inaction, and therefore one’s choice was moral.  And although this criticism doesn’t directly apply to those who believe that voting for the lesser of two evils is immoral and then don’t do it, I think it still underscores some of the fuzzy thinking behind a lot of the sentiment against lesser-of-two-evils voting.

Secondly, in trying to put myself in the mind of someone who thinks that voting for a detestable candidate in order to oppose someone even worse is “going against their morals”, it occurred to me that there’s some sneaky variant of the “causal agency implies blameworthiness” (related to “is-versus-ought”) fallacy going on here, which I made a point of in my post on “multivariate utilitarianism” (you have to scroll all the way down to subsection III(D), sorry).  It’s tempting to feel that if you voted for a bad presidential candidate, then you share some portion (however tiny) of the blame for them winning.  After all, you made a free choice which contributed to an unpleasant result which would not have occurred if you and other like-minded people hadn’t made that choice.  But that’s ignoring the fact that a decision between two undesirable options was foisted on you by circumstances, circumstances which were caused by other parties.  And so the brunt of the blame shouldn’t necessarily fall on you.  In fact — and this is one key difference between this situation and the ones I discussed in the post linked to above — you had no better options, so really none of the blame should fall on you.  Still I suspect that the idea that it’s inherently immoral merely to vote for an unattractive candidate has some of the same misconceptions underpinning it as the whole “causal agency implies blameworthiness” thing has.

IV. My endorsement on how to vote in 2016 (and in general)

It’s finally time to stop beating around the bush.  I chose the words of this section heading carefully: I want to describe how I think one should vote in elections in general (at least in countries like America which have a strong two-party system), not whom to vote for.

Here at Hawks and Handsaws, we are firmly against imposing our own personal political convictions on readers.  Therefore, I will illustrate an example application through a purely hypothetical situation.  Let’s say that we have a presidential election in which one candidate, whom we will denote by H, is a shrewd and very able politician mired in a corrupt political establishment who has a lot of potential skeletons in their closet and who is somewhat hawkish and not especially idealistic, in contrast to another politician we will call B who was their main opposition in their party’s primary election.  Let’s say that the opposing candidate in the general election is someone whom we will call D, who has never been a politician and generally proves themself to be a complete buffoon by repeating mostly-nonsensical platitudes with almost no actual substance behind them which yield not the slightest evidence that they understand anything about the challenges faced by their countrymen, who might be more hawkish than their opponent but you can’t really tell because their platform seems to be all over the place, and who on top of that has risen to popularity within a certain subset of the electorate by repeatedly producing outlandish bluster seemingly calculated to fan the flames of anger and bigotry.  Let’s say that you dislike both candidates H and D, but have to admit that D would be a considerably worse president than H would, although you would have strongly preferred B.  Then I recommend the following:

  1. Rewind back to the primary election that took place in your state between H and B.  You should vote for B in that primary if and only if they seem like the best choice after taking several things into consideration, including B’s likelihood of beating whomever the opposing party nominates, as well as B’s probable effectiveness as president.  You should not base your choice purely on the fact that B seems like a better person with better values.
  2. In the general election, no matter how much you may hate H, as long as you’re convinced that D is substantially worse, you should vote for H unreservedly and with a clear conscience.  No voting for third-party candidates even if their values align with yours much better than H’s do.  And no avoiding the polls altogether.  As a general rule, whenever you perceive a significant difference in attractiveness of candidates in an election, from the one-boxer utilitarian standpoint, voting is always imperative.
    (Note: this general idea is often articulated as “remember, a vote for a third-party candidate is a vote for D”, which is incorrect not only literally but also in the sense that really, a vote for a third party is equivalent to half a vote for D, or to throwing away one’s vote altogether; the toy calculation after this list makes the arithmetic explicit.  By symmetry, members of the pro-D camp will often claim that “a vote for third-party is a vote for H”, when again it makes more sense to consider it as half a vote for H.  The fact that both can’t be true simultaneously is itself proof that neither should be taken quite at face value.  But obviously I agree with the underlying sentiment.  (Further note: of course I’m making the simplifying assumption in all of this that all we care about is directly affecting the current election; as I’ve acknowledged above, there are times when it makes good sense to vote third-party.))
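
To spell out the “half a vote” arithmetic from the note above, here is a trivial sketch with hypothetical vote totals, measuring everything by H’s margin over D.  Switching one ballot from H to a third party costs one vote of margin; switching it all the way to D costs two.

```python
# Hypothetical totals, chosen only so the margins are easy to read.
def margin_h_over_d(h_votes: int, d_votes: int) -> int:
    """H's lead over D; positive means H wins this count."""
    return h_votes - d_votes

print(margin_h_over_d(1_000_000, 999_999))    #  1: my ballot went to H
print(margin_h_over_d(999_999, 999_999))      #  0: third-party (margin -1)
print(margin_h_over_d(999_999, 1_000_000))    # -1: defected to D (margin -2)
# Relative to voting for H, a third-party ballot does half the damage of
# a ballot for D -- hence "half a vote for D", not "a vote for D".
```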

The purpose of voting is not to serve as a form of self-expression, or of cheering for the team that you like.  It is not (in America, at least) even primarily a way to communicate to the political world what your ideal candidate or platform would be, except in certain circumstances where the overall result is a foregone conclusion.  The purpose of voting is to influence which individual out of a very small group of finalists will be elected to a position of significant power.  Yeah, I know that what I’m preaching is based on convictions which I haven’t been able to fully justify.  But even in the absence of solid argumentation, I’m still allowing myself to stand on my soapbox and proclaim how I feel about voting, on the eve of what looks to me like a pretty crucial election for America and for the world.

And with that, I leave you with a variation on the wisdom of Bob Schieffer’s mom: go vote; it’ll make you feel like a good one-boxer consequentialist.

Speculations of my inner gadfly

[Content note: This is something I’ve been thinking about which feels somewhat clearer in my mind than it comes out in writing.  However, I’m already having doubts about how the connection to superweapons works.  Mentions of several sensitive issues for examples, included in tags.]

A common criticism from those who have known me long enough is that I’m too gullible.  Sometimes this is meant in the basic sense of believing false things (especially when I was younger), but also sometimes in the sense that I come across as much too immediately accepting of whatever broad narrative is pitched to me in defense of a particular view.  Enough independent people from different parts of my life have expressed concern about this that it’s only logical for me to conclude that the criticism is probably valid on some level.  At this point in my life, it’s more a matter of in which sense it is valid, what underlies this tendency, and which aspects of it are helping me as opposed to hurting me.

There’s more than one issue at play here, but here I want to focus on one particular type of fallacy which I consider to be a major problem with a lot of the discourse I see, and which I’m trying to guard against when I react to claims in a way that makes me look too credulous.  This problem in the world of discourse can be summed up by saying that we’re not welcoming enough to gadflies.

I. Socrates the Gadfly

I am not particularly knowledgeable with regard to ancient Greek philosophers, but I am familiar with Socrates’ characterization of himself as a “gadfly of the Athenian people”.  What he meant, as I understand it, is that his intellectual function in his society was to articulate skepticism and raise nagging doubts in the face of commonly-held assumptions.  In other words, he aimed to be what is more commonly called a “devil’s advocate”.  According to him, gadflies are understood to create discomfort and to generally be annoying, but they should be welcomed.  Apparently in trying to defend himself from the death penalty, he claimed, perhaps arrogantly, that he was the only gadfly in the area, and that they would be unwise to get rid of him as gadflies are essential to the health of society.

This assertion has been made many times and articulated in many ways since Socrates.  It encapsulates a general idea that is seen most prominently in the philosophy of science, as well as within the deeply-held values at the heart of modern democracies, skeptic/rationalist culture, and academic culture in general.  In any intellectual pursuit, thinking critically and challenging assumptions is key.  I don’t want to write about this very broad notion which has been discussed constantly for centuries.  When I say, “We just don’t welcome enough gadflies”, I’m not trying to proclaim a vague platitude like “We don’t think critically enough!”  In particular, the use of “gadfly” is not meant as a metaphor for challenging authority, or the exercise of skepticism within the scientific process.  (Indeed, I don’t see any sense in claiming things like “We should be more skeptical when doing science”.  The scientific mindset, as Carl Sagan put it, consists of “a marriage of skepticism and wonder”, and in fact my comment about gadflies could be construed equally well to mean that we need more wonder (i.e. open-mindedness) when doing science.  Skepticism and wonder are arguably two sides of the same coin.)

The gadfly behavior I’m advocating today is a more specific thing, which I find easier to describe in the negative: when considering a particular decision or situation, don’t automatically dismiss any of the relevant possibilities that come to mind, even (especially!) if they make you feel uncomfortable.

Let me clarify what I mean by “relevant possibilities” above by use of an example from an earlier post.  Suppose that you arrive at a colleague’s office at an agreed-upon time for a meeting with them to prepare for an upcoming deadline, but they never show up.  Now let’s say that one of your major pet peeves with the world is the way most people around you seem to be disorganized, and that this has really been adding to your stress lately as you rely on a lot of other people.  To make matters worse, although you know you can probably reschedule for late tomorrow afternoon (both you and that colleague often stay after normal hours), tomorrow is your kid’s birthday and definitely not a day you want to come home late.  So naturally, your immediate reaction is to feel really angry.

There are many possible causes for your colleague’s absence, a few of which were discussed in the other post: they might have decided not to bother; they might have simply forgotten; they might have a drug problem (entirely unknown to you) which indirectly resulted in not being able to make it; or they might have gotten into some kind of accident on the way to their office.  Chances are that the first two possibilities above are the most obvious explanations and are the first to leap into your mind — unsurprisingly, these ideas do nothing to abate your anger.  It is unlikely, especially given the narrative you’ve developed about everyone else being disorganized, that either of the last two possibilities will occur to you quickly if at all.  And yet those explanations, while perhaps not particularly likely, are still perfectly plausible.  Maybe your colleague has always had the appearance of being totally together, but is actually struggling with some sort of addiction, or perhaps suffering from a mental illness which has not been apparent to you.  And people do get into serious accidents and have emergencies from time to time.  And so, before acting on your newfound resentment towards your colleague, you should at least consider these possibilities — these are the “relevant possibilities” I referred to above.  I’m not saying they should be deemed as likely, but that they should occur to you, and be objectively considered.

This is not a matter of considering every possibility under the sun and weighing them all equally.  Under most circumstances it seems to be a much more common occurrence for people to be careless or forgetful than for them to have some much more serious reason to not show up for something.  However, in the long run, I believe it pays off to at least allow them to enter your consciousness.

(By the way, let’s say that your colleague did fail to meet you out of forgetfulness, caused in part by the fact that they never saw the meeting as particularly important.  They get that nobody likes waiting around for someone who never shows up, but sincerely don’t understand why you would be this upset about it.  After all, you can just both stay late tomorrow, as you often do, and deal with everything then in time not to miss any deadlines.  It just doesn’t occur to them that there might be a particular reason why you don’t want to be at work late tomorrow.  Maybe they, like you, should make more of a habit of considering more possibilities, especially those which lead to conclusions they don’t want to believe.)

These annoying ideas that we should try our best to come up with, particularly the ones which threaten the narratives we’re comfortable believing, are what I call “gadfly speculations”.  They are not fun to have around, but it’s bad for our intellectual health not to let a few of them swarm our conscious minds and nip at our deliberations on a regular basis.

I want to be clear before going any further that when I say, “Be welcoming to gadflies”, all I’m talking about here is the skill of knowing how to let these speculations fly into one’s head in the first place, NOT how to weigh them once they’re present!  Gadfly speculations are what should happen during a mini brainstorming session.  They are funny-looking blobs to be thrown at a wall regardless of whether in the moment they seem likely to stick.  They are ideas which may seem quite improbable, but which should occupy a spot on one’s mental whiteboard.  Later on, of course, they need to be evaluated on their merits.  Pretty much everyone understands on principle the idea of coming up with a bunch of ideas and then evaluating them to choose the best (or most probable) one, but I have a feeling that a lot of us don’t pay enough attention to gathering a sufficiently varied collection of ideas in the first place.

That is what I’m trying to stress here.  In order to weigh possibilities to arrive at the most rational conclusion, we need to reach the first step of being able to see a healthy variety of possibilities on the table in front of us.  Why do we so often fail at this?  Our intellects tend to be lazy, and we naturally want the first step of any decision-making process to be easier.  One obvious way to make it easier is to give ourselves fewer things to choose from.

Now there’s nothing deep in arguing that we should be careful to entertain enough gadfly speculations.  It’s basically a variant of guarding against “lack of imagination” and more or less standard Biases 101 stuff.  I just want to point attention to how this very unsurprising human tendency plays into some more interesting rhetorical trends.  Or at least, in the likely event that these connections seem similarly obvious, I’d at least like to get this point of view down in writing so that I can easily refer to it later.

(I’ve always enjoyed the gadfly metaphor.  I remember distinctly that back when I was in college and for the first time very interested in starting a blog, I kept trying to think of a name which referred to gadflies.  I wouldn’t be surprised if the word “speculation” didn’t show up in some of these names too, since I’ve always seen myself just suggesting things in blog posts rather than trying to meticulously argue anything.  But at the time, the only name I could come up with that I was reasonably happy with was “Hawks and Handsaws”, and obviously I managed no better many years later when it came to naming this blog.)

II. The building of superweapons

In several posts, most notably these two (see also this), Scott Alexander (who runs Slate Star Codex) expounds upon a rhetorical phenomenon which he calls “superweapons”.  Here is the essential passage from the first linked post:

Suppose you were a Jew in old-timey Eastern Europe. The big news story is about a Jewish man who killed a Christian child. As far as you can tell the story is true. It’s just disappointing that everyone who tells it is describing it as “A Jew killed a Christian kid today”. You don’t want to make a big deal over this, because no one is saying anything objectionable like “And so all Jews are evil”. Besides you’d hate to inject identity politics into this obvious tragedy. It just sort of makes you uncomfortable.

The next day you hear that the local priest is giving a sermon on how the Jews killed Christ. This statement seems historically plausible, and it’s part of the Christian religion, and no one is implying it says anything about the Jews today. You’d hate to be the guy who barges in and tries to tell the Christians what Biblical facts they can and can’t include in their sermons just because they offend you. It would make you an annoying busybody. So again you just get uncomfortable.

The next day you hear people complain about the greedy Jewish bankers who are ruining the world economy. And really a disproportionate number of bankers are Jewish, and bankers really do seem to be the source of a lot of economic problems. It seems kind of pedantic to interrupt every conversation with “But also some bankers are Christian, or Muslim, and even though a disproportionate number of bankers are Jewish that doesn’t mean the Jewish bankers are disproportionately active in ruining the world economy compared to their numbers.” So again you stay uncomfortable.

Then the next day you hear people complain about Israeli atrocities in Palestine, which is of course terribly anachronistic if you’re in old-timey Eastern Europe but let’s roll with it. You understand that the Israelis really do commit some terrible acts. On the other hand, when people start talking about “Jewish atrocities” and “the need to protect Gentiles from Jewish rapacity” and “laws to stop all this horrible stuff the Jews are doing”, you just feel worried, even though you personally are not doing any horrible stuff and maybe they even have good reasons for phrasing it that way.

Then the next day you get in a business dispute with your neighbor. If it’s typical of the sort of thing that happened in this era, you loaned him some money and he doesn’t feel like paying you back. He tells you you’d better just give up, admit he is in the right, and apologize to him – because if the conflict escalated everyone would take his side because he is a Christian and you are a Jew. And everyone knows that Jews victimize Christians and are basically child-murdering Christ-killing economy-ruining atrocity-committing scum.

He has a point – not about the scum, but about that everyone would take his side. Like the Russians in the missile defense example above, you have allowed your opponents to build a superweapon. Only this time it is a conceptual superweapon rather than a physical one. The superweapon is the memeplex in which Jews are always in the wrong. It’s a set of pattern-matching templates, cliches, and applause lights.

The posts linked to above mainly focus on certain trends in the feminist movement, but Alexander uses a number of other examples, and I believe that the concept of “superweapon” can be applied to argumentative tactics regarding a wide variety of issues.  When I first read about superweapons from him, I had mixed feelings.  On the one hand, I was thrilled that he managed to articulate brilliantly a major issue I’d had with a lot of discourse on a lot of topics.  Before reading his essays, the only ways I’d come up with for referring to it required clumsy uses of the word “dogma” — superweapons are, after all, a means of discouraging critical questioning.  On the other hand, I was kind of dissatisfied with relying on what I saw as a concept handle for a complex rhetorical behavior, along with intuitive appeals to its potential to be dangerous.  Maybe it’s the mathematician in me, but I would prefer to break apart these ideas until they are decomposed into atoms in the world of logical fallacies.  Since then, I’ve seen the great effect of the approaches of rationalists like Alexander as well as Eliezer Yudkowsky, who has an extremely analytical mind and yet manages to convey many of his messages very clearly using invented terminology to stand in for complex ideas.  Plus, I’ve realized on attempting to decompose these concept handles into more basic parts that it’s really hard and I’m not able to get very far.  So I’m content to live with them for now.

Still, I think I can begin the process of disassembling superweapons by describing them as being made of gadfly repellents.

I should say, each superweapon is made of a particular cocktail of repellents which wards off large classes of gadfly speculations (while still allowing a few which are consistent with the narrative the superweapon’s engineer is trying to push).  Think about it: a superweapon’s real source of power is just its ability to shut down certain lines of argumentation.

For instance, take the example in the quoted passage above about the Jew in old-timey Eastern Europe.  The situation is presented as a culture dominated by anti-Semitism gradually constructing a memeplex whereby Jews are always viewed as being at the root of various societal ills: child-killing, a bad economy, etc.  But the flip side of this positive reinforcement (which is not explicitly mentioned above but is readily apparent in many real-life examples of superweapons) is an intolerance towards any idea that poses a threat to this narrative.  And in fact, no matter how well that Eastern European society manages to reinforce those negative stereotypes about Jews, its assembled superweapon will be seriously lacking in power as long as any skeptical gadflies are buzzing around.  When the main character of the story is accused of trying to steal money from his Christian neighbor, a spectator might open their mind to the gadfly speculation “Well, I know there’s a pattern of Jews being greedy, but I suppose it might be possible that this particular Jew was owed a debt…”  The superweapon has to shut this down immediately.  In fact, in examples like this one, the superweapon has effectively shut down the thought before it’s even properly formed, by hammering an anti-Semitic narrative into everyone’s heads so hard that such contrary notions don’t occur to anybody.  In the unlikely event that someone forms the dangerous thought anyway, I imagine that in the presence of a sufficiently strong superweapon, it would be immediately met with, “Come on, when have you ever heard of a Jew being willing to help one of us Christians?  Don’t they want to kill our children?”

There are many ways to view the superweapon concept, but I hold that when viewed from one particular angle, superweapons are just anti-gadfly machines.  They suppress most gadfly speculations from forming, or they immediately quash the ones that do form.  I’ve been trying to avoid alluding to real-life modern controversial topics, but in case I need to be convincing about the quashing aspect, consider the following commonly-expressed “arguments” used to immediately kill gadfly thoughts (more often implied rather than directly said out loud): “More guns = more violence, so how can an open-carry law possibly make anyone safer?”, “More sex education = more sex, so how can the availability of birth control possibly reduce unwanted pregnancy rates?”, “Drugs cause harm, so how could legalizing them possibly do any good?”, “How could anyone possibly lie about being abused?”, etc.

III. Gadflies and partial narratives

A couple of posts ago, I explored the question of how to make ethical judgments of what I called “multivariate situations” — that is, scenarios where something happens as the effect of decisions made by two or more independent agents.  I suggested (in that post and more vaguely elsewhere) that if Mr. X and Ms. W each act on independent decisions which jointly resulted in some disaster, then oftentimes, Mr. X’s first instinct will be to put all the blame on Ms. W — after all, if she had made a different choice, disaster would have been averted!  (Of course, Ms. W is likely to similarly blame Mr. X; the contradiction in these symmetric reactions is by itself an argument against this kneejerk behavior.)  I claim now that a key part of the subconscious strategy Mr. X uses to leap to an assumption of Ms. W’s guilt is quickly shutting down the part of his mind that starts to consider the idea that he could have done something differently.  The most basic shape this takes is the blanket subconscious assumption that other people always have free will while his own actions in this case were determined.

This looks to me as though Mr. X is adept at warding off certain gadfly speculations.  “If she’d looked where she was going, we wouldn’t have crashed!”  “Hmm well, maybe, to be fair, if I had stuck to the speed limit, the accident might have been avoi–”  “NO!  Just focus on the fact that if that irresponsible Ms. W hadn’t been driving so inattentively, we wouldn’t have crashed!!”

A recent post on the blog Everything Studies touches on a similar idea.  There the author discusses what he calls “partial narratives”, interpretations of a situation which are very one-sided not in the sense that they’re wrong, but in the sense that they’re incredibly partial: in order to arrive at them, one “takes the derivative of a single variable, discards all other terms and dimensions, and recreates a reality based on the integration of this particular derivative.”  The main example he considers is Ayn Rand’s portrayal of capitalism in Atlas Shrugged, where Rand pushes one partial narrative about capitalism while ignoring all others.

You have “capitalism is when people can trade freely in voluntary agreements and create wealth through their own work and ingenuity” and “capitalism is when the rich can use wealth to assert power over the poor in order to extract surplus wealth from their labor”. They are both partial truths, like a cylinder is a circle from one angle and a square from another. With partial narratives we square the circle, but it remains difficult to keep them both in your head at once.

In order to push the partial truth that capitalism allows people to “create wealth through their own work and ingenuity”, as Rand did in Atlas Shrugged, it is important that no other partial truths regarding capitalism be allowed to take root in the reader’s mind.  This isn’t necessarily accomplished by explicitly dismissing such troublesome speculations as invalid; after all, that would run the risk of introducing us to those “bad” ideas in the first place (disclaimer: I haven’t read any Rand and don’t know exactly what devices she used to express her views there.  Maybe she did spend a little time in Atlas Shrugged explicitly trying to rebut the “capitalism is oppressive” narrative.  But I believe that such explicit rebuttal is avoided an awful lot of the time when partial narratives are pushed.)  Possibly the best way to quash ideas that challenge the desired narrative is just to proclaim it as forcefully as possible, so loudly that it drowns out all budding skepticism.  “This is a really nice story about how capitalism can lead to great wealth and personal autonomy, but I can also imagine how some poor people might get really screwed over in this syst–”  “NO!  Capitalism does so much good by giving people the freedom to create wealth through their own work and ingenuity!!”

Swatting away gadflies again.

IV. My overactive inner gadfly

Now what does this have to do with my being too willing to accept any story that’s put in front of me?

Well, some of the behaviors I preach are things that I myself don’t practice enough, and others are things that I probably take too far.  Openness to gadfly speculations is an example of the latter.

Whenever I hear a narrative, however obviously unlikely, there is a part of my mind which says, “Well it could be that way.”  This goes beyond just accepting a non-negligible possibility that the claims being presented to me are true; it often involves me coming up with supporting explanations on my own to challenge my instinctive response of “Well obviously that can’t be true.”  The result is often that I choose to assume the truth of what I’m told pending further deliberation.

It happens from time to time that an acquaintance, particularly one who has noticed how fun I apparently am to screw around with, tells me some obviously very unlikely personal detail about themselves as a joke.  And I oftentimes initially act like I believe it (nodding slowly and saying, “Okay…”) or at least don’t immediately dismiss what they said as an obvious joke.  On one recent such occasion, I said something like “No way, you’re just messing with me” about three times before finally politely acting as if I believed what my friend was saying… which of course turned out to be the opposite of the truth.  And I think that when I act gullible in this way, it comes across like I’m lacking in critical thinking, like I’ll accept whatever is put in front of me without considering how obviously absurd it is.  But what’s actually going on in my head is in a way almost the opposite: I realize the absurdity of the claim immediately and know right away that the person is most likely joking, but ideas creep in like ominous gadflies, providing half-formed, kinda-sorta plausible explanations for why they just might be telling the truth.  And it occurs to me that if those half-formed explanations are actually reality — however vanishingly low the probability seems at the moment — well then it would be totally rude of me to just dismiss them and automatically disbelieve the person, wouldn’t it?  They’re probably just screwing with me, but I’m not about to take the risk of assuming this when it might turn out that they’re serious.

I guess any epistemic behavior I’d like to see more of in the world, even something like open-mindedness, can be harmful if taken to an extreme.  I believe it was Bertrand Russell who said that one should keep one’s mind open, but not so wide open that one’s brains fall out.  And there is such a thing as having too much imagination.

Multivariate Utilitarianism

“You’re a rotten driver,” I protested.  “Either you ought to be more careful, or you oughtn’t to drive at all.”

“I am careful.”

“No, you’re not.”

“Well, other people are,” she said lightly.

“What’s that got to do with it?”

“They’ll keep out of my way,” she insisted.  “It takes two to make an accident.”

— from The Great Gatsby, by F. Scott Fitzgerald

[Content note: This is a clumsy explanation of an idea in progress, and I hope someday to turn it into something more polished.  I expect the gist of it has been developed in a more complete form in plenty of other places.  This is the longest post I’ve written here, and I don’t arrive at the main point until near the end of section III(C).  Some math (differential calculus) with explanations that can be skimmed over by those already very familiar with it.  I deliberately kept formulas and symbolic expressions to a minimum, partly because I still haven’t worked out how to import LaTeX expressions into WordPress.  There’s an explanation near the end which could use a simple chart or diagram — if I figure out how, I might edit one in.]

I. Multiple-agent problems

I want to write about my ideas on how to make moral judgments in situations where multiple agents are involved.  My goal is to try to put these ideas in a rigorous framework, but I expect that this will be only a sort of rough draft.

I’ll start with an example.

I teach math classes to university students, and there are certain types of situations that too often come up between me and them.  I’ll describe one of the most dire incidents, which happened during my first semester teaching.  For some reason, the university where I was teaching at the time put the times but not the locations of the midterm exams online.  The location of the make-up midterm exam was given on the sign-up sheet, which I would pass around in several consecutive classes for the students who knew they wouldn’t be able to make the regular exam.  When passing around the sheet in class, I was careful to point out that they needed to copy down the location of the make-up exam, because they wouldn’t be able to find it anywhere online.  Now I also, of course, give my students my email and tell them they can write to me any time if there’s any kind of problem, and that I’ll try to answer as soon as I can.  And I try to read and respond to their messages in a timely manner as promised, but oftentimes I have around 150 students total, which means pretty frequent student emails, and I sometimes don’t get to one quickly enough.

So, it’s probably not hard to see where this is going.  I had a situation where with an hour to go before the make-up exam, a student emailed me to say that he didn’t know where it was.  I wasn’t near my laptop or checking my email during that particular hour, and as a result, the student missed the make-up exam.

Whenever something like this happens, even though common sense tells me that the student is largely to blame for not being responsible and following directions in the first place, part of me feels like it’s my fault because I failed to get to their email as quickly as I could have.  As I recall, in this situation, I felt bad enough that I made a special arrangement for the student to take the exam at another time in my office, and subsequently I certainly made sure to send an email to all of my students informing them of the locations of all exams a few days in advance.  Still, at the same time, it’s not quite fair to say that the incident was really “my fault”.

When contemplating situations like these, the conclusion I usually arrive at is that we both messed up, but that at least a great deal of the blame should fall on the student rather than on me.  This conclusion seems to comply with most people’s common sense of how moral responsibility works.  However, it’s not quite so trivial to pinpoint exactly what I mean by “messed up” or to rigorously defend why the student deserves more of the blame for having missed the exam.  The difficulty lies in the fact that the student and I are two independent agents, each of whose actions (or inactions) contributed to the unfortunate result.

When I say that we both “messed up”, it’s clear enough that I mean roughly the following: each of us, being mostly unable to influence the other’s actions, did something which resulted in a worse outcome than would have occurred if that thing had not been done.  The naive judgment to make is to place blame on anyone who “messes up” in that sense — that is, anyone who does something which brings about a worse result than if they hadn’t done it.  And indeed, this method of judgment makes a lot of sense if only one person’s actions brought about a negative consequence, but it falls apart as soon as there are two or more people’s actions in the equation.  It’s nonsensical to say that two people each individually carry full moral responsibility, yet a priori there’s no obvious way to divide up the responsibility between them.  (One of these days I’ll get through an entire post without invoking a phrase like “a priori”, but that day is not today.)  Yet, it seems like many people who find themselves one of multiple agents in such a situation instinctively gravitate towards focusing on the other agent’s having messed up and conclude that the other party should be blamed (as I mentioned in my post on free will and politics, people are quick to assume others have free will while their own actions are determined).  This is essentially what the character Jordan from The Great Gatsby does in the quote above: any car accident that she gets into won’t be her fault, because the other driver would be guilty of failing to avoid it.

Yes, it often takes two to make an accident (or more than two), which can make moral judgments a lot less clear.

II. Calculus / ethics problems of one variable

A) Taking derivatives

One of the classes I taught during graduate school was a multivariate calculus course.  When teaching it, I started off almost every single lecture by recalling a concept from single-variable calculus which I was going to generalize to a situation with several variables.  I want to describe multiple-agent ethics problems in terms of multivariate calculus, and to do so, I think I’ll follow the same strategy by first describing the way I view a single-agent situation and how this can be interpreted as a concept in single-variable calculus.

The short explanation (and yes, originally I did try to write down a much more long-winded one) of single-variable derivatives is this: if you have a function of one independent variable x, then the derivative of that function at a particular value of x, written dy/dx, is the rate of increase of the dependent variable y when x starts at that value and begins to increase.  A classic example is the function y = x^2, which takes the input number (independent variable) x and squares it to get the output (dependent variable) y.  It can be shown using basic techniques of differential calculus that the derivative of this function at, let’s say x = 3, is 6.  This means that when you set x = 3 (which means that y = 3^2 = 9), then if you start to increase x at a certain rate, the dependent variable y will start to increase at 6 times that rate.  So if you add a very small increment to x = 3, let’s say 0.1 so that x increases to the value of 3.1, then y will increase by approximately 6 times that increment, which is 0.6.  In fact, when x is 3.1, y is 3.1^2 = 9.61, and given that y started out at exactly 9, we see that y increased by 0.61, which is pretty close to our estimate of 0.6.  Meanwhile, if instead you start with x = 3 and begin to decrease x at a certain rate, then y will begin to decrease at 6 times that rate.
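(For anyone who would rather see this numerically than symbolically, here is a quick Python sketch of the difference-quotient arithmetic above; it is purely my own illustration, with the numbers taken from the x^2 example.)

```python
# My own quick numerical check of the claim that the derivative of
# y = x^2 at x = 3 is 6: the difference quotient (f(3 + h) - f(3)) / h
# should close in on 6 as the increment h shrinks.

def f(x):
    return x ** 2

for h in [1.0, 0.1, 0.01, 0.001]:
    print(h, (f(3 + h) - f(3)) / h)
# prints 7.0, then roughly 6.1, 6.01, 6.001 -- approaching 6
```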

Perhaps the most important thing to note here is that the derivative is a positive number when x = 3, which means that starting to increase x will cause y to begin increasing, while starting to decrease x will cause y to begin decreasing.  If we instead start at a very different value of x, say x = -2, then the derivative is a negative number (one can compute that it’s exactly -4), and starting to increase x will cause y to start decreasing, while starting to decrease x will cause y to start increasing.

For a real-life example, let’s imagine a very simplistic scenario where my level of happiness is entirely dependent on the amount of time I spend each day on exercise; that is, Happiness is a function of Time Exercising.  Suppose that I work out or get some form of exercise for one particular amount of time each day, little enough that I would benefit in terms of happiness if I were to increase the amount of time I spend working out.  Here the independent variable x is the amount of time in minutes I spend exercising each day; suppose that at the moment, x = 30.  Let’s assume that if I were to increase my amount of daily exercise time, my level of happiness measured in Happiness Units (the dependent variable y) would begin to increase at a rate of 5 Happiness Units per additional minute of exercise.  Then the derivative of this function (whose independent variable x measures how long I spend exercising each day and whose dependent variable y measures how happy my exercise routine makes me) at x = 30 is 5.  If I go from 30 to 33 minutes of exercise per day, I calculate that my overall happiness will increase by roughly 3 x 5 = 15 Happiness Units.  (The way to write down the general formula for approximating the change in y is “Δy ≈ dy/dx • Δx”.)
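(And here is that approximation formula in code form, using the made-up numbers of the exercise example; the variable names are mine.)

```python
# The approximation Δy ≈ dy/dx · Δx, with the invented numbers above.

dy_dx = 5    # Happiness Units gained per extra minute of daily exercise, at x = 30
delta_x = 3  # going from 30 to 33 minutes per day

delta_y = dy_dx * delta_x  # Δy ≈ dy/dx · Δx
print(delta_y)             # 15 Happiness Units, as computed above
```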

Again, the most important thing about this derivative from a practical perspective is that it’s positive: starting to increase the amount of exercise I do will make me happier, while starting to decrease it will make me less happy.  Of course, the rate of increase in Happiness Units per additional minute of exercise will itself change the more exercise I add to my routine, probably becoming less and less (diminishing returns), eventually reaching a point where I’m not benefiting emotionally at all by increasing my work-out time.  Beyond that, I may be at a point where I actually become more unhappy by increasing my exercise time, for instance, if my time at the gym is something ridiculous like 4 hours every day (x = 240).  But that’s not really relevant when thinking about the derivative where I am now, at x = 30, where clearly my exercise routine isn’t particularly excessive and working out more will still make me feel better.

B) Deriving ethical statements

So how does this relate to ethics?  Well, utilitarian ethics is all about making choices that maximize people’s overall well-being, or overall utility.  If we assume that my independent variable x (exercise time) is something I have complete control over, and that the main decision at hand is how to start adjusting x, and that “overall well-being” is essentially just the value of my dependent variable y (level of happiness)… well then the ethical problem of “How should I begin to adjust the time I allot for exercise?” boils down to looking at the derivative at my current x = 30.  In fact, if my choice at the moment is either “start to increase exercise” versus “start to decrease exercise”, then clearly the mere fact that my derivative is positive means that I ought to start to increase my exercise (because then my level of happiness will go up).

Of course, we could consider my level of happiness as a function of some other aspect of my lifestyle.  Say now my independent variable x measures how much TV I watch each day (in minutes), and the derivative where I’m at (suppose it’s x = 150) might be some negative number (say -8).  Well, that just tells me that I shouldn’t increase my TV time (because it would decrease my happiness), and in fact, that I ought to decrease my TV time (because that would increase my happiness).

Or for that matter, we could imagine another scenario where I’m considering x to be my exercise time once again, but now I’m working out for 240 minutes a day, and the derivative is negative, which similarly means that at that level, I ought to decrease my exercise time.

It’s pretty obvious and non-controversial how to give praise or assign blame in these one-variable situations.  My praise/blame-worthiness is proportional to the increase/decrease in utility (for simplicity, amount of utility = number of Happiness Units) that results from the change I make in x.  If the derivative for happiness-as-a-function-of-exercise-time is +5 at x = 30, and I increase x by some small (positive) increment Δx (say Δx = 3), then I deserve praise; the resulting increase in utility is roughly the derivative times Δx, which is 5 x 3 = 15.  If, on the other hand, I decrease it by 5 minutes (this is letting Δx = -5), then I deserve blame, corresponding to a decrease in utility of roughly 5 x 5 = 25.  If the derivative were 10 instead of 5, then utility would be decreased by roughly twice as much, and I would essentially be twice as blameworthy.
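(Here is a minimal sketch of this bookkeeping in Python, under the simplifying assumptions above; the name moral_credit is my own invention, not standard terminology.)

```python
# Praise/blame in one variable: credit is proportional to the
# approximate change in utility, derivative times Δx.

def moral_credit(derivative, delta_x):
    """Approximate change in utility from moving x by delta_x;
    positive = praiseworthy, negative = blameworthy."""
    return derivative * delta_x

print(moral_credit(5, 3))    #  15 -> praise for adding 3 minutes of exercise
print(moral_credit(5, -5))   # -25 -> blame for cutting 5 minutes
print(moral_credit(10, -5))  # -50 -> twice the blame if the derivative were 10
```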

So if I have a positive derivative at the value of x where I am now, then I should start to move x in a positive direction (similarly for negative derivative moving in a negative direction), and my praiseworthiness in doing so is proportional to how large that derivative is.  And of course, a similar statement holds for a negative derivative and choosing to move in a positive direction, or vice versa, regarding blameworthiness.

This is all a very wordy and overly-involved way to state the obvious, and none of it should be controversial, but it helps to set up the somewhat more interesting two-variable situation I want to look at next.

TL;DR: Utilitarianism says that if utility is a function of an independent variable x which you control, then you should start to move x in the positive (or negative) direction if the derivative is positive (or negative).

III. Calculus / ethics problems of multiple variables

A) Two variables, same agent

All right, so above I was talking about situations where overall utility depends entirely on one parameter which a person has control over.  One might object that it doesn’t really make sense to imagine cases where there is only one parameter that can be moved, since in real life there are usually many conscious actions which result in a good or bad outcome.  Indeed, it would seem that the only way to make sense of such examples is to imagine that all other parameters are fixed and impossible to change.  I’ll come back to this idea later, but in any case, I want to consider a situation that better reflects the vast complexity of our actual universe by having many independent variables (okay, exactly two, which reflects vast complexity slightly better).

Suppose I now consider my overall happiness y as being dependent on both the time I spend on exercise (independent variable x, which we suppose at the moment equals 30) as well as on the time I spend watching TV (independent variable w, which we suppose at the moment equals 60).  I can adjust either of these independent variables however I choose, and I want to consider how my happiness is affected by adjusting either one.  If I consider my independent variables x and w jointly and contemplate gradually changing them, there are now many choices of how I can do this: for example, I can start increasing x while not changing w at all, or start to decrease w while not changing x at all, or start to increase both at the exact same rate, or start to decrease w at twice the rate that I’m starting to increase x, etc.  Any such decision I make will cause my dependent variable y to begin increasing or decreasing at a certain rate.  The problem of determining the rate of change of y when I start to change x and w in a certain way is solved using something called a directional derivative, a standard concept in multivariate calculus.  (The problem of determining how to start changing x and w so as to maximize the rate of increase of y — as is surely our objective when y measures happiness — is solved by the technique of “moving in the direction of the gradient”.  But this is a needless complication and I’m going to sidestep the need to discuss it.)
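(Directional derivatives are easy to estimate numerically.  Here is a toy Python sketch; the linear happiness model below is a stand-in I invented, and only the technique of nudging both variables at given rates is the point.)

```python
# A toy numerical estimate of a directional derivative.

def happiness(x, w):
    return 5 * x - 3 * w  # made-up model: exercise helps, TV hurts

def directional_derivative(f, x, w, rate_x, rate_w, h=1e-6):
    # rate of change of f when x moves at rate_x and w moves at rate_w
    return (f(x + h * rate_x, w + h * rate_w) - f(x, w)) / h

# Start increasing exercise at rate 1 and TV at rate 2, from (30, 60):
print(directional_derivative(happiness, 30, 60, 1, 2))  # about 5*1 - 3*2 = -1
```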

B) Interlude on smoothness

This may look like I’m still vastly overthinking things: after all, shouldn’t I just independently consider the separate decisions of how to change x and how to change w, make one decision for each, and act on each of those decisions at the same time?  In some real-life situations, this would make sense.  It depends on the particular multivariate function we’re looking at.  If it is indeed the case — if the rate of change in the dependent variable y when I start to change my independent variables x and w in a certain way is determined by just separately considering what happens when I change x and what happens when I change w — then our multivariate function is said to be a smooth function.

Let’s assume for the moment that our model of happiness as a function of exercise time and TV time is a smooth function.  That would mean that all we need to know is two values: the rate of increase in my happiness when I start to increase my exercise time without changing TV time, and the rate of increase in my happiness when I start to increase my TV time without changing exercise time.  These two values are called the partial derivatives of the function with respect to x and with respect to w respectively; they are denoted ∂y/∂x and ∂y/∂w.  Let’s say that the partial derivative with respect to x is 5 (my happiness increases when I start to exercise more), and the partial derivative with respect to w is -3 (my happiness decreases when I start watching more TV).  Then if I make the decision to increase my exercise routine by some very small amount — say Δx = 1 — and at the same time also increase my TV time by a small amount — say Δw = 2 — then I can estimate the change in my happiness to be roughly

Δy ≈ ∂y/∂x • Δx + ∂y/∂w • Δw = (5 x 1) + (-3 x 2) = -1.

So my happiness decreased by 1 Happiness Unit, which means that from a utilitarian perspective, I probably made a mildly bad decision.

It still seems pretty obvious and non-controversial that praise or blame should be dealt out accordingly in situations like this.  For instance, when I increase my exercise time by 1 minute and my TV time by 2 minutes (I guess I’m watching a few extra commercials or something?), by the above calculation, my happiness decreases by approximately 1 Happiness Unit, and my action is worthy of a small amount of blame.  If I were to, say, not change my exercise time but increase TV time by 3 minutes, then a similar calculation shows that now y decreases by 3 x 3 = 9, which means that this decision was also blameworthy and was in fact 9 times as blameworthy as the other one.

Unfortunately, both in mathematics and in reality, not all multivariate functions are smooth.  I can easily imagine that even our example of happiness as a function of exercise time and TV time fails to be smooth.  Suppose that, again, the rate of change of y when I start to increase x without changing w is 5 and the rate of change of y when I start to increase w without changing x is -3.  If we only needed to consider these partial derivatives separately in order to make our decision, it would be obvious that I ought to increase my exercise time by some amount while decreasing my TV time by some amount.  But perhaps I like to watch TV as a much-needed way to cool off after working out, and I’m actually better off (or at least happier) if I increase both exercise time and TV time by the same amount, as opposed to exercising more while cutting down on TV.  That is, increasing x by 1 and increasing w by 2 results in y actually changing by a positive amount, not by -1 = (5 x 1) + (-3 x 2) as we predicted above.
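(To make this concrete, here is a toy non-smooth model of the TV-as-cool-down story, entirely of my own invention, in which both partial derivatives match the numbers above and yet the combined move comes out positive.)

```python
# Changes are measured from the current point (x = 30, w = 60).

def delta_happiness(dx, dw):
    base = 5 * dx - 3 * dw       # what the partial derivatives alone predict
    if dx > 0 and dw > 0:
        base += 5 * min(dx, dw)  # bonus for pairing extra TV with extra exercise
    return base

print(delta_happiness(1, 0))  #  5, consistent with ∂y/∂x = 5
print(delta_happiness(0, 2))  # -6, consistent with ∂y/∂w = -3
print(delta_happiness(1, 2))  #  4: positive, not the -1 the smooth formula predicts
```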

C) Two variables, different agents

Okay, but the type of situation I started this post describing was not one where I have control over two parameters and have to determine in which directions to begin sliding them.  I was instead talking about the case of (at least) two separate agents who each have control over (at least) one parameter.  For instance, there may be two drivers, Mr. X and Ms. W, who each choose to drive at a certain speed at a particular moment (let’s call Mr. X’s speed x and Ms. W’s speed w), such that if either one of them goes just a bit faster right now, then there will be a collision which will do a lot of damage resulting in a decrease in utility (let’s again call this y).  At least naïvely, from the point of view of Mr. X, it doesn’t make sense in the heat of the moment to compute the optimal change in w as well as the optimal change in x, since he has no direct control over w.  He can only determine how to best adjust x, his own speed (the answer, by the way, is perhaps to decrease it or at least definitely not to increase it!), and apart from that all he can do is hope that Ms. W likewise acts responsibly with her speed w.

(I am of course making extreme simplifying assumptions here.  In many human interactions, it’s possible to at least indirectly influence the choices of others.  But here we are ignoring the possibility of this sort of “second-order” action where one can make a choice affecting someone else’s variable.)

So I guess all this is leading to my stating what should be fairly obvious: when two (or more) agents each control an independent variable — say the independent variables are x and w — then the one who controls x should decide how to change it based on the partial derivative with respect to x.  Again, the partial derivative with respect to x, written ∂y/∂x, is the rate at which y changes when you start to increase x but leave w the same.  If y represents utility, then our agent Mr. X should increase x if and only if ∂y/∂x is positive.  After all, he has no idea what Ms. W might do with w and can’t really do anything about it, so he should proceed with his calculations as though w is staying at its current value.

That’s what each agent should do.  I’ve said nothing about how much either of them is deserving of praise or blame in the outcome of their actions.  That’s an entirely distinct issue to consider, and a much more difficult one, at least if the function isn’t smooth.

If our function is smooth, then there is a straightforward way to apportion moral responsibility.  Since the change in utility is then roughly ∂y/∂x • Δx + ∂y/∂w • Δw, we can give one agent responsibility for the ∂y/∂x • Δx part and the other responsibility for the ∂y/∂w • Δw part and call it a day.  For instance, we can go back to our “happiness y is a function of exercise time x and TV time w” function, assuming that it’s smooth with ∂y/∂x = 5 and ∂y/∂w = -3 as before, but supposing that there are two separate agents: Mr. X, who controls my exercising, and Ms. W, who controls my TV-watching.  If, say, Mr. X increases x by 1 while Ms. W increases w by 2, then Mr. X is praiseworthy for increasing my happiness by 5 x 1 = 5, while Ms. W is blameworthy for decreasing it by 3 x 2 = 6.  Our moral judgment of their actions is determined by our judgment of what each of them should have done.
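(In code, the smooth-case apportionment is just splitting the two terms of the approximation; again, the function name and numbers are only illustrative.)

```python
# Each agent answers for their own term of Δy ≈ ∂y/∂x·Δx + ∂y/∂w·Δw.

def split_responsibility(dy_dx, dy_dw, delta_x, delta_w):
    return dy_dx * delta_x, dy_dw * delta_w

x_share, w_share = split_responsibility(5, -3, 1, 2)
print(x_share)  #  5 -> Mr. X praiseworthy for about +5 Happiness Units
print(w_share)  # -6 -> Ms. W blameworthy for about -6
```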

But it becomes much less clear how to place moral judgment with non-smooth functions, like in the driving situation where x is the speed of the driver Mr. X, and w is the speed of the driver Ms. W, and y is overall utility which stays about the same if they avoid a collision but plunges if they do wind up hitting each other.  Since disaster could only have been averted by both of them choosing not to speed up, there’s just no canonical (obvious) way to assign blame.

Similarly, in the example from my own life that I started out with, it’s true both that I should have answered my student’s email more promptly and that my student should have been much more responsible and organized in the first place.  Each of us controlled a variable, and in each case the corresponding partial derivative told us to act differently than we did.  But as I said at the beginning, it seems that the student really deserves most of the blame.  It violates common sense — in fact, it’s not really even coherent — to say I must shoulder the full blame for the missed make-up exam because if I’d acted differently it wouldn’t have happened, while by a symmetric argument my student must have simultaneously earned that same full blame.

D) Interlude on “is-versus-ought”

If we misunderstand beliefs about the best course of action for one agent in the midst of a multivariate situation as pronouncements of moral judgment on that agent for whatever comes to pass, then we may find ourselves making choices based on what other agents ought to do rather than on what they are actually doing.  Either that, or we might find ourselves thinking like Jordan the rotten driver: we can do whatever we like, and if other agents fail to choose the best action, then any unfortunate consequences are clearly their fault and not ours.

Confusing the issue of what one ought to do when controlling only one variable in a complex situation with the issue of who deserves credit for the outcome of that situation seems to be a very, very frequent problem.  I see it as an element of many personal conflicts as well as in debates on pretty much every political issue out there.  It’s plainly present just about every time one hears something about how “so-and-so could have stopped this from happening” or any mention of “victim-blaming”.  Maybe sometime later I’ll write something that delves into one of these controversies and how this confusion is a major aspect of the bad argumentation surrounding it, but for now, I just want to stress how relevant I believe it is to many serious disputes.

There is a frequently-cited fallacy called “is-versus-ought”.  It takes many different forms, but in the context I see most often, it means that someone objects to the claim that agent A should do thing X by pointing out that, ethically speaking, agent A shouldn’t be required to do X.  This type of reasoning falls under the is-versus-ought fallacy because it confuses “A should do X (given the way the world actually is)” with “A ought to be required to do X”.  Perhaps we ought not to live in a world where A needs to do X, but for the time being, that’s the way our world is.  Anyway, I would point out that this is essentially a case of, or perhaps equivalent to, the widespread confusion I’ve been emphasizing.

IV. So how does one assign moral responsibility?

I hate to write such a long essay detailing how computing degrees of moral responsibility in real-life multivariate situations is more subtle than it may appear without actually proposing a solution for determining that responsibility.  I felt sure that there must be a nice mathematical way to describe it, just as there was a nice mathematical way to describe which direction each agent in a multivariate situation should move in and by how much.  But unfortunately, every time I’ve thought that I’d gotten the right idea and tried to write it down, it turned out either not to make coherent sense or not to really explain anything.

The most intriguing idea I’ve had is to consider not only the partial derivatives ∂y/∂x and ∂y/∂w themselves, but also how each partial derivative changes as the other variable starts to increase.  That is, I would be looking at the second-order partial derivatives ∂(∂y/∂x)/∂w and ∂(∂y/∂w)/∂x.  For a smooth function, these quantities are always equal by Clairaut’s Theorem, but as I’ve already established, in real life we’re often dealing with non-smooth functions.  The idea is something like this: if increasing x makes it much riskier to start increasing w (equivalently, ∂(∂y/∂w)/∂x is negative), and meanwhile w was increased, then maybe Mr. X deserves some blame for bringing about a situation for Ms. W where increasing w would more easily lead to harm.  If we switch the variables and find that ∂(∂y/∂x)/∂w is closer to 0, then that would imply that Mr. X deserves much more of the blame.
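(These second-order quantities can be estimated with nested difference quotients.  Here is a rough sketch on a smooth toy function I picked, where Clairaut’s Theorem says the two orders must agree; everything about the example is my own invention.)

```python
# Numerically estimating the two mixed second-order quantities.

def mixed_wx(f, x, w, h=1e-4):
    # ∂(∂y/∂x)/∂w: how the x-partial changes as w increases
    dydx = lambda a, b: (f(a + h, b) - f(a, b)) / h
    return (dydx(x, w + h) - dydx(x, w)) / h

def mixed_xw(f, x, w, h=1e-4):
    # ∂(∂y/∂w)/∂x: how the w-partial changes as x increases
    dydw = lambda a, b: (f(a, b + h) - f(a, b)) / h
    return (dydw(x + h, w) - dydw(x, w)) / h

f = lambda x, w: 5 * x - 3 * w + 0.1 * x * w  # smooth, with interaction term 0.1·x·w
print(mixed_wx(f, 30, 60))  # roughly 0.1
print(mixed_xw(f, 30, 60))  # roughly 0.1, agreeing as Clairaut predicts
```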

This approach definitely appears to have some issues.  For instance, a lot of these situations are discrete (each variable can be set to either one value or another), and it’s a crude enough business just trying to estimate first-order partial derivatives, let alone second-order ones.  Oftentimes the outcomes look symmetric between the two variables.  The idea as I expressed it still isn’t clear regarding exactly what formula computes responsibility, and I’m not so sure of the best way to generalize it to three or more variables.

But still, in some examples, even discrete ones, the outcomes don’t really look symmetric and it may be reasonable to suppose that one second-order partial derivative is greater than the other.  I could almost argue this with the email example, but let me switch to a situation where it might be easier to see.  Suppose I carelessly leave my laptop alone in a public place, and somebody steals it.  Overall utility (which as usual we denote by y) is sharply decreased (at least, if we assume that it’s overall bad for an item to be stolen even though it benefits the thief).  Now this couldn’t happen without both an increase in my degree of laptop-guarding-carelessness (call it x) and an increase in the thief’s laptop-stealing behavior (call it w).  But consider this: if the would-be thief keeps their laptop-stealing behavior to a minimum, it actually increases utility for me to become more careless with laptop-guarding: it’s certainly less trouble for me if I don’t bother to keep it with me all the time.  Whereas if I keep my carelessness to a minimum, there is no change in utility due to a would-be thief deciding they want to steal it: it’s not going to get stolen either way.  It’s not unreasonable to conclude from this that ∂(∂y/∂x)/∂w < ∂(∂y/∂w)/∂x, and to link this asymmetry to the fact that the thief is more blameworthy than I am in the event that both x and w are increased.
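(Here is the laptop story as a crude discrete model, with payoffs I invented.  The asymmetry the argument leans on shows up in the two baseline first differences, even though, as conceded above, discrete outcomes can otherwise look symmetric.)

```python
# x = my carelessness (0 = keep the laptop with me, 1 = leave it out);
# w = the thief's stealing behavior (0 = don't try, 1 = try).

utility = {
    (0, 0): 0,     # guarded, no attempt: baseline
    (1, 0): 1,     # left out, no attempt: small convenience gain for me
    (0, 1): 0,     # attempt on a guarded laptop: nothing changes
    (1, 1): -100,  # left out and stolen: big loss
}

# The asymmetry described above lives in the baseline first differences:
print(utility[1, 0] - utility[0, 0])  # +1: carelessness helps while w stays minimal
print(utility[0, 1] - utility[0, 0])  #  0: stealing is harmless while x stays minimal
```

Of course, once both variables get switched on at once, the discrete mixed differences in this little table come out symmetric, which is exactly the kind of trouble I flagged a moment ago.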

On the other hand, I’ll have to think about it for a while longer before I can feel confident that this kind of explanation fully justifies our moral intuitions.  It might be better for now to assume that there is an explanation out there which is more virtue ethical in flavor, which says that things like stealing, or not following instructions for responsible student behavior, are just wrong, or at least wronger than leaving valuable things unguarded or failing to read emails within an hour of receiving them.

Either way, at least we know that there’s a clear-cut way to think about what we ought to do with our own variable even when there are other variables out there we can’t control.