A second-order confirmation bias

[Content note: This essay takes a while to get to the main point, and I’m not sure it doesn’t (almost) make more sense to start at the final section.  Also, I suspect my terminology involving “second-order” may not be the sharpest, and I’m open to suggestions there.]

I’m going to begin by diving right into a super current politically charged topic that one hears discussed quite a lot nowadays, although this is far from a motivating example for me and the idea I’m using it to lead into has been in my mind for a long time.  This particular political topic feels way overused, but the potential biases involved are so clear to demonstrate that I can’t resist.  Anyway, let’s just start there and get it over with.

So, how ’bout that Trump and his dealings with Russia.

I haven’t the remotest desire to attempt an analysis of how plausible the Russian-collusion allegations are.  There are a number of suspicious-looking circumstances (which from my view begin with the president’s refusal to release his tax records), and a lot of liberals and/or opponents of President Trump have (unsurprisingly) been harping on them pretty relentlessly over the past year and a half.  Now going through and completely objectively evaluating the validity of these accusations is beyond the scope of my background — it’s even harder for real politics buffs and policy wonks than they like to admit, at least the “objectively” part — but I can say a little bit right off the bat about the biases that will push even the most “objective” experts in one direction or another.  And it almost goes without saying that there’s quite a massive bias pushing anti-Trumpers in the direction of believing all the speculations about illicit connections to Russia.  The more interesting question is how exactly to explain this bias.

On a very basic and naïve level, nobody prefers to believe that the president has committed a set of additional crimes, on top of all the other horrible things he’s said and done in the eyes of a Trump-hater (remember, there is a reason why they are a Trump-hater in the first place, or most likely, about 257 reasons).  Maybe it makes some of us feel a little soothed, in the wake of his election, to believe that he wasn’t legitimately elected in the first place, since that further bolsters the idea that the American people as a whole didn’t want him and that does feel slightly encouraging.  But in the big picture, we can leave that aside: there’s little to no direct comfort in believing such alarming speculations.  Things are bad enough already; who wants to believe that the most powerful man in the country (if not the world) is even worse?

A source for the bias which I do think is obviously present here is what I want to call “inductive bias”.  (I know that’s a term with a technical meaning which may or may not be compatible with the way I’m going to use it in this essay.)  It’s part of natural human instinct to predict things about our universe based on the patterns we’ve observed so far — this is the law of induction that our scientific method depends upon, for instance.  In particular, we build models of particular people based on behaviors we’ve already observed from them and then use those models to evaluate the likelihood that they will do some new thing.  This mechanism can be a little overactive, though, leading us to put too much weight on the model we’ve constructed from our past observations.  We’ve already seen Mr. Trump do and say a plethora of dishonest and disgusting things, and we’ve used that data to build a model of him as a dishonest and disgusting person.  If we take this too far, it may lead us to automatically believe (or be overly biased in favor of believing) whatever rumors come our way describing any additional dishonest and disgusting thing he may have done.

But personal experience of listening to people who rant about Trump’s involvement with Russia tells me that this definitely isn’t the whole story.  There’s something else going on here when these ranters pepper their rants with words like “treason” and phrases like “illegitimate election” and “automatic grounds for impeachment”.  This something else is especially apparent when many of those same ranters believed that the 2016 election was already illegitimate and that 257 other things Trump did were already automatic grounds for impeachment.

The fact is that even the most vehement Trump-hater realizes on some level that there’s at least a tiny bit of room for disagreement on exactly what to do about the guy and his appalling behavior.  We all get the idea that there will be some resistance and pushback and deployment of counterarguments when the administration is confronted with calls for impeachment on the grounds of everything minus the Russia stuff.  And as long as those counterarguments are out there, deep down somewhere in the heart of even the most outspoken anti-Trumper there will be that little nagging doubt that maybe we can’t immediately conclude that automatic impeachment is warranted, that maybe we have to do a little more work on the courtroom floor to argue in a fully objective manner that our president is not fit to bear his title.

So it’s easy for us to tell ourselves that of course all that Russia stuff is true, so that now we really have a silver bullet to effect an impeachment and (hopefully) a return to normalcy where our country is no longer being run by a sociopathic buffoon, all while avoiding the need to argue over the only ever-so-slightly more subtle outrages that have already been proven.  That silver bullet may still fail to work somehow, but at least it’s an ever-so-slightly better bet than we have now.

That’s the strong vibe I get from most of the voices out there which seem more sure than I am of Trump’s shady ties with Russia.  A few of them, by the way, even go so far as to express confidence that Vice-President Pence has always been in on this as well and therefore is also quite impeachable.  Given that on the larger scale we haven’t seen much in the way of speculation or actual evidence for Mr. Pence being involved in these affairs, it’s hard for me not to interpret this as a reflection of hating Pence and his positions and therefore wanting a ready-made rationale for getting rid of him along with Trump that avoids the need to win a war of ideas against said hated positions.

A variant of this is seen in the movement to remove Trump from office by proving mental deterioration through the word of dozens of mental health professionals who have analyzed him from afar.  Here this approach doesn’t even rely on the model of Trump as a despicable person or succeed in strengthening it — if anything, a byproduct is that it will absolve him from some of the blame (“he’s really become confused in his old age”).  But, if as easy to implement as claimed, it would have the same effect of kicking the man out relatively quickly and cleanly, which can make it very tempting to agree with the arguments purportedly demonstrating Alzheimer’s.

To sum up a little broadly, people will go a long way to save themselves the work of having to justify their opinions to the world in order to bring other people around to their side (and hence bring about much-needed action).

This relates, perhaps a little vaguely, to other fallacies I’ve written about on here, where people seem, in my view, too quick to embrace the strongest beliefs at one side of the Overton Window which, when assumed true, make it easier to reach whatever conclusion they were going to reach anyway.  I’m especially reminded of when I wrote about not drafting all arguments as soldiers; I introduced my main point as follows:

One major thrust of the rationalist approach to winning arguments is to avoid the “arguments are soldiers” mentality — that is, the attitude that every argument for one’s side of a debate, whether good or bad, is an ideological weapon and all must be deployed if one is to win on the political battlefield.  The argument against using arguments as weapons is itself a call for separating the object from the meta, but I see another objection: namely, that the use of “arguments as soldiers” oftentimes implicitly weakens the good arguments for one’s own side.

One example I gave was of a common atheist debating tactic of emphasizing the number of people killed as a result of religion:

I’ve actually seen Richard Dawkins open a debate on the existence of God with this strategy, then backtrack when he sees his debate opponent is formidable at rebutting that point, saying, “But counting up the number of lives lost due to a particular ideology doesn’t really matter anyway; all I care about is which belief system is true!”  […]  Well then, Dr. Dawkins, why didn’t you start by arguing that way in the first place?  In this failed rhetorical maneuver, Dawkins has actually damaged the argument against religion as being antithetical to the objective pursuit of truth by implicitly making this point of view seem delicate, as though it needed to be backed up by statistics on the number of deaths resulting from the failure to choose secularism.

At the time I wrote that, I was focusing on the detrimental effect on one’s own side that comes from erring in this direction, but now I want to investigate why we tend to err this way in the first place.  I don’t think it’s fully explained by what I called “inductive bias” (“Religion already seems really immoral and untrustworthy; I bet on top of that it’s responsible for more deaths than atheism!”).  I think the example of the debater in the quoted paragraph above illuminates a deeper explanation: taking stances that more blatantly support your side (e.g. religious belief systems are responsible for the most deaths) makes it easier to win debates.  Except that here I want to expand the notion of debate-winning to include debates — really just investigations of the truth — that take place internally, within each of us.

I see vaguer connections to other common tendencies in how we choose what to believe, from the just world fallacy to jumping to the worst conclusions about famous long-dead figures (apart from our attraction to sensationalism, it’s sort of easier to plop personages like Lewis Carroll and Walt Disney into boxes marked “toxic” than to actually be faced with the task of analyzing some of their actions using shades of gray).  But dwelling on the relation with these other fallacies would cause me to drift away from the one I originally wanted to talk about.  And it’s something I think I first nailed down a few years ago from observing how I reasoned in a very simple, everyday, mundane situation.

I have allergic rhinitis, which basically means that at any moment I’m prone to abruptly feeling stuffy and ticklish (to myself I actually use the word “itchy”) up in my nose and having sneezing fits for no reason at all.  A lot of people have trouble with the “at any moment” and “for no reason at all” parts.  I might follow up a bout of sneezing with “Sorry, my allergies are really bad at the moment,” and the reaction will typically be either a sympathetic “I know, pollen season sets a lot of people off” / some other remark referencing atmospheric conditions at that time of year, or a confused “Really? But it’s not allergy season at all!”  Apparently not that many people have firsthand experience with allergies that are completely indifferent to factors like time of year.  Mine don’t even seem to care where I am or what I’m doing at any given moment — I’m about as likely to get caught in a sneezing fit in the shower as I am when outside cycling.

The interesting thing is that for a period of several years, not too long ago, my go-to response to questions about what was setting off my allergies was “Oh, this happens whenever the weather changes.”  And I said this not just to avoid having to explain the fickle nature of allergic rhinitis, but because I believed it.  After asking myself over and over whether there were any factors that seemed correlated with my allergies spiking, I had become fairly convinced that the one common factor was changes in temperature and maybe in atmospheric pressure as well.

Then one day, it finally occurred to me properly that where I was living at the time, the temperature was always changing.  As in, every single week the weather forecast would show a fairly significant increase or decrease in degrees Fahrenheit — or maybe more than one temperature swing in the same week — and these increases and decreases occurred not according to the changing of the seasons but in any month of the year.  (By the way, nowadays I live in a climate with much steadier day-to-day weather trends, and my allergies haven’t improved in the slightest.)

I had always more or less been conscious of this, but I’d been avoiding fully considering it in connection to my allergies.  Why?  Because I had been just that desperate to be able to explain them in terms of some external stimulus that I’d latched onto the weather explanation without questioning it in a rational manner.  Even though it wouldn’t have helped me weaken my allergies, even though in fact this model just made them dependent on something else outside my control (short of moving to another place with slower-changing weather), somehow it just would have felt nice to be able to predict what would happen a little more easily.

I’ve noticed this mentality in a lot of other people, particularly regarding a lot of other health issues, and it’s not hard for me to come up with an explanation.  We humans want to be able to make some sense of the chaos.  We want some control in the form of an algorithm to follow, or at least some degree of predictability in our lives.  So we grab onto fad diets and universal lifehacks (even unpleasant ones, as long as they’re straightforward!) in order to experience the feeling of promise that following some instructions will allow us to solve a problem that seemed inscrutable.  We’ll eagerly believe in conspiracy theories as well, if they seem to make sense out of events that were senseless and scared us in how abruptly they struck out of nowhere.  We humans will die a thousand deaths before having to acknowledge to ourselves the incredibly random and uncontrolled nature of our reality, particularly when it comes to such unfathomably complex phenomena as human health and events.

I think this is what’s mainly behind the kinds of beliefs I was criticizing earlier, even more than the allure of believing what’s consistent with an already-formed opinion.  The bias we’re trying to pin down is more about the illusion of control than it is about believing what we want to believe or what best fits a model.

I view confirmation bias as belonging at the top of the List of Biases in the Rationality Handbook.  Everything I’ve been talking about today could be viewed as a form of confirmation bias, I suppose, so why am I spending over 3,000 words on it?

The quintessential form of confirmation bias is the human propensity to believe in whatever one finds most comforting.  Our universal temptation towards this error is quite overt and can’t be denied.  It was one of the first fallacies I was warned against growing up, and it played a prominent role in my first understanding of the word “rationalism”: a rationalist was someone who made a point of believing things on objective evidence, which in particular meant not believing things because they’re comforting (as that seemed to be most often the reason not to follow objective evidence).  The cliché example of course was the components of traditional religion such as believing in an afterlife or that every atrocity is in accordance with the plans of an omnibenevolent God, but more mundane assumptions in favor of what’s pleasant to believe were clearly happening on a daily basis.

Fortunately there’s a pretty widespread awareness of and caution against this brand of irrationality, but I’m afraid all of this emphasis on such an Avoiding Confirmation Bias 101 skill is obscuring a much sneakier form of confirmation bias, where the mistake lies not in being misled by what seems comforting but in being misled by what seems to provide easier algorithms for arriving at the right answer.  In a way, the latter type of belief is still a form of comfort, because it’s nice to feel that the right answer is more certain or easier to reach, to feel more in control.  But I regard it as still quite a different form of comfort — a second-order comfort, if you will.

And this distinction, in my opinion, can’t be overemphasized, because frequently such second-order comforting beliefs are directly at odds with the basic, more obvious ground-level comforting belief.  At the ground level, nobody feels better by believing that the leader of the free world is even worse than we’ve already seen or that much-loved historical celebrities such as Walt Disney actually stood for horrible things.  Those who fall prey to the second-order bias are often directly defying the first-order version in doing so, insisting on beliefs that are the stark opposite of what’s comforting.  And ironically, this makes it harder for them to see the error in their thinking: the awareness of the first-order “believe whatever makes you feel better” bias is so strong that they can counter whoever argues with them by rounding off the opponents’ position to simply burying one’s head in the sand (“Sure, you just won’t open your eyes enough to see that our president would commit treason because that feels less scary to you!”).

Much as I’m a little embarrassed to admit this, I actually skimmed through lists of fallacies and biases on Wikipedia and couldn’t find one that directly treats the thing I’m attacking in this blog post.  I couldn’t even find one that specifically treats the more basic “believe in what’s comforting” bias, though I suppose this is already widely regarded as the most common form of confirmation bias.  I’m therefore, for the moment, adopting the terms “first-order confirmation bias” and “second-order confirmation bias” to refer to the better-recognized fallacy and the focus of this essay respectively — that is, unless someone suggests better terminology to me (as has happened several times in the past).

At the end of the day, we’re all influenced by that subconscious instinct to create less work for our brains when it comes to deducing the truth.  That’s the cause of many fallacies and, as I’ve argued before, the ultimate reason we insist on ignoring gadflies.  But nowhere is this susceptibility to deductive laziness more obvious than in our propensity towards believing whatever will give us a clearer and more straightforward algorithm for going forward.
