Confessions of a helpless epistemon

[Content note: Staying in the realm of useless abstractness and away from concrete examples.  This is the longest I’ve ever had a substantial unfinished draft lying around — around four months! — which is a record I hope will never be reached again on this blog.  Hopefully the tone and focus haven’t been too badly diluted by this gap.]

I was so pleased when I first encountered the term “epistemic helplessness”, because it described my own mental state so much of the time.

Feeling misunderstood is a hallmark of the emotional experience of the typical teenager, but probably most humans at any stage of life experience this emotion regarding some aspect of who they are.  As for me, I often think that the main reason other people routinely fail to relate to me comes down to a state of epistemic helplessness which dominates my perception of the outside world.

But before I get into the misunderstandings which I attribute to this uncomfortable epistemological state, I should first try to convey what “epistemic helplessness” means when I apply the description to myself, or at least what it feels like from the inside.

To put it rather vaguely, I feel that I live in a world of uncertainty.  Now I’m pretty sure* that everyone feels that way to some degree.  But the longer I’ve lived, the stronger an impression I’ve gotten that relative to most of the people around me, I’m quite uncertain of empirical facts.  Not just simple statements about the physical world, but also (and especially!) assessments of complex constructs and situations involving many humans, such as political events.  On any given topic, there are about a hundred other (pretty easily visible) people, clearly far more knowledgeable in that arena than I’ll ever be, who are arguing from multiple sides of it.

And when I say “uncertainty”, I don’t mean it in the sense of being inclined towards assigning probabilities to things and updating based on incoming evidence — what some call Bayesianism — rather than leaning towards believing something is absolutely true or absolutely false.  If anything, my Bayesian mindset makes me feel more epistemically empowered; it certainly doesn’t constitute any mental state that I would call “helplessness”.  No, I mean “uncertainty” as in often having no idea where to begin in assessing a situation, because no matter how much the evidence seems to suggest one outcome (or probabilities assigned to a set of possible outcomes), life is always complicated enough that there’s sure** to be something else I haven’t considered.
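(For the curious, the probability-updating move I’m gesturing at can be sketched in a few lines of Python.  The numbers below are entirely made up for illustration, and I’m certainly not claiming anyone computes this way in their head:)

```python
# Toy Bayesian update in odds form: start from a 50% prior in some claim,
# then let each piece of evidence multiply the odds by a likelihood ratio.

def bayes_update(prior, likelihood_ratio):
    """Return the posterior probability after one piece of evidence."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

p = 0.5  # initial credence in some hypothetical claim
for lr in [3.0, 2.0, 0.5]:  # two supporting observations, then one opposing
    p = bayes_update(p, lr)

print(round(p, 2))  # prints 0.75
```

Each likelihood ratio says how many times more likely the observation is if the claim is true than if it’s false; evidence nudges the credence up or down but never to absolute certainty — which is the whole point of that mindset.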

I suffer from too much imagination, and that clearly lies at the root of this issue.  Part of the definition of maturity is having respect for how complicated the world we live in really is.  But an excess of respect for this complexity can be paralyzing.  It means that no matter how much evidence you see for some involved idea or stance, no matter how many reasonable-sounding arguments are made, all of that justification amounts to almost zero in the vast sea of uncertainty made up of so many vague might-be’s coming from all directions.  To make things worse, there’s a certain laziness ingrained in my intellect where I tend not to be well researched on most topics I find myself discussing — this has something to do with the fact that I’m interested in a lot of things but only seem able to muster Interest in a seemingly arbitrary subset of them.  The result is that I’m often much less knowledgeable than my acquaintances about whatever issue we’re debating.  All in all, sometimes the urge to just throw up one’s hands and give up the search for truth (or the quest to get as close as possible to it) is irresistible.

So when all is said and done, an overload of respect for complexity probably doesn’t lead to the most sophisticated approach to analyzing our universe.  And of course, from the outside, some of the rhetorical behavior that results from it is hard to tell apart from naïveté and a lack of respect for complexity.

* Yes, the turn of phrase “I’m pretty sure” seems to contradict what I just said in the previous sentence, but it so happens that I do have relatively reasonable levels of confidence in my understanding of how other individuals function; see below.
** And yes, the start of this sentence makes me sound awfully uncharacteristically confident as well, but clearly certain meta-level propositions are exempt from my perpetual tendency towards uncertainty; this is definitely apparent below.

I strongly suspect that a lot of those people with whom I’ve found myself discussing some contentious issue or question quietly carry an impression of me as excessively timid and mild in my opinions, or trying to disagree equally with everyone, or even as somewhat “radically centrist”.  True, I’ve already admitted on this blog that I did have tendencies towards “radical centrism” at some earlier times in my life (by the way, I thought I was making up that phrase at the time I wrote that post — add another to the list of terms that already existed at the time I “invented” them).  But while that flaw isn’t entirely unrelated, I am not typically trying to lean towards the most central or even the most neutral position on any current topic of discussion.  Instead, I’m leaning towards no position, because in the world as I see it there is surely a possible counterpoint to every point being put forth.

But the misconception of my thought processes is probably worsened by the fact that I often come off as rather lazy in backing up my claims that there might be other sides to some issue.  This is because my doubts are often based not in concrete speculations but in vague suspicions that “this kind of thing” always has other possible explanations.  So the scene isn’t necessarily one where I’m playing the role of a gadfly poking and prodding with actual concrete suggestions, but one where I’m despondently deflecting other people’s assertions with vague, almost trollish-sounding remarks like “well it depends on who you ask” and “there are multiple sides to every story” and “there are always more obstacles for someone in that position than we might think”.  For instance, I oftentimes have an attitude like this in the face of erupting scandals over the actions or inactions of persons in powerful-looking positions — I have a tendency to reserve my judgment on the basis of “I have a feeling there’s a lot more dry bureaucracy involved for that individual than we know about, having no idea what their day-to-day program is actually like”.  And it can be hard to make others understand that at least in theory I’m addressing things this way out of a more profound sense of uncertainty than what they seem to routinely experience, rather than out of an excess of charity or just plain (slightly obnoxious) laziness.

A further irony is that at least half the time, my opinions aren’t interpreted as stubbornly noncommittal but rather the opposite: it’s assumed that deep down I must have a strong opinion one way or the other, and since I seem skeptical of the confident views being put in front of me, I must be taking the other side of the debate.  In these cases I suppose that not only does my level of uncertainty come across as overly lazy, but that it comes across as so unbelievably lazy that it can’t be genuine: I’m obviously just being coy about the fact that I positively disagree.  I’ve heard a lot of complaints about having one’s opinion shoehorned by others via a false dichotomy, but I’m often still taken aback at how inconceivable to some people it apparently is that my opinion could ever be one of genuine agnosticism.

A number of people through the course of my life have been concerned about how often and strongly I seem to rely on majority opinion.  I can’t tell you how many times it’s been pointed out to me that I can’t base my tentative conclusions (which aren’t always properly recognized as tentative; see above) on “what other people think”, because Other People is obviously a very fallible source of factual beliefs.  This criticism is of course only a minor variant on the classic “If everyone jumped off a bridge, would you jump off a bridge too?” argument.  In my opinion there’s a pretty obvious rebuttal to the typical bridge-jumping objection, but I prefer to lay out the initial response that usually goes through my head (despite it being less precise and more on the emotional side) whenever I’m warned about the dangers of citing majority opinion.

There is a famous quote from science fiction writer and skeptic Isaac Asimov that I always think of in this context, although the analogy required will be a rather loose one:

“Don’t you believe in flying saucers?” they ask me.  “Don’t you believe in telepathy? – in ancient astronauts? – in the Bermuda triangle? – in life after death?”

No, I reply. No, no, no, no, and again no.

One person recently, goaded into desperation by the litany of unrelieved negation, burst out “Don’t you believe in anything?”

“Yes,” I said. “I believe in evidence. I believe in observation, measurement, and reasoning, confirmed by independent observers. I’ll believe anything, no matter how wild and ridiculous, if there is evidence for it. The wilder and more ridiculous something is, however, the firmer and more solid the evidence will have to be.”

A lot of the time I feel sort of like Mr. Asimov being pestered to believe in a bunch of controversial phenomena.  Only instead of supernatural or paranormal things, I’m being bombarded with claims that sound scientific or somehow common-sense, along with arguments for them that sound perfectly reasonable on first hearing.  “Don’t you believe in my amateur understanding of the nutritional value of ingredient X?  In my sensible-sounding but simplistic argument on the dangers of activity Y to one’s health?  In my elementary mathematical demonstration of why economic policy Z is a good idea?”  Or even occasionally, “Don’t you believe my factual claim based on evidence provided by that specific scientific authority whose paper I can refer you to (in a field where the experts constantly disagree and the consensus changes every 10 years)?”

And my response basically boils down to “No, no, no, no.”  Or more specifically, “There’s always another side we’re not thinking of right now or an aspect we just can’t understand from our position; there’s always another expert who disagrees; etc.”

One can imagine my interlocutor throwing their hands up in exasperation at this point and crying, “Don’t you believe in some kind of claim using some kind of justification?  Or are you just infinitely skeptical about everything?”

And my answer is, “Yes!  I believe in evidence.  But not necessarily everything that you might consider good evidence.  I mean evidence that appeals to both the overly-imaginative and the lazy intellects.  Evidence that makes some assertion look significantly more likely regardless of how many additional wrinkles to the situation we may not be considering.  Evidence that shines out as a truly unambiguous beacon in my very foggy world.”

For me, majority opinion is serious concrete evidence of this kind.  It is not absolute proof.  The majority can be wrong.  My perception of what actually is the majority belief is also prone to error, especially because my view of the majority often doesn’t extend beyond my local bubble.  But a lot of the time, what Other People think is a far stronger indicator that a certain assertion is at least worth seriously considering than one person’s plausible-but-overly-simple-looking explanation of it.

Think of it this way: my concern is that I can’t understand most issues well enough to properly evaluate all sides of them and that I can’t trust most other individuals to understand them well enough for this either, no matter how confidently they claim to.  But if a large collection of other people, each with their own knowledge of various aspects of the problem, generally takes a particular side?  Now we’re essentially looking at an average over many outcomes, which is clearly a far more reliable estimate than the outcome of a single experiment whose degree of uncertainty appears rather high.
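(That averaging intuition can be put into numbers with a quick toy simulation — the quantities here are completely made up: each individual’s estimate of some true value is badly noisy, but the average of a thousand such estimates lands much closer to the truth.)

```python
import random

random.seed(0)  # fixed seed so the toy example is reproducible

TRUE_VALUE = 10.0
NOISE = 5.0  # each individual's estimate can be off by up to 5 either way

def one_guess():
    """One person's noisy estimate: the truth plus a large random error."""
    return TRUE_VALUE + random.uniform(-NOISE, NOISE)

single = one_guess()
crowd = sum(one_guess() for _ in range(1000)) / 1000

print(abs(single - TRUE_VALUE))  # typically a few units off
print(abs(crowd - TRUE_VALUE))   # typically a small fraction of a unit off
```

The errors of independent guesses partly cancel, so the crowd average has a much smaller spread than any one guess — which is roughly the statistical content of “what Other People think is worth taking seriously”, at least when their errors really are independent.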

The importance I put on what other people think is complemented, perhaps, by a fair amount of confidence in how other people function.

I can’t fully explain or justify this, but I think it’s inherent both in how I navigate real life and in how I argue in my persuasive writing.  On this blog, for instance, when I’m not framing and justifying the points I make in terms of dry abstract logic, I’m generally appealing to some striking impression of human behavior, in particular to provide explanations for why we humans make the rhetorical mistakes that we make.  My whole thing about free-will explanations versus deterministic ones (on which I intend to expand soon with a series of other posts) was pretty much entirely an appeal to this sensibility, once I got past the pure philosophizing on the problem of free will.

I call this my “social sense”.  It’s largely built on my commitment to empathy and increasing experience with other people, and therefore my confidence in it is only getting stronger the older I get.  I’d say that “social sense” appears to have a pretty good track record with me, but that’s probably largely an illusion arising from an ability to re-frame things in accord with my already-existing convictions on how humans act, rather than from any tested predictive power.

Maybe I can try to explain this confidence in “social sense” as follows.  There’s a common misconception that emotions are inherently irrational, so to be as rational as possible we have to minimize our reliance on emotion, and so on.  I call this a misconception because I believe that emotions can be harnessed to work in tandem with cold logic, since ultimately, the feelings we experience are a natural faculty shaped by evolution, one that gives us helpful instincts about the sentient behavior happening around us.  I want to suggest that, in the same vein, even if I’m tremendously fallible in understanding abstract philosophical or scientific models and considering all their possible alternatives, at least the “social sense” I’m equipped with through biology is reasonably likely to point me towards truth concerning the behavior of my fellow humans.

And this at least gives me a small weapon for skepticism about all the factual claims that fly my way: sometimes I can say, “Regardless of how valid your logic sounds right now, this kind of justification is often a particular trap that people fall into because of X, Y, and Z, and therefore I find it suspect.”  I don’t claim that it’s really much of a weapon, but I’ll take what I can get.

I apologize for all this vague and blathering exposition of my experience of epistemological helplessness.  But to be coherent and rigorous would almost certainly open me up to all kinds of plausible-sounding counterarguments, and after all, how much can I ever feel really sure of?


My evolving views of (American) politics

(a journey in epistemic helplessness)

[Content note: throwback to my first two posts, published this month last year.  On politics but I don’t know enough politics to make a “political” post per se.  A few issues listed in tags.]

First, a “meta” note.  I’m pleased that I got some substantial ideas down in writing here last year, however imperfectly, but I feel that I went very slightly astray of what I originally envisioned for this blog.  Therefore, I’ve made a resolution to steer my writing in a direction away from posts on mostly-impersonal abstract rationality concepts and towards posts on more concrete and personal issues.

The primary purpose for me in writing essays for this blog has always boiled down to something akin to self-therapy, as I tried to make clear from the start.  I think I succeeded in this at the beginning, but eventually my focus got slightly bogged down elsewhere.  I don’t regret focusing on the ideas I tried to express here last year, since it felt necessary to give myself a framework to explain why I think in the way that I do.  However, I’m beginning to wince at how many of my previous essays read like long-winded cerebral wanderings through subtle abstract questions with so much talk of “rationalism this” and “rationalist community that”.  It was never my intention to sketch out a dingy addendum to Yudkowsky’s Sequences.

It should be understood that my “rationalisty” essays aren’t meant to be persuasive in the sense of arguing that my approach to certain questions is objectively the best one; they’re instead meant to describe the way my mind works.  What I’ve ultimately wanted to do all along is jot down in writing the feelings and perceptions that guide my current approach to getting through my life both socially and on a more epistemic level (much of which, clearly, is tied in with rationalism).  And of course, I should feel free to go through with this kind of jotting-down even if I’m afraid what I have to say comprises ideas that are poorly defined, obviously incomplete, or even very likely invalid.  That way, I can more easily analyze my own beliefs, and with any luck, a few other people with whom these questions resonate can analyze them as well.

Eventually I want to lay out some content that is way more personal.  (I feel like my writing flows more easily when I get a little more personal and less lofty anyway, but we’ll see.)  There are issues that I feel uneasy talking about that I already find myself putting off even though I’ve laid out most of the necessary framework (hopefully stating this intention now will help me to eventually follow through on it).  The evolution won’t be sudden or drastic — for instance, there’s definitely one more essay of the long-winded, cerebral, “rationalisty” type that I want to write here — but as I said, I’m starting to consciously push in the direction of personal stories and rants… And that begins with this entry, a throwback to the essay I wrote a year ago on the evolution of my attitudes through different periods of my life.  (And I’m not going to quit explicitly tying everything into rationalism just yet.)

Before I get started, I have to make it clear that this is a far cry from what anyone could call a “political article”.  This blog could never be a political blog, because, to put it bluntly, I’m somewhat of a political ignoramus relative to the writers who run such blogs, or even compared to many people of similar intelligence to me but no formal expertise in public policy.  I would love to understand more about macroeconomics, environmental policy, our electoral system, world history and events, and many other things, and I believe I have the intellectual capacity to do so (especially the more mathematical areas among these).  Yet somehow I’ve never managed to muster the necessary focus.  Clearly I’m interested in politics as a whole and many political issues (as evidenced by frequent references on this blog), but this is apparently an interest versus Interest thing.  I am, on the other hand, quite engaged with gaining an intuition for a lot of the main characters on the political stage and their personalities, as well as the broad mentalities guiding the supporters of particular policies or entire parties.  This is not ideal but so far has been my best guide to understanding concrete issues and feeling sure of which sides of them to fight for.  So my journey is not one of direct understanding but of groping around trying to understand the convictions of others and why they feel as they do.

I. My apolitical beginnings

As most readers have probably figured out by now, I grew up in America.  I also grew up in a fairly liberal household, politically speaking.  I remember my first explanation of the difference between America’s two major parties being, “It’s all very complicated, but the Democrats often try to make poor people richer, while the Republicans often try to make rich people richer.”  As I got older, I asked more questions and learned that the Republicans favored tax cuts for the wealthy and tended to favor fewer restrictions on pollution by big businesses despite what scientific evidence was telling us, while Democrats stood up for things like a woman’s right to choose what she does with her own body and increased funding for education and the arts.

At that fairly young age, my skeptical thinking skills had not yet caught up to my innate “believe sufficiently developed-sounding narratives put in front of me” tendency that I’ve alluded to before.  So it’s no surprise that I didn’t subject my initial Democratic sympathies to much critical thinking.

(Warning: if you’re expecting a gripping saga detailing how I swung from this spot on the political spectrum to authoritarian populism, then the alt-right, finally stopping to rest at anarcho-communism or something like that, then you’re in for a disappointment.  Spoiler alert: I’m still mostly sympathetic to the Left.)

The first time there was a political story I really followed was the election of 2000.  At the time I couldn’t really understand how a system which allowed someone to win the popular vote but lose the election could possibly be justified, and the whole thing seemed like a ridiculous mess.  Then the September 11th attacks happened, which finally triggered a habit of following the news regularly.  I remember feeling some sense of loyalty to our then-fairly-new Republican president in the immediate aftermath, which eventually eroded as he for some reason pushed us into Iraq (I didn’t like war, and it seemed that he rushed us into it without obtaining an appropriate amount of evidence, but of course my views were still being colored by those closest to me who were pretty anti-Bush).

Then in high school I began to examine the political climate in America much more closely and critically.  In last year’s post “My Evolving View of Rationalism”, I expressed a belief that most people first form their personal worldviews in high school.  This includes one’s position on the political spectrum (not based on what one is told by parents, etc., but deeply-held beliefs arising from honest questioning).  I remember one defining moment in American History class when I felt this happening to me.  We were watching some video about I-don’t-remember-what, and (I think) a very wealthy CEO was asked whether he considered himself greedy.  He denied it, explaining that he had created thousands of jobs and claiming that he had done more to help the world than Mother Teresa.  I was stunned, not because the man was obviously kind of a jerk (I was expecting that anyway), but because it properly occurred to me for the first time that putting more money towards big businesses might actually help the poor in some way.  Prior to that, I had never made a genuine effort to examine why so many people were in favor of tax cuts for the rich or for big businesses.  I guess I’d been leaning on the assumption that fiscal conservatives were either rich themselves or uneducated (never mind the fact that the most conservative guy I knew growing up was on reduced lunches and had a parent in academia).

It was at around the same time that I began to actually care a lot about religion and why people believed in it, in contrast to my earlier religion-is-silly-and-boring stance, as I’ve described elsewhere.  And I put two and two together and realized that religion was playing a major role in politics, and that in fact the stronger sort of religion that I was especially philosophically opposed to was being embraced by the Republican party.  (Another even more significant defining moment I remember from that history class is arguing with my mildly conservative teacher over same-sex marriage, when it really hit me that religious belief could lead to moral values that I couldn’t relate to at all and that these could be used to decide moral policy.)  So at around the same time I was realizing that there were other sides to the whole fiscal policy debate, my support for social liberalism was beginning to solidify.  But I remained for the time being not especially outspoken overall when it came to politics.

And then, we entered another presidential election season.

II. How America could have done better in 2004

By 2004, I had cemented myself into a certain political mould, as had many of my high school peers.  Mid-to-late-adolescence, after all, is a period of radical beliefs for many.  I was surrounded by radical Marxists, radical libertarians, radical Christian conservatives, radical anti-Zionists… so what type of radical was I?  Well, by now my budding rationalist sensibilities had instilled in me a distrust of any political ideology that claimed extreme answers to all problems, so I was determined to stay as far away as possible from the periphery of the space of political positions and maintain an openly critical attitude towards everyone’s positions.  Of course, what I didn’t have the maturity to see then was that I was being at least as blindly ideological as anyone else — in fact I was essentially masquerading as a radical Centrist.  I still knew that I held a number of partisan positions deep down, but bent over backwards trying not to acknowledge them (some of this was out of a healthy concern that I might be biased towards my parents’ beliefs).  And once again, this was paralleled by how I chose to present my religious views.  I identified as agnostic, which I often defended as the most moderate, open-minded view.  But in retrospect, I was a rather militant agnostic — granted, I still am somewhat — and my attempts to dole out equal criticism to theistic religion and to straight-up atheism were pretty silly.

And so I was no fan of George W. Bush, but when John Kerry first emerged as Democratic frontrunner, I was determined to conclude that he was probably almost as bad, despite having heard very little of what he had to say.  Then they debated, and my attitude towards him, and the whole electoral contest for that matter, changed completely.

I should back up for a moment and explain one aspect of philosophy that I was very passionate about at the time.  I had become a great follower of what one might pejoratively call “scientism”.  In other words, I valued the scientific method very highly and regarded a general version of it as the best means to reaching empirical truth.  This was the very cornerstone of my philosophical worldview and my brand of rationalism at the time.  I think what spoke to me particularly emphatically was the idea of keeping one’s mind open to all possibilities and then putting them through very rigorous testing — what Carl Sagan called “a marriage of skepticism and wonder” — which required the ability to recognize and admit one’s own mistakes.  It implied a system of self-correction which I considered to be a very beautiful concept.

I had made the connection that the American constitution was an embodiment of a similar concept (very revolutionary for its time): a system of laws which evolved through acceptance of new ideas, testing them by running them past the people, and accordant self-correction.  Of course this was only an ideal and the American government didn’t quite work this way in practice.  But the way I saw it, America was founded upon this principle, the same great principle that governed scientific research, the same concept that separated open-minded rationality from blind dogmatism.  During those years many people were arguing over what it meant to love one’s country in the midst of a war that many of its citizens didn’t support.  I knew where I stood: I loved America regardless of the decisions its politicians made, because its abstract defining ideals formed the very foundation of my creed.  And nothing was more un-American than defending whatever America did on a principle of “my country, right or wrong”.

The final weeks of the 2004 campaign season, and particularly the presidential debates, reshaped my ideas of where each major side of the current political spectrum stood with respect to my most deeply-held epistemic conviction.  On the Democratic side, we had a candidate who spoke in a nuanced way (never mind that I didn’t understand the things he was talking about half the time, what mattered to me was that he sounded oh so nuanced!), but who was routinely criticized for being a “flip-flopper”, which sounded an awful lot to me like a disparaging term for “being able to see two sides of an issue”.  On the Republican side, we had a candidate who seemed to gain appeal by stating everything in as simplistic a way as possible, whose definition of “strong leader” revolved around not questioning the course we were on, and whose overriding concern in the face of criticism was apparently “not sending mixed messages to our troops”.  As someone who wasn’t exactly terribly knowledgeable about many of the object-level issues being discussed, it seemed to me like the debates were really a contest between a philosophy of questioning for the purpose of self-correction and a philosophy of maintaining strong convictions for the sake of having strong convictions.

There was a particular moment in the second debate which encapsulated this for me, in which Senator Kerry was explaining why he voted against some pro-life-based laws not because he disagreed with the general stances motivating them but because they lacked certain provisions which he thought were necessary.  He ended by saying, “It’s never quite as simple as the president wants you to believe.”  President Bush’s response says it all:

It’s pretty simple when they say, “Are you for a ban on partial birth abortion?  Yes or no?”  And he was given a chance to vote.  And he voted no.  And that’s just the way it is, that’s the vote.  It came right up, it’s clear for everybody to see.  And as I said, you can run but you can’t hide.  It’s the reality.
This is why any account of my personal journey towards today’s flavor of online rationalism is incomplete without discussing how I was shaped by the 2004 election.

When the results of the contest came in, I was bitterly disappointed along with many others.  But I felt like one of the only ones who was disappointed not only because Bush won, but because Kerry, who had felt to me like a voice of genuine reason, lost.  And after that, I guess I sort of made peace with the fact that I felt unable to hold terribly strong or specific convictions on many political issues that weren’t social.  I had a firm feeling about what mattered the most: I was in favor of politicians who operated on open-mindedness, skepticism, and above all, humility and the ability to self-correct.  And the Democratic party seemed to take stances that better encapsulated that attitude and to house more politicians who had that quality.

For the record, I’ve since grown less naïve about Kerry: while I still believe that he was generally sincere and held consistent beliefs, it’s clear to me that he was shrewd about pandering to different groups of people.  However, I hold that Bush, his administration, and the election of 2004 marked the pinnacle of blatant anti-intellectualism in the US during my lifetime.  (Obviously we’ve just started down a new path and I’m not sure what I’ll be calling this trend in another 12 years, but as Trumpism doesn’t seem to have much of a direct relationship to intellectualism, or intelligence, or any form of coherent thought for that matter, it’s hard for me to brand it as “anti-intellectualism”.)

III. A collection of my (non-)convictions

I guess the update I’ll start with is to say that I no longer see the Left or the Democratic party as a paragon of rationalistic ideology in today’s American political scene.  In fact, I’m constantly frustrated by the extent to which left-wing rhetoric seems to be based on unreasoned emotions and aversion to self-correction.  To fully explain this point of view would require another, much longer post, but if you’re reading this, then there’s a good chance you’re not far away (in some measure of internet-distance) from blogs which delve into the flaws of today’s liberal discourse all the time.

I still feel woefully un-savvy about political goings-on and all sides of complex issues, but I do follow a particular set of heuristics which lead me to certain (still fairly left-wing) political leanings.  Below is my attempt to summarize a few of them.

First off, I knew I eventually had to link to my post on free will / determinism, with my contention that leaning towards free-will explanations versus deterministic ones corresponds in a rough way to conservative versus liberal attitudes.  I suppose it’s important to mention here that my instinct from the moment I was first exposed to the free will debate was towards determinism; this feels related to my tendencies both towards “scientism” and towards empathy.  I soon realized that the sort of determinism I favored was compatibilism, which doesn’t really contradict anybody’s concrete everyday intuition about either free will or determinism.  And yet, in concrete, everyday situations, I do feel like I lean more towards deterministic interpretations of behavior than the average person does.  This has led me to the left-wing view on many things.

Meanwhile, I have also always been somewhat of a utilitarian by instinct and have trouble interpreting ethical dilemmas using any other language.  Therefore, I take issue on a fundamental philosophical level with axiomatic-looking notions like “fairness”, “desert”, and “natural rights”, even while they are useful terms on a practical level.

I therefore strongly believe that punishment should only be used for the purpose of deterrence, not retribution.  When I was younger, I favored the death penalty for reasons of practicality; since then I’ve turned against it mainly because it seems barbaric, in practice not as humane as it should be in theory, prone to error, and rooted in a desire for retribution.  I am in principle willing for certain drugs to be “illegal” in some sense of the term because it’s easy to demonstrate that they do great harm, but I’m completely opposed to harsh prison sentences for drug offenders, as this seems absolutely counterproductive to minimizing harm.  I’ve grown quite cynical about the prison system in general and would much prefer some form of mandatory rehabilitation for certain types of “crimes”.

Foreign affairs is my area of greatest ignorance (I’m truly an instance of the American stereotype of knowing a lot about my own country but little about what’s going on in the rest of the world — even recently moving abroad has not improved this much), but I have some heuristic convictions nonetheless.  I believe that the US should strive to do as much good as possible for the world (and “the world” includes America), but that we are far better able to judge and manage and micromanage what goes on within our own borders than what happens in societies far away with very foreign cultures and political situations.  It follows that interfering in conflicts taking place within other countries risks creating an even bigger mess, and possibly a permanent occupation, and should be approached with great caution even when there are potential major benefits to global well-being.  Probably the best type of scenario for the US to get involved in is one where there is some united oppressed group far away without the necessary resources to overthrow their oppressors.  I’m not on principle against the US throwing its considerable strength towards solving what we conscientiously consider to be great atrocities abroad.  But I don’t like the idea of America acting as the world’s police force simply because of our great military power, for the same reason that I dislike unfettered monarchy or dictatorship (what happens when the well-intentioned party with overwhelming power is wrong?).

I’m inclined to oppose any ruthless and inhumane actions undertaken in the context of war or for reasons of “keeping America safe”, even though dispassionate utilitarianism does compel me to concede in theory that despicable actions towards a few which seem guaranteed to prevent the deaths of many may be justified.  Conveniently, however, harsh measures such as torture have apparently been shown to not be particularly effective.  Moreover, it is of extreme importance to consider how the rest of the world may react to ruthless practices on the part of the American military and how this may serve to further escalate conflict rather than make the world safer.  (In general, emphasis on Theory of Mind and considering how one’s actions will affect other parties’ perceptions is a big part of what guides me both in political attitudes and elsewhere.)

I still hold the process of and institution of science in highest regard when it comes to determining empirical facts, and therefore assume by default the truth of what the scientific community says regarding issues like evolution and climate change (although I’ve become a little cynical about social sciences as of late).

I continue to vehemently reject social attitudes based on conservative religious convictions such as opposition to same-sex marriage, stem-cell research, or euthanasia.  However, one “meta” level up, I don’t have a problem with the fact that some politicians are trying to legislate based on their religious convictions: everyone ought to base their stances on personal moral convictions, and these are based on religious belief for many individuals.  As long as politicians aren’t trying to justify their religiously motivated proposals with claims like “America is a Christian nation”, I don’t consider their proposals to violate the First Amendment or “separation of Church and State”.

In the arena of fiscal policy, I’m still looking to maximize well-being for the greatest number of people.  It’s clear to me that this doesn’t scale linearly with wealth, and so at least on naïve principle I’m in favor of creaming a bit off the top of the highest incomes to give to the poor or to programs which benefit the poor.  However, in the actual world it’s very plausible to me that policies which aim to bring this about may weaken the economy so that everyone is worse off.  My lack of expertise in macroeconomics is hurting me here: I’m not sure to what extent pumping money into the working and lower-middle class (who are likely to spend it all) would benefit the economy versus to what extent this is accomplished through benefits for big businesses.  My inclination for the time being is to make sure that all full-time workers make enough to live on practically (exactly how much is a nontrivial question, of course), although the alternative idea of a universal basic income interests me very much.  While I can see the attraction of libertarianism as an abstract theory and could even see myself taking libertarian stances on many issues, I utterly reject two of the arguments I most often hear for it: “poor people would become richer if they just worked harder” and its neighboring attitudes (see my deterministic inclinations above); and “Taxation is theft!” and similar statements which seem to assume some primal notion of ownership rather than regarding it as an abstract phenomenon contingent on an existing State.

There are many more hotly-debated areas of policy on which I have at least some tentative opinion, but these were the main ones I thought to put down in writing at this moment.  Some of them could of course change tomorrow.

Oh, and yes, our mechanisms for self-correction are still of utmost importance in my eyes.  This of course is encoded in our First Amendment protecting free speech, and although I believe that both the Left and the Right have invoked it inappropriately at times, I take very seriously any genuine offense against its spirit.  Let’s move towards a norm of listening to each other and compromising, or when necessary going with majority opinion, in order to work together in an effort to make progress with our policies… but always with the open-minded awareness that we could be wrong.

A Principle of Empathy

[Content note: Donald Trump and the election (not the main focus).  Enough said.]

The Principle of Charity is an idea that seems to be touted fairly regularly by members of the rationalist community. Scott Alexander is especially well known as an advocate of it and even devoted the first post on his now very popular blog Slate Star Codex to declaring the Principle of Charity as the ethos of the new blog.  It more or less says that in examining another person’s viewpoint, one should strive for the strongest, most reasonable possible interpretation of their argument, in particular not assuming that they’re being stupid or completely irrational.  I’ve seen related terms used a little more loosely (“I don’t think you’re interpreting her words very charitably”) so as not to apply strictly to intellectual debating scenarios.  The general idea is closely related to the practice of steelmanning.

When I first discovered the internet rationalist community and looked up what the Principle of Charity was, I took it as further confirmation that I had found “my people”.  I recognized it as not only an argumentative tactic I fervently believed in, but as somehow a core part of who I was and a personal characteristic that guided me in my interactions with people.  Today I want to explore a little more closely how the principle speaks to me so strongly, as well as how I might revise it to something which reflects my temperament even better.  In doing so, I may in fact be treating a rather broad strawman of the Principle of Charity rather than the bare essence of the thing itself, but I feel somewhat justified in doing this as our principles often become a little broad and strawman-like when we actually put them into practice.

I. Understanding my charitable instincts

And you overlook Dumbledore’s greatest weakness: he has to believe the best of people.

– Severus Snape, in Harry Potter and the Half-Blood Prince, by J. K. Rowling

Those who know me in real life (which presumably isn’t anyone who is reading this, although who knows) find me a bit frustrating from time to time because of my way of argumentatively defending others who have committed offenses.  I say things like “they were probably just trying to Y” or “I’m sure they didn’t mean anything as bad as Z” or “I agree that doing X was wrong, but it’s really difficult for them because of V and W”.  I get told on a regular basis that I have a strong tendency, to a fault, towards “giving everyone the benefit of the doubt” or “seeing / assuming the best in everyone”.  This is perceived as extreme enough to qualify as a fault because it leads to me being easily manipulated / pushed around… as well as for the oftentimes more immediate and obvious reason that it causes me to argue with my friends on behalf of third parties who have committed offenses and clearly don’t deserve to be defended.

I’m not sure exactly what to say about this component of my personality, except that by and large I haven’t tried to change it because I continue to believe that generous assessments of other people’s behavior have been proven correct on average throughout my life’s experience interacting with humans.  (To be fair, maybe this belief depends entirely on further assessments of other people’s behavior which continue to be too generous.)  Sometimes I overestimate the good intentions behind people’s actions, and sometimes I am too credulous of narratives being related to me, and that has led me into some toxic situations.  I really don’t know exactly how best to calibrate my good-intent-ometer in such a way that I avoid being taken advantage of while continuing to model reasonably correct views of the world.  To explore that in writing would require a whole other blog entry falling into more of a “self-therapy” category.

But clearly, the fact that I tend to assume the best of people, and that I believe that such assumptions on average turn out to be accurate while holding that villainizing others tends to be destructive both for good debate and personal conflict-resolution, has led me to find the Principle of Charity a pretty attractive idea.

However, when listening to feedback given to me over the course of my life on this personal feature of mine, perhaps what strikes me the most is the nature of those which take the form of compliments.  People tell me that I’m “nice”.  Part of this I’m sure alludes to my tendency towards politeness to other people’s faces and even behind their backs, but a lot of it seems to come from an impression that I “see the best in everyone”, which sounds roughly equivalent to “believing everyone is good” or “holding unusually high opinions of everyone”.

I’m really intrigued by this because I think it’s a fundamentally mistaken impression of the way I am.  I don’t hold the other human beings in my life in particularly high esteem.  I like a lot of those around me a lot of the time, and yet there are some days and even whole weeks when I feel incessantly irritated with everyone and with humankind in general.  (Granted, I keep most of these thoughts to myself, as I’m very confrontation-averse and go out of my way to avoid any kind of drama.  Maybe that qualifies as “niceness” or maybe it’s just cowardice; you tell me.)  As far as I know, these occasional misanthropic moods are nothing abnormal, and I wouldn’t say that I hold the other human beings in my life in particularly low esteem either.  Taking the mean over my opinions of everyone I interact with, I estimate that the height of my opinion is not much greater or less than that of most anybody else.  What’s different is the variance: where most people perhaps think very well of some and very badly of others, my opinions of almost everyone fall somewhere in the middle.  I don’t mean that I go around saying, “Meh, I feel the same so-so feeling towards everyone”; I feel very fond of a lot of people close to me but in my more pensive moments view them as creatures shaped by genetics and environment which happen to have put them in a position of positive impact on my life.  I tend to concoct excuses and/or unpleasant circumstances for the bad things that unsavory people do, but I also tend to concoct selfish motives and/or fortunate circumstances behind the good things that highly respectable people do.

Why do I process personal events this way?  Maybe I just have a strong tendency towards deterministic explanations for everything.  Maybe my reason for leaning towards deterministic explanations is that I badly want to understand what makes other people tick, and assuming libertarian free will amounts to throwing up my hands in the face of the mystery of why others act as they do.  Maybe this is related to the major importance I place on Theory of Mind — I wanted to attach a link to the phrase “Theory of Mind” there, but I haven’t written that post yet; for now, this article provides an introduction.

But I’ve come to realize that although my habit of charitably interpreting the motives behind a lot of questionable actions might be described as applying a Principle of, well, Charity, that doesn’t work as a unified explanation of my full mindset in dealing with other people.  I’ve become aware that my first priority is not necessarily to be charitable or sympathetic, or to assume the best, or to give everyone the benefit of the doubt all of the time; it’s to understand.  This makes some objective logical sense: after all, if one’s ultimate goal is to know the truth, then full understanding rather than bias towards believing positive things seems like the way to go.  And so even though the celebrated Principle of Charity is obviously something I’m generally in favor of, it may not most closely reflect my personal creed.

II. The best of people and the worst of people

One of the difficulties in applying the Principle of Charity all the time — and again, this isn’t exactly a rebuttal against the original notion so much as a doubt I’m raising about the general mindset that comes with it — is that it can sometimes be tricky in practice to fully apply it to multiple sides of an issue at one time.

Suppose you are a relationship counselor and Alex and Beth are in your office explaining each of their sides of a conflict which threatens to destroy their relationship.  Alex is very angry with Beth for having cheated on him.  Beth explains that to some extent they had always had an open relationship.  Alex disputes Beth’s interpretation of exactly what kind of “openness” they had actually agreed to in the relationship.  Beth disputes Alex’s interpretation of this as well as of what degree her behavior constituted “cheating”.  There is some disagreement on concrete physical events and exactly what was said or done when, but more of the disagreement is over interpretations of things that had been “understood” between Alex and Beth.  Your job here, inasmuch as it involves directly resolving the conflict rather than just facilitating better communication between your clients, is tricky.  Applying charity by assuming the most reasonable possible motives behind each person’s point of view seems like a good idea and may be sufficient to fully resolve the problem.  But depending on the circumstances, it may ultimately lead to contradictions: maybe the more charitable you are in interpreting Alex’s words, the more uncharitable you are forced to be towards Beth, and vice versa.  Maybe adopting a model of one (or even both) of them as just a manipulative jerk ultimately fits the evidence better than being as charitable as you can to both of them just up to the point of reaching a complete impasse.

That illustration was kind of vague and maybe not even that realistic, so let’s move from hypothetical personal situations to actual political ones.  For as long as I’ve been following politics, I’ve forcibly avoided demonizing politicians.  Yes, they generally don’t come across as the best of people, but maybe one really has to act with some level of dishonesty in order to make a difference through the political process.  If a politician stood on a platform I strongly disagreed with, I assumed they just held different values, or prioritized them differently, from me, or interpreted facts differently from the way I did (or had access to different sets of facts), rather than assuming that their stance was based on malice.  I figured that if only everyone treated these figures as charitably as I did, then our political discourse would become far more productive.

Then along came a certain non-politician political candidate whose apparent moral bankruptcy evaded all of my early attempts to apply charity.  That man is now the president-elect of the United States.

(I’d like to mention here that I had the beginning of a draft of this essay sitting in my WordPress account, bearing the current title, before I even started writing my recent post on the rationality of voting and therefore well before the election.  I was already planning to bring up Donald Trump.  Then, with the election rapidly approaching, I decided to hurry up and write the essay about voting in time to publish it before the big day.  I figured I would finish this post next and apologize for bringing up Donald Trump, since obviously everyone would be sick of hearing about him following Hillary Clinton’s victory.  But the election didn’t quite go as I foresaw, and we’re all going to be constantly hearing about Donald Trump for a long time to come whether we want to or not, so what the heck.)

Anyway, as the long campaign season unfolded, I found myself less and less able to excuse Mr. Trump’s outlandish remarks, even though my initial instinct had always been to treat him with just as much charity as I had always given to every other candidate.  I had to ask myself, if I had no particular bias against him, why did I appear to be treating him differently from almost everyone else?  And then I realized that it wasn’t really charity that I had been employing to evaluate other political candidates: it was a determination to understand them as completely as possible.  And with Mr. Trump, I had been embarking on the same quest: I wanted to see the inner workings of his mind and exactly what made him speak and act in the ways that he did.  And the model that began to form was that of an ignoramus who held no serious convictions on anything except for his own desire to seek glory through general bullying behavior while feeling vindicated by every success along the way, however absurd.  Now under this model, certain uncharitable interpretations became inescapable for me.  When he made a quip about what those second-amendment people might do if Clinton became president, was he really just joking about how that crowd is just really strong and determined when it comes to fighting for their second-amendment rights?  Could he really have been innocently confused due to a bad earpiece when asked how he felt about David Duke’s support of him?  Did he really mean [insert a dozen other things here]?  Come on.

If I continued to apply charity by accepting every single one of Trump’s explanations for every reprehensible thing he said, it would somehow feel like a violation of common sense.  And eventually it might lead to much dicier issues.  I’m not saying that charity towards Donald Trump necessarily directly implies anti-charity elsewhere, but it does kind of seem to go hand-in-hand with uncharitable interpretations of his detractors’ criticisms of his words and actions.  Scott Alexander made some good points in his recent Slate Star Codex post following Trump’s victory, but a lot of it struck me as an effort to bend over backwards to take the most charitable possible attitude towards our president-elect, which ironically resulted in rather uncharitable interpretations of some major anti-Trump talking points.

Note that today I don’t care to actually analyze and defend my beliefs on any of these features of our recent election and its aftermath — to do so would require another post of its own, longer than this one.  The reader is free to disagree with me completely, but I ask them to nonetheless accept my reality regarding Trump as a hypothetical situation which illustrates something about the limits of the Principle of Charity.  A lot of what I took for an instinct to be charitable was actually an instinct to be empathetic, and while a lot of the time that results in positive assessments of people, or at least excuse-making, sometimes it results in my realization that their motivations are actually reprehensible and that they don’t deserve excuses.  Charity is always beneficial to the object (while potentially to the detriment of other parties involved in the same debate), but empathy can cut both ways by exposing the best of people and the worst of people.

III. The risks and rewards of empathizing

I propose that we reform our Principle of Charity into a Principle of Empathy.  This Principle of Empathy is not a repudiation of the old Principle of Charity, but rather an evolution of it, one which will lead us closer both to objective truth and to the most understanding possible society.  And given recent events which threaten to polarize our discourse even further, I believe that the goal of striving to be empathetic will be, if anything, more difficult but also more crucial than ever going forward.

I don’t claim that being highly empathetic on a personal level is without its risks.  I have reason to imagine that I operate on incredibly high levels of empathy, perhaps abnormally intense levels.  I’ve noticed that this is often not only to my detriment but to the detriment of those around me.  For instance, if the suffering of someone close to me is too much for me to handle so that I feel forced to shut them out, then I’m really not being as good a companion to them as if I provided support while managing to remain stronger and less affected by their adversity than they are.

I also see risks in publicly defending others through empathetic reasoning, which is one reason why thus far I’ve generally stuck to empathizing with them in my own mind or behind their backs.  It can become very delicate to stand up for someone on the basis of what you perceive to go on in their minds, both their strengths and their weaknesses, without coming across as a totally condescending prick.  Compare an attack of “What Bob did is completely inexcusable because of A, B, and C” to a defense that sounds like “What Bob did was wrong, but I can understand how he did it given that he’s been through X and Y and this appears to have resulted in him lacking the emotional strength to face up to Z.  Even though the perfectly rational decision would have been W, it was evidently really hard under the circumstances for him to be rational and so he made the wrong choice.  Please show him some forgiveness.”  I imagine that the Bob here might actually feel more angry and hurt by the defense than by the attack.  (Or if one is using the flip side of empathy to instead condemn Bob for sinister motives, he would probably be angered more by this type of condemnation than by an argument based in the external fact of his action having been wrong: “How dare you assume that you know me and the way I think and feel!”)

And yet, I see both of the issues described above as ones of execution only.  For the former, I have to learn how to feel empathy in the most productive way possible; for the latter, one has to gain the skill of producing diction that conveys a tone of genuine solidarity rather than condescension.  My viewpoint in theory remains unyielding: it is the duty of each of us to go forth and empathize!

Speculations of my inner gadfly

[Content note: This is something I’ve been thinking about which feels somewhat clearer in my mind than it comes out in writing.  However, I’m already having doubts about how the connection to superweapons works.  Mentions of several sensitive issues for examples, included in tags.]

A common criticism from those who have known me for long enough is that I’m too gullible.  Sometimes this is meant in the basic sense of believing false things (especially when I was younger), but also sometimes in the sense that I come across as much too immediately accepting of whatever broad narrative is pitched to me in defense of a particular view.  Enough independent people from different parts of my life have expressed concern about this that it’s only logical for me to conclude that the criticism is probably valid on some level.  At this point in my life, it’s more a matter of in which sense it is valid, what underlies this tendency, and which aspects of it are helping me as opposed to hurting me.

There’s more than one issue at play here, but here I want to focus on one particular type of fallacy which I consider to be a major problem with a lot of the discourse I see, and which I’m trying to guard against when I react to claims in a way that makes me look too credulous.  This problem in the world of discourse can be summed up by saying that we’re not welcoming enough to gadflies.

I. Socrates the Gadfly

I am not particularly knowledgeable with regard to ancient Greek philosophers, but I am familiar with Socrates’ characterization of himself as a “gadfly of the Athenian people”.  What he meant, as I understand it, is that his intellectual function in his society was to articulate skepticism and raise nagging doubts in the face of commonly-held assumptions.  In other words, he aimed to be what is more commonly called a “devil’s advocate”.  According to him, gadflies are understood to create discomfort and to generally be annoying, but they should be welcomed.  Apparently in trying to defend himself from the death penalty, he claimed, perhaps arrogantly, that he was the only gadfly in the area, and that they would be unwise to get rid of him as gadflies are essential to the health of society.

This assertion has been made many times and articulated in many ways since Socrates.  It encapsulates a general idea that is seen most prominently in the philosophy of science, as well as within the deeply-held values at the heart of modern democracies, skeptic/rationalist culture, and academic culture in general.  In any intellectual pursuit, thinking critically and challenging assumptions is key.  I don’t want to write about this very broad notion which has been discussed constantly for centuries.  When I say, “We just don’t welcome enough gadflies”, I’m not trying to proclaim a vague platitude like “We don’t think critically enough!”  In particular, the use of “gadfly” is not meant as a metaphor for challenging authority, or the exercise of skepticism within the scientific process.  (Indeed, I don’t see any sense in claiming things like “We should be more skeptical when doing science”.  The scientific mindset, as Carl Sagan put it, consists of “a marriage of skepticism and wonder”, and in fact my comment about gadflies could be construed equally well to mean that we need more wonder (i.e. open-mindedness) when doing science.  Skepticism and wonder are arguably two sides of the same coin.)

The gadfly behavior I’m advocating today is a more specific thing, which I find easier to describe in the negative: when considering a particular decision or situation, don’t automatically dismiss any of the relevant possibilities that come to mind, even (especially!) if they make you feel uncomfortable.

Let me clarify what I mean by “relevant possibilities” above by use of an example from an earlier post.  Suppose that you arrive at a colleague’s office at an agreed-upon time for a meeting with them to prepare for an upcoming deadline, but they never show up.  Now let’s say that one of your major pet peeves with the world is the way most people around you seem to be disorganized, and that this has really been adding to your stress lately as you rely on a lot of other people.  To make matters worse, although you know you can probably reschedule for late tomorrow afternoon (both you and that colleague often stay after normal hours), tomorrow is your kid’s birthday and definitely not a day you want to come home late.  So naturally, your immediate reaction is to feel really angry.

There are many possible causes for your colleague’s absence, a few of which were discussed in the other post: they might have decided not to bother; they might have simply forgotten; they might have a drug problem (entirely unknown to you) which indirectly resulted in their not being able to make it; or they might have gotten into some kind of accident on the way to their office.  Chances are that the first two possibilities above are the most obvious explanations and are the first to leap into your mind — unsurprisingly, these ideas do nothing to abate your anger.  It is unlikely, especially given the narrative you’ve developed about everyone else being disorganized, that either of the last two possibilities will occur to you quickly if at all.  And yet those explanations, while perhaps not particularly likely, are still perfectly plausible.  Maybe your colleague has always had the appearance of being totally together, but is actually struggling with some sort of addiction, or perhaps suffering from a mental illness which has not been apparent to you.  And people do get into serious accidents and have emergencies from time to time.  And so, before acting on your newfound resentment towards your colleague, you should at least consider these possibilities — these are the “relevant possibilities” I referred to above.  I’m not saying they should be deemed likely, but that they should occur to you, and be objectively considered.

This is not a matter of considering every possibility under the sun and weighing them all equally.  Under most circumstances it seems to be a much more common occurrence for people to be careless or forgetful than for them to have some much more serious reason to not show up for something.  However, in the long run, I believe it pays off to at least allow them to enter your consciousness.

(By the way, let’s say that your colleague did fail to meet you out of forgetfulness, caused in part by the fact that they never saw the meeting as particularly important.  They get that nobody likes waiting around for someone who never shows up, but sincerely don’t understand why you would be this upset about it.  After all, you can just both stay late tomorrow, as you often do, and deal with everything then in time not to miss any deadlines.  It just doesn’t occur to them that there might be a particular reason why you don’t want to be at work late tomorrow.  Maybe they, like you, should make more of a habit of considering more possibilities, especially those which lead to conclusions they don’t want to believe.)

These annoying ideas that we should try our best to come up with, particularly the ones which threaten the narratives we’re comfortable believing, are what I call “gadfly speculations”.  They are not fun to have around, but it’s bad for our intellectual health not to let a few of them swarm our conscious minds and nip at our deliberations on a regular basis.

I want to be clear before going any further that when I say, “Be welcoming to gadflies”, all I’m talking about here is the skill of knowing how to let these speculations fly into one’s head in the first place, NOT how to weigh them once they’re present!  Gadfly speculations are what should happen during a mini brainstorming session.  They are funny-looking blobs to be thrown at a wall regardless of whether in the moment they seem likely to stick.  They are ideas which may seem quite improbable, but which should occupy a spot on one’s mental whiteboard.  Later on, of course, they need to be evaluated on their merits.  Pretty much everyone understands in principle the idea of coming up with a bunch of ideas and then evaluating them to choose the best (or most probable) one, but I have a feeling that a lot of us don’t pay enough attention to gathering a sufficiently varied collection of ideas in the first place.

That is what I’m trying to stress here.  In order to weigh possibilities to arrive at the most rational conclusion, we need to reach the first step of being able to see a healthy variety of possibilities on the table in front of us.  Why do we so often fail at this?  Our intellects tend to be lazy, and we naturally want the first step of any decision-making process to be easier.  One obvious way to make it easier is to give ourselves fewer things to choose from.

Now there’s nothing deep in arguing that we should be careful to entertain enough gadfly speculations.  It’s basically a variant of guarding against “lack of imagination” and more or less standard Biases 101 stuff.  I just want to call attention to how this very unsurprising human tendency plays into some more interesting rhetorical trends.  Or, in the likely event that these connections seem similarly obvious, I’d at least like to get this point of view down in writing so that I can easily refer to it later.

(I’ve always enjoyed the gadfly metaphor.  I remember distinctly that back when I was in college and for the first time very interested in starting a blog, I kept trying to think of a name which referred to gadflies.  I wouldn’t be surprised if the word “speculation” didn’t show up in some of these names too, since I’ve always seen myself just suggesting things in blog posts rather than trying to meticulously argue anything.  But at the time, the only name I could come up with that I was reasonably happy with was “Hawks and Handsaws”, and obviously I managed no better many years later when it came to naming this blog.)

II. The building of superweapons

In several posts, most notably these two (see also this), Scott Alexander (who runs Slate Star Codex) expounds upon a rhetorical phenomenon which he calls “superweapons”.  Here is the essential passage from the first linked post:

Suppose you were a Jew in old-timey Eastern Europe. The big news story is about a Jewish man who killed a Christian child. As far as you can tell the story is true. It’s just disappointing that everyone who tells it is describing it as “A Jew killed a Christian kid today”. You don’t want to make a big deal over this, because no one is saying anything objectionable like “And so all Jews are evil”. Besides you’d hate to inject identity politics into this obvious tragedy. It just sort of makes you uncomfortable.

The next day you hear that the local priest is giving a sermon on how the Jews killed Christ. This statement seems historically plausible, and it’s part of the Christian religion, and no one is implying it says anything about the Jews today. You’d hate to be the guy who barges in and tries to tell the Christians what Biblical facts they can and can’t include in their sermons just because they offend you. It would make you an annoying busybody. So again you just get uncomfortable.

The next day you hear people complain about the greedy Jewish bankers who are ruining the world economy. And really a disproportionate number of bankers are Jewish, and bankers really do seem to be the source of a lot of economic problems. It seems kind of pedantic to interrupt every conversation with “But also some bankers are Christian, or Muslim, and even though a disproportionate number of bankers are Jewish that doesn’t mean the Jewish bankers are disproportionately active in ruining the world economy compared to their numbers.” So again you stay uncomfortable.

Then the next day you hear people complain about Israeli atrocities in Palestine, which is of course terribly anachronistic if you’re in old-timey Eastern Europe but let’s roll with it. You understand that the Israelis really do commit some terrible acts. On the other hand, when people start talking about “Jewish atrocities” and “the need to protect Gentiles from Jewish rapacity” and “laws to stop all this horrible stuff the Jews are doing”, you just feel worried, even though you personally are not doing any horrible stuff and maybe they even have good reasons for phrasing it that way.

Then the next day you get in a business dispute with your neighbor. If it’s typical of the sort of thing that happened in this era, you loaned him some money and he doesn’t feel like paying you back. He tells you you’d better just give up, admit he is in the right, and apologize to him – because if the conflict escalated everyone would take his side because he is a Christian and you are a Jew. And everyone knows that Jews victimize Christians and are basically child-murdering Christ-killing economy-ruining atrocity-committing scum.

He has a point – not about the scum, but about that everyone would take his side. Like the Russians in the missile defense example above, you have allowed your opponents to build a superweapon. Only this time it is a conceptual superweapon rather than a physical one. The superweapon is the memeplex in which Jews are always in the wrong. It’s a set of pattern-matching templates, cliches, and applause lights.

The posts linked to above mainly focus on certain trends in the feminist movement, but Alexander uses a number of other examples, and I believe that the concept of “superweapon” can be applied to argumentative tactics regarding a wide variety of issues.  When I first read about superweapons from him, I had mixed feelings.  On the one hand, I was thrilled that he had brilliantly articulated a major issue I’d had with a lot of discourse on a lot of topics.  Before reading his essays, the only ways I’d come up with for referring to it required clumsy uses of the word “dogma” — superweapons are, after all, a means of discouraging critical questioning.  On the other hand, I was somewhat dissatisfied with relying on a concept handle for a complex rhetorical behavior, bolstered only by intuitive appeals to its potential to be dangerous.  Maybe it’s the mathematician in me, but I would prefer to break these ideas apart until they are decomposed into atoms in the world of logical fallacies.  Since then, I’ve seen the great effect of the approach taken by rationalists like Alexander, as well as Eliezer Yudkowsky, who has an extremely analytical mind and yet manages to convey many of his messages very clearly by using invented terminology to stand in for complex ideas.  Plus, on attempting to decompose these concept handles into more basic parts, I’ve realized that it’s really hard and I’m not able to get very far.  So I’m content to live with them for now.

Still, I think I can begin the process of disassembling superweapons by describing them as being made of gadfly repellents.

I should say, each superweapon is made of a particular cocktail of repellents which wards off large classes of gadfly speculations (while still allowing through a few which are consistent with the narrative the superweapon’s engineer is trying to push).  Think about it: a superweapon’s true source of power is simply its ability to shut down certain lines of argumentation.

For instance, take the example in the quoted passage above about the Jew in old-timey Eastern Europe.  The situation is presented as a culture dominated by anti-Semitism gradually constructing a memeplex whereby Jews are always viewed as being at the root of various societal ills: child-killing, bad economy, etc.  But the flip side of this positive reinforcement (which is not explicitly mentioned above but is readily apparent in many real-life examples of superweapons) is an intolerance towards any idea that poses a threat to this narrative.  And in fact, no matter how well that Eastern European society manages to reinforce those negative stereotypes about Jews, its assembled superweapon will be seriously lacking in power as long as any skeptical gadflies are buzzing around.  When the main character of the story is accused of trying to steal money from his Christian neighbor, a spectator might open their mind to the gadfly speculation “Well, I know there’s a pattern of Jews being greedy, but I suppose it might be possible that this particular Jew was owed a debt…”  The superweapon has to shut this down immediately.  In fact, in examples like this one, the superweapon has effectively shut down the thought before it’s even properly formed, by hammering an anti-Semitic narrative into everyone’s heads so hard that such contrary notions don’t occur to anybody.  In the unlikely event that someone forms the dangerous thought anyway, I imagine that in the presence of a sufficiently strong superweapon, it would be immediately met with, “Come on, when have you ever heard of a Jew being willing to help one of us Christians?  Don’t they want to kill our children?”

There are many ways to view the superweapon concept, but I hold that when viewed from one particular angle, superweapons are just anti-gadfly machines.  They suppress most gadfly speculations from forming, or they immediately quash the ones that do form.  I’ve been trying to avoid alluding to real-life modern controversial topics, but in case I need to be convincing about the quashing aspect, consider the following commonly-expressed “arguments” used to immediately kill gadfly thoughts (more often implied than directly said out loud): “More guns = more violence, so how can an open-carry law possibly make anyone safer?”, “More sex education = more sex, so how can the availability of birth control possibly reduce unwanted pregnancy rates?”, “Drugs cause harm, so how could legalizing them possibly do any good?”, “How could anyone possibly lie about being abused?”, etc.

III. Gadflies and partial narratives

A couple of posts ago, I explored the question of how to make ethical judgments of what I called “multivariate situations” — that is, scenarios where something happens as the effect of decisions made by two or more independent agents.  I suggested (in that post and more vaguely elsewhere) that if Mr. X and Ms. W each act on independent decisions which jointly result in some disaster, then oftentimes, Mr. X’s first instinct will be to put all the blame on Ms. W — after all, if she had made a different choice, disaster would have been averted!  (Of course, Ms. W is likely to similarly blame Mr. X; the contradiction in these symmetric reactions is by itself an argument against this kneejerk behavior.)  I claim now that a key part of the subconscious strategy Mr. X uses to leap to an assumption of Ms. W’s guilt is quickly shutting down the part of his mind that starts to consider the idea that he could have done something differently.  The most basic shape this takes is the blanket subconscious assumption that other people always have free will while his own actions in this case were determined.

This looks to me as though Mr. X is adept at warding off certain gadfly speculations.  “If she’d looked where she was going, we wouldn’t have crashed!”  “Hmm well, maybe, to be fair, if I had stuck to the speed limit, the accident might have been avoi–”  “NO!  Just focus on the fact that if that irresponsible Ms. W hadn’t been driving so inattentively, we wouldn’t have crashed!!”

A recent post on the blog Everything Studies touches on a similar idea.  There the author discusses what he calls “partial narratives”, interpretations of a situation which are very one-sided not in the sense that they’re wrong, but in the sense that they’re incredibly partial: in order to arrive at them, one “takes the derivative of a single variable, discards all other terms and dimensions, and recreates a reality based on the integration of this particular derivative.”  The main example he considers is Ayn Rand’s portrayal of capitalism in Atlas Shrugged, where Rand pushes one partial narrative about capitalism while ignoring all others.

You have “capitalism is when people can trade freely in voluntary agreements and create wealth through their own work and ingenuity” and “capitalism is when the rich can use wealth to assert power over the poor in order to extract surplus wealth from their labor”. They are both partial truths, like a cylinder is a circle from one angle and a square from another. With partial narratives we square the circle, but it remains difficult to keep them both in your head at once.

In order to push the partial truth that capitalism allows people to “create wealth through their own work and ingenuity”, as Rand did in Atlas Shrugged, it is important that no other partial truths regarding capitalism be allowed to take root in the reader’s mind.  This isn’t necessarily accomplished by explicitly dismissing such troublesome speculations as invalid; after all, that would run the risk of introducing us to those “bad” ideas in the first place.  (Disclaimer: I haven’t read any Rand and don’t know exactly what devices she used to express her views there.  Maybe she did spend a little time in Atlas Shrugged explicitly trying to rebut the “capitalism is oppressive” narrative.  But I believe that explicit rebuttal is avoided an awful lot of the time when partial narratives are pushed.)  Possibly the best way to quash ideas that challenge the desired narrative is just to proclaim it as forcefully as possible, so loudly that it drowns out all budding skepticism.  “This is a really nice story about how capitalism can lead to great wealth and personal autonomy, but I can also imagine how some poor people might get really screwed over in this syst–”  “NO!  Capitalism does so much good by giving people the freedom to create wealth through their own work and ingenuity!!”

Swatting away gadflies again.

IV. My overactive inner gadfly

Now what does this have to do with my being too willing to accept any story that’s put in front of me?

Well, some of the behaviors I preach are things that I myself don’t practice enough, and others are things that I probably take too far.  Openness to gadfly speculations is an example of the latter.

Whenever I hear a narrative, however obviously unlikely, there is a part of my mind which says, “Well, it could be that way.”  This goes beyond just granting a non-negligible possibility that the claims presented to me are true; it often involves me coming up with supporting explanations on my own to challenge my instinctive response of “Well, obviously that can’t be true.”  The result is often that I choose to assume the truth of what I’m told pending further deliberation.

It happens from time to time that an acquaintance, particularly one who has noticed how much fun I apparently am to screw around with, tells me some obviously very unlikely personal detail about themselves as a joke.  And I oftentimes initially act like I believe it (nodding slowly and saying, “Okay…”) or at least don’t immediately dismiss what they said as an obvious joke.  On one recent such occasion, I said something like “No way, you’re just messing with me” about three times before finally politely acting as if I believed what my friend was saying… which of course turned out to be the opposite of the truth.  And I think that when I act gullible in this way, it comes across like I’m lacking in critical thinking, like I’ll accept whatever is put in front of me without considering how obviously absurd it is.  But what’s actually going on in my head is in a way almost the opposite: I realize the absurdity of the claim immediately and know right away that the person is most likely joking, but ideas creep in like ominous gadflies, providing half-formed, kinda-sorta plausible explanations for why they just might be telling the truth.  And it occurs to me that if those half-formed explanations actually reflect reality — however minutely low the probability seems at the moment — well then it would be totally rude of me to just dismiss them and automatically disbelieve the person, wouldn’t it?  They’re probably just screwing with me, but I’m not about to take the risk of assuming this when it might turn out that they’re serious.

I guess any epistemic behavior I’d like to see more of in the world, even something like open-mindedness, can be harmful if taken to an extreme.  I believe it was Bertrand Russell who said that one should keep one’s mind open, but not so wide open that one’s brains fall out.  And there is such a thing as having too much imagination.

My interesting capacity for useless knowledge

So… it’s been a while since I posted anything here.  I could lean towards a more libertarian-free-will outlook and chide myself for recently not having put much effort into organizing my ideas and hammering them out into blog posts.  But right now I’m more in the mood for explaining my behavior through “determinism-leaning” excuses.  Like many, I find my willpower for accomplishing certain tasks to be seriously lacking at times, and the mere desire (however strong) to do better seems insufficient for overcoming this.  There are probably many mechanisms at play behind this within the folds of my gray matter, but today I want to write about what feels to me like my main issue, which I’ve intended to make a post about fairly soon in any case.  And while I’m experiencing a bit of writer’s block with regard to the more abstract content I’ve been intending to put here in upcoming posts, it might be nicer to do a lighter, free-form, and more personal post in the meantime.

For me, the verb “to interest” (really, its passive form) holds two very distinct meanings.  I considered writing them as “interest1” and “interest2”, but their respective senses feel better conveyed to me when I call them “interest” and “Interest”.

In the first sense, I become interested in things in the way that I expect that most people become interested in things most of the time.  That is, I consider a certain topic or issue, determine that it has relevance to me, and decide to pay attention to it.  Some random examples that come to mind include economic theory, ecology, and websites which show the best apartment ads so I can move next year.

But then there’s Interest.  While I imagine that other people do have a few Interests in certain topics, I doubt whether there’s usually as much of a stark difference between their feelings of interest and Interest as there is with me.

For as long as I can remember, I’ve been subject to intense passions for certain issues and subjects which don’t seem to come from any sort of rational selection process and which sometimes generate unreasonable levels of obsession.  There is no “good” reason whatsoever for me to be Interested in these things, especially while there are so many more relevant things to be studying.  And I can’t really justify the extent to which I’ll spend time and energy on pursuing the subjects I’m Interested in, when there are so many more “objectively worthwhile” things for me to spend my time on.  But it seems like there’s just no arguing with my brain when it locks itself into these pursuits.  Current examples include linguistics, human age / body records, and American presidential political history.  (I almost said something here like “random examples”, but any set of examples I give is going to sound pretty random.)

I want to make clear that when something Interests me, it really does require plenty of time and energy for me to gain an extensive knowledge of it.  Acquisition and retention don’t become automatic for me just because the topic at hand is Interesting, and I can’t just effortlessly memorize facts about these topics.  It takes work: perhaps many hours of attentive reading and/or memorization through repeated self-drilling.  But somehow when a certain topic becomes an Interest, I suddenly find myself endowed with the willpower to put myself through this kind of work.

The main frustration resulting from this is, of course, that when it comes to apportioning my mental energy, those things which I decided upon rational reflection to be interested in are often very much at odds with those things I’m Interested in.  Here are just a few examples:

  • food / environmental ethics: there are obvious reasons to be interested in this, but any attempts at real learning have so far come to nothing; ideally, I’ll wind up in a serious relationship with someone who’s both knowledgeable about and committed to such causes and willing to lead me
  • serious cooking (related): I want to eat healthier, and I really enjoy eating good food, so why not at least experiment with trying to cook good healthy food?
  • bicycle repair: it would make a lot of sense to learn how to adjust things on my bike so that I don’t periodically have to take it into the store and pay them for labor
  • just generally understanding how my computer / the internet works
  • macroeconomics, foreign policy, world history, all the relevant background for evaluating platforms of our presidential candidates

Now there are some interesting topics, especially those which have a very concrete relevance to my life (e.g. how to find an apartment for next year), which I seem to do well enough at engaging with.  And there are some Interests of mine which do have a significant amount of usefulness (e.g. understanding American presidential history, although this is clearly not the most effective area of study for, say, evaluating the feasibility of Bernie Sanders’ economic proposals).

I think the solution here, as I hinted under the first of the above examples, is to find a person or a group whom I trust to be knowledgeable and reasonable about the topics in which I have interest but not Interest, and follow their lead.  It’s for this reason that organizations like GiveWell are very appealing to me.  It’s always bothered me that I don’t donate anything substantial to charity.  But I’ve always felt a sort of learned helplessness in regard to learning what particular charity organizations actually do with their money, let alone how to calculate the actual utility resulting from their causes.  For some reason, even though utilitarianism-based rationalism more or less dictates that this is the most important subject for me to understand something about, it fails to be Interesting enough.  Now here comes an organization that has done the research for me, and whose findings I have good reason to put my faith in.  All that’s left then is to make a pledge, which, while obviously qualifying as a sacrifice in some sense, at least doesn’t require any intellectual effort that my mind is unwilling to put in.

Of course, it’s not all about learning information with the intention of applying it to something important.  Sometimes I just want to enjoy a particular hobby.  And the saddest situation for me is when something that used to be an Interest somehow loses its capital-I status and gets demoted to an interest.  This happened years ago with my Interest in lucid dreaming, and I’m beginning to be afraid that this is happening to my Interest in conlanging as well, as I can no longer seem to find the willpower to sit down and work on it.  In general, I’ve gradually lost Interests over the years, perhaps because of my increasing intellectual struggle with math research, as well as the fact that I spend much less time wanting to divert myself while listening to lectures I find boring than I did back when I was younger.  Anyway, I always hold out hope that my brain will continue to realign in such a way that old Interests return.

And sometimes Interests that seem to be waning will rally when I kick-start my brain by forcing it to go to a relatively minor amount of effort on their behalf.  Which is more or less what I’m doing by writing this post.  With any luck, it’ll work.