A sensational attraction

[Content note: more of a musing on several disjointed ideas relating to a particular issue than an essay with a unified argument.  For reasons that are about to become apparent, I feel slightly weird about listing all of the possibly troublesome topics appearing in the content — see tags.]

I’ve noticed a general social trend which annoys me.  I guess the best way to summarize that trend is this: any speculation that a person or persons held some belief or engaged in some behavior which is sufficiently interesting (in a certain sense of the word, to a certain part of society) gets accepted by too many people as true on insufficient evidence.  Or maybe it would be less clumsy just to give some examples of what I mean.

One of my very favorite historical celebrities is Charles Dodgson, who under the pen name Lewis Carroll wrote Alice in Wonderland and Alice Through the Looking-Glass (which together occupy a special place in my heart and on my bookshelf), and who also worked as a professional mathematician and an Anglican deacon while dabbling in photography.  It’s been a while since I read a biography of him or any of his non-Alice works, but I remember always getting an overwhelming sense that he had a strikingly similar personality to mine and that if timing and geography had cooperated we could have been really good friends.  The type of very left-brained creativity that he was inclined towards seems much like mine, although of course I don’t exactly see my creative pursuits leading to any ground-breaking literature.  We would probably clash on many philosophical matters — that whole God thing, for instance — but I can imagine us engaging in really pleasant debates on these topics over drinks.

Many people still appreciate Dodgson’s literary masterpiece(s) as well as his contributions to mathematical logic, but many also insist on the truth of some thorny allegations surrounding his life.  To start with, I’ve heard it stated as fact more than once that he produced some of his best work under the influence of mind-altering drugs.  For instance, I once had an English teacher who related that some Dodgson authority (presumably more expert than herself) had informed her that he was “definitely tripping” when he wrote the Alice books.  (I’m not sure what substance he was supposed to be tripping on, as LSD wasn’t invented until well after his time.)  Apparently it would have been impossible to come up with a story as nonsensical and surreal as Alice Through the Looking-Glass without being on something to supplement one’s own raw creativity and wit.  But as far as I know, this is where the “evidence” for the claim ends.

That doesn’t seem so bad — after all, the fact that someone created something marvelous under the influence of drugs doesn’t diminish their merit in my eyes or in those of many others — but there’s a far more sinister and pervasive allegation out there that Dodgson was an active pedophile.  Supposedly his interest in young Alice Liddell, the inspiration for the Alice in the books, didn’t spring entirely from the affectionate feelings of a friendly uncle figure.  This assertion has been made by a significant number of Dodgson biographers, and it has caused quite a controversy in the field.  Now I wouldn’t want my own fondness for Dodgson and his work to bias me against believing anything unsavory about him.  And I don’t feel the need to say goodbye to the cherished art of a person who has thought or done horrible things; for instance, I don’t believe the fact that John Lennon had a habit of beating his first wife takes away from the greatness of his music or the secular humanist values he promoted.  So I tried to look into the Dodgson allegations with an open mind.  And I really couldn’t find anything to convince me that he approached little girls with any inappropriate motives or behavior.  The only evidence cited is that Dodgson used to photograph little girls in the nude, which does indeed look pretty damning until you realize that in Victorian times children were considered thoroughly nonsexual, and photography of naked children was actually pretty common during that period.  It seems to me that Dodgson was basically a gentle, somewhat shy and socially awkward (although that is disputed by some) bachelor who was friendly and relaxed around some child friends.

Yet there seem to be a lot of people who are impossible to sway by invoking the context of Victorian culture, and who steadfastly refuse to doubt that Charles Dodgson was a pedophile.  I even argued with one person who dismissively waved away my points with “Come on, we all know that’s what most of those celibate clerics were into.”

Meanwhile, contemporary with Dodgson were two consecutive United States presidents, James Buchanan and Abraham Lincoln, both of whom a number of scholars have suggested were gay.  It seems perfectly plausible to me that Buchanan, America’s only bachelor president, may have been gay, although the positive evidence is fairly mild, and it’s hard to say with much confidence what the sexual orientation of a politician from that era was.  In the case of Lincoln, I find it somewhat less likely given that he had a fairly decent marriage and fathered several children.  There doesn’t seem to be a whit of evidence for his homosexuality outside of the fact that he shared a (probably large double) bed with several men, most famously his close friend Joshua Speed.  But again, this was not uncommon for the time (indeed I’ve discovered since moving abroad that this kind of thing is still not quite as strange in Europe as it is in the modern American culture I grew up in).  I actually once heard two historians debate this claim on the radio; the one asserting Lincoln’s homosexuality was only able to return again and again to the fact that Lincoln shared a bed with Speed.  In this way, these notions persist.

Or take the frequent allegation that Walt Disney was horribly racist despite otherwise being a kind and inspirational man.  I’ve heard his name casually brought up in conversation as an example of a WWII-era figure whose work was admirable and who seemed like a nice, normal fellow but who treated blacks and/or Jews as inferior.  I’ve looked into this claim, and after some searching, all I’ve really been able to find is that Disney (along with other notable figures such as Ronald Reagan) joined a particular actors’/producers’ group in the 1940s which has since been shown to have harbored some anti-Semitic sentiment.  Evidence that he was a Nazi sympathizer amounts to stray rumors and seems pretty thin on the ground.  Yet people seem to immediately accept the “Disney was a racist!” narrative without actually investigating the origins of the rumor.

So what is going on here?  At first glance, these might appear to be examples of the insistence within the modern internet-based social justice movement on believing any allegation of racism, abuse, etc. in accordance with what some believe to be the best strategy for combating those evils.  But of course this breaks down when one considers the “Lincoln was gay” theory (homosexuality was generally met with societal judgment until relatively recently, but it’s certainly not considered sinful by social liberals today), or the whole “Dodgson was high” thing.  Plus, I don’t think that many of those who are trying to get everyone to realize that racism and sexism persist today are all that concerned with our recognition that some celebrities of the past held abhorrent views and engaged in abusive behavior.  After all, it’s not really so debatable that a lot of values considered horrible today were either the norm or treated with a blind eye in olden times.

No, something else is at the bottom of this, and I’m reminded of Jon Stewart’s description of how he viewed the biases of mainstream media.  He once characterized news media as catering to “sensationalism and laziness” (linked to the full interview because it’s really good, but the relevant part starts around 4:30).  Well, I suggest that maybe the same holds true for the ideas we let through our own filters: we too often have a weakness for sensationalism, for ideas that are provocative in an immediate way that doesn’t require too much thought or reflection.  That is what the above examples all have in common.  Our society as a whole is sort of obsessed with a number of sensational things including racism, drug use, and pretty much anything to do with sex, especially if it’s something taboo or that once was considered taboo.  These speculations each serve as a sort of cheap momentary distraction which entertains us each time we recall it.


When certain types of content begin to attract our sensationalist and lazy urges, I imagine a lot of it has to do with novelty.

I’m pretty sure nobody who remembers what it was like to grow up will disagree with my impression that a lot of kids, from quite young children to preteens and beyond (sometimes pretty far beyond), are kind of obsessed with joking around about anything sexual, drug-related, or violent, the more shocking the better.  Now I don’t know much about child psychology, and I acknowledge there are already several very obvious causes of this — actual desire for sex or interest in drugs, for instance, or the obvious awkwardness and/or smirking from older people when they’re talking about these things — but I don’t think any account of the contributing factors is complete without considering the attraction of novelty.  In my experience, children become prone to talking about these things just when they’re first learning about them.  This is probably due to the shock factor being much stronger when these concepts are newer.  In addition, the impression of understanding these things gives the child a sensation of worldliness.  As kids grow older, the fact that these subjects become less taboo gives them more boldness in bringing them up all the time, while at the same time the subjects are invoked in slightly wittier and less blunt ways as there is no longer so much need to signal that they know the basics.  Eventually the presence of such content in all social conversation (in my opinion) wears out its welcome somewhat — I often consider it kind of immature to go on treating everything to do with sex and drugs as all that interesting and funny (not that I’ve never been guilty of it myself) — but it’s kind of hard to stop fixating on those things when the world around you seems pretty addicted to hearing about them.  Eventually, for some individuals, certain topics become so old that they’re no longer worth invoking gratuitously or responding to appreciatively, but in many cases that stage never seems to be reached.

Society as a whole seems to go through a similar set of stages with regard to certain forms of sensationalist content as it goes from almost completely unknown and/or taboo, to the very basics just beginning to become common knowledge, to becoming fully and widely understood and needing to be discussed nonstop, to lingering on as popular subject material despite having been talked/joked to death, to finally (only in some cases) being retired as rather boring or cliché.  We are more or less at the nonstop-discussion stage, for instance, where pedophilia is concerned, which explains the insistence on the unpleasant Dodgson allegations: we as a group want to show ourselves that we’re worldly enough to understand the prevalence of such things.

I always think of a certain routine by the singer-songwriter/comedian Tom Lehrer which was recorded on one of his live albums as an intro to the song “We Will All Go Together When We Go” (it can be heard here; warning: very dark nuclear-apocalypse-related humor, though admittedly not much more macabre than Lehrer’s usual).  Lehrer is embarking on a ramble having little to do with the song to follow, something about some (probably fictional) eccentric guy he used to know, and he throws in an inconsequential joke: “I particularly remember a heartwarming novel of his about a young necrophiliac who finally achieved his boyhood ambition by becoming a coroner.”  A slightly tentative, uneasy laugh comes from the audience.  Lehrer then adds, “The rest of you can look it up when you get home”, which is followed by a much louder laugh and some applause.

Now this was recorded back in 1959, and it shows: our collective understanding of and attitudes towards unusual preferences like necrophilia have changed to the point that the joke above just wouldn’t be considered funny by that many people.  I can’t imagine a quip that serves as almost literally nothing more than an acknowledgment of the basic definition of necrophilia being considered worth putting into one’s comedy routine today.  But for the 1959 audience of Tom Lehrer (who was well known at the time for pushing the envelope), kinks like necrophilia were just beginning to enter the collective consciousness, so that a few of them were able to get the joke and laugh at it, albeit a little nervously since the concept was still quite new and scandalizing.  Perhaps an audience ten years later would have laughed more fully and easily at the same joke, which might then have reached its peak freshness.  But today it sounds pretty stale.  People still bring up necrophilia in a humorous context left and right of course, but typically not as a joke whose only real element is the definition (“This guy was a necrophiliac, so he decided to become a coroner!” *BA-DUM tsss*).

Meanwhile, other subjects that I expect were sufficiently provocative and sensationalistic to have been fodder for comedy routines at one time eventually seem to have either been deemed offensive or to have worn out altogether.  For instance, I have the impression that old-fashioned comedy contained many more jokes about married men lusting after their neighbors’ wives, back when this was considered a sufficiently sensational thing to talk about, than modern comedy does, now that our culture has found many fresher and more exciting topics for gratuitous entertainment.  A criticism I sometimes hear from young adults of old-fashioned comedy is that the humor of past generations feels rather cliché, and I suspect this is what causes that impression.  The irony here, in my opinion, is that younger humorists seem to overuse certain topics in their content just as much, and while those topics seem fresher today, that might not be the case thirty years from now.


One positive effect of our collective preference for sensationalism is that it leads to greater awareness of people and issues that once went unrecognized.  The drastically increased visibility of gay people in the media over the last several decades is largely due to activism, of course, but a lot of it has to do with our culture as a whole discovering some “new” thing, making it gradually less taboo, and finding it engaging enough to portray frequently.  This has helped a great deal to increase visibility for the LGBT community in general.  But clearly the portrayal of diverse sexual orientations has continued to evolve and is still evolving.  Certain types of “gay jokes”, especially the ones whose entire substance is “Gay people exist!”, are now rightly considered offensive, and the moral message of “Gay people are people too!” is now being conveyed in more subtle and nuanced ways.  However, we are only just beginning to approach what I’d consider to be a desirable goal: commonly seeing characters who happen to be gay (or any other minority sexual orientation) but who aren’t “gay characters” per se — if we get there, only then might we be able to say that gay culture has reached full mainstream acceptance.

We’ve seen a slightly similar phenomenon with portrayals of mental illness.  Fifty years ago, the main forms of it portrayed in fiction were very severe disorders featured for shock value (as in Hitchcock’s Psycho).  During my lifetime, however, characters with more common mental illnesses have appeared increasingly frequently, which helps bring awareness to a very important issue.

The downside is that there are other issues and groups that also deserve awareness but aren’t fodder for immediate entertainment in quite the same way.  I think asexuality might be the prime example here.  I imagine that many if not most folks are aware of the ace community or at least that some people don’t experience sexual attraction, but you sure wouldn’t know it from the media we consume or even from the assumptions we make in our social interactions.  Asexuals comprise what may be one of the most invisible minority groups in our society, and even though there’s not much explicit anti-asexual prejudice rooted in our traditional culture, there doesn’t seem to be much effort to increase visibility for this group.  The reason behind this is clear: the notion of someone being asexual is interesting in an abstract way, but it isn’t sensationalistic.  Asexuality doesn’t lend itself to interesting situations or plotlines, at least not at first glance.  It’s been widely noted that sitcoms tend to adhere pretty closely to the Everybody Is Single trope where unrealistically few of the characters are in relationships.  This is for the obvious reason that more characters on the dating market means greater potential for engaging stories.  And a character who is single but who just isn’t interested in dating and/or sex seems even less likely than a character in a steady relationship to provide engaging stories of the kind we’re used to consuming.

A similar principle applies to certain kinds of more “mundane” disabilities.  It even applies, say, to high schoolers who (like past-me) aren’t obsessed with drugs, sex, reckless driving, and generally getting into trouble, and yet who (unlike past-me) also aren’t super stereotypically nerdy.  You wouldn’t get the impression that such adolescents exist either from media or from the way most conversations about teenagers go.

This doesn’t mean that it’s impossible to portray someone from one of these groups in a film, TV show, or book in a way that sells.  I believe it can be done with a little creative effort, and that such an effort should be made wherever possible.  And I hope to see our culture adopt an attitude of greater self-awareness of the assumptions that arise from our insatiable attraction to sensationalism, as well as a willingness to push back against it.

My evolving views of (American) politics

(a journey in epistemic helplessness)

[Content note: throwback to my first two posts, published this month last year.  On politics but I don’t know enough politics to make a “political” post per se.  A few issues listed in tags.]

First, a “meta” note.  I’m pleased that I got some substantial ideas down in writing here last year, however imperfectly, but I feel that I went very slightly astray of what I originally envisioned for this blog.  Therefore, I’ve made a resolution to steer my writing in a direction away from posts on mostly-impersonal abstract rationality concepts and towards posts on more concrete and personal issues.

The primary purpose for me in writing essays for this blog has always boiled down to something akin to self-therapy, as I tried to make clear from the start.  I think I succeeded in this at the beginning, but eventually my focus got slightly bogged down elsewhere.  I don’t regret focusing on the ideas I tried to express here last year, since it felt necessary to give myself a framework to explain why I think in the way that I do.  However, I’m beginning to wince at how many of my previous essays read like long-winded cerebral wanderings through subtle abstract questions with so much talk of “rationalism this” and “rationalist community that”.  It was never my intention to sketch out a dingy addendum to Yudkowsky’s Sequences.

It should be understood that my “rationalisty” essays aren’t meant to be persuasive in the sense of arguing that my approach to certain questions is objectively the best one; they’re instead meant to describe the way my mind works.  What I’ve ultimately wanted to do all along is jot down in writing the feelings and perceptions that guide my current approach to getting through my life both socially and on a more epistemic level (much of which, clearly, is tied in with rationalism).  And of course, I should feel free to go through with this kind of jotting-down even if I’m afraid what I have to say comprises ideas that are poorly defined, obviously incomplete, or even very likely invalid.  That way, I can more easily analyze my own beliefs, and with any luck, a few other people with whom these questions resonate can analyze them as well.

Eventually I want to lay out some content that is way more personal.  (I feel like my writing flows more easily and less effortfully when I get a little more personal and less lofty anyway, but we’ll see.)  There are issues that I feel uneasy talking about that I already find myself putting off even though I’ve laid out most of the necessary framework (hopefully stating this intention now will help me to eventually follow through on it).  The evolution won’t be sudden or drastic — for instance, there’s definitely one more essay of the long-winded, cerebral, “rationalisty” type that I want to write here — but as I said, I’m starting to consciously push in the direction of personal stories and rants… And that begins with this entry, a throwback to the essay I wrote a year ago on the evolution of my attitudes through different periods of my life.  (And I’m not going to quit explicitly tying everything into rationalism just yet.)

Before I get started, I have to make it clear that this is a far cry from what anyone could call a “political article”.  This blog could never be a political blog, because, to put it bluntly, I’m somewhat of a political ignoramus relative to the writers who run such blogs, or even compared to many people of similar intelligence to me but no formal expertise in public policy.  I would love to understand more about macroeconomics, environmental policy, our electoral system, world history and events, and many other things, and I believe I have the intellectual capacity to do so (especially in the more mathematical areas among these).  Yet somehow I’ve never managed to muster the necessary focus.  Clearly I’m interested in politics as a whole and in many political issues (as evidenced by frequent references on this blog), but this is apparently an interest versus Interest thing.  I am, on the other hand, quite engaged with gaining an intuition for a lot of the main characters on the political stage and their personalities, as well as for the broad mentalities guiding the supporters of particular policies or entire parties.  This is not ideal, but so far it has been my best guide to understanding concrete issues and feeling sure of which sides of them to fight for.  So my journey is not one of direct understanding but of groping around trying to understand the convictions of others and why they feel as they do.

I. My apolitical beginnings

As most readers have probably figured out by now, I grew up in America.  I also grew up in a fairly liberal household, politically speaking.  I remember my first explanation of the difference between America’s two major parties being, “It’s all very complicated, but the Democrats often try to make poor people richer, while the Republicans often try to make rich people richer.”  As I got older, I asked more questions and learned that the Republicans favored tax cuts for the wealthy and tended to favor fewer restrictions on pollution by big businesses despite what scientific evidence was telling us, while Democrats stood up for things like a woman’s right to choose what she does with her own body and increased funding for education and the arts.

At that fairly young age, my skeptical thinking skills had not yet caught up to my innate “believe sufficiently developed-sounding narratives put in front of me” tendency that I’ve alluded to before.  So it’s no surprise that I didn’t subject my initial Democratic sympathies to much critical thinking.

(Warning: if you’re expecting a gripping saga detailing how I swung from this spot on the political spectrum to authoritarian populism, then the alt-right, finally stopping to rest at anarcho-communism or something like that, then you’re in for a disappointment.  Spoiler alert: I’m still mostly sympathetic to the Left.)

The first time there was a political story I really followed was the election of 2000.  At the time I couldn’t really understand how a system which allowed someone to win the popular vote but lose the election could possibly be justified, and the whole thing seemed like a ridiculous mess.  Then the September 11th attacks happened, which finally triggered a habit of following the news regularly.  I remember feeling some sense of loyalty to our then-fairly-new Republican president in the immediate aftermath, which eventually eroded as he for some reason pushed us into Iraq (I didn’t like war, and it seemed that he rushed us into it without obtaining an appropriate amount of evidence, but of course my views were still being colored by those closest to me who were pretty anti-Bush).

Then in high school I began to examine the political climate in America much more closely and critically.  In last year’s post “My Evolving View of Rationalism”, I expressed a belief that most people first form their personal worldviews in high school.  This includes one’s position on the political spectrum (not positions based on what one is told by parents, etc., but deeply-held beliefs arising from honest questioning).  I remember one defining moment in American History class when I felt this happening to me.  We were watching some video about I-don’t-remember-what, and (I think) a very wealthy CEO was asked whether he considered himself greedy.  He denied it, explaining that he had created thousands of jobs and claiming that he had done more to help the world than Mother Teresa.  I was stunned, not because the man was obviously kind of a jerk (I was expecting that anyway), but because it properly occurred to me for the first time that putting more money towards big businesses might actually help the poor in some way.  Prior to that, I had never made a genuine effort to examine why so many people were in favor of tax cuts for the rich or for big businesses.  I guess I’d been leaning on the assumption that fiscal conservatives were either rich themselves or uneducated (never mind the fact that the most conservative guy I knew growing up was on reduced lunches and had a parent in academia).

It was at around the same time that I began to actually care a lot about religion and why people believed in it, in contrast to my earlier religion-is-silly-and-boring stance, as I’ve described elsewhere.  And I put two and two together and realized that religion was playing a major role in politics, and that in fact the stronger sort of religion that I was especially philosophically opposed to was being embraced by the Republican party.  (Another even more significant defining moment I remember from that history class was arguing with my mildly conservative teacher over same-sex marriage, when it really hit me that religious belief could lead to moral values that I couldn’t relate to at all and that these could be used to decide moral policy.)  So at around the same time I was realizing that there were other sides to the whole fiscal policy debate, my support for social liberalism was beginning to solidify.  But I remained for the time being not especially outspoken overall when it came to politics.

And then, we entered another presidential election season.

II. How America could have done better in 2004

By 2004, I had cemented myself into a certain political mold, as had many of my high school peers.  Mid-to-late adolescence, after all, is a period of radical beliefs for many.  I was surrounded by radical Marxists, radical libertarians, radical Christian conservatives, radical anti-Zionists… so what type of radical was I?  Well, by now my budding rationalist sensibilities had instilled in me a distrust of any political ideology that claimed extreme answers to all problems, so I was determined to stay as far away as possible from the periphery of the space of political positions and maintain an openly critical attitude towards everyone’s positions.  Of course, what I didn’t have the maturity to see then was that I was being at least as blindly ideological as anyone else — in fact I was essentially masquerading as a radical Centrist.  I still knew that I held a number of partisan positions deep down, but bent over backwards trying not to acknowledge them (some of this was out of a healthy concern that I might be biased towards my parents’ beliefs).  And once again, this was paralleled by how I chose to present my religious views.  I identified as agnostic, which I often defended as the most moderate, open-minded view.  But in retrospect, I was a rather militant agnostic — granted, I still am somewhat — and my attempts to dole out equal criticism to theistic religion and to straight-up atheism were pretty silly.

And so I was no fan of George W. Bush, but when John Kerry first emerged as Democratic frontrunner, I was determined to conclude that he was probably almost as bad, despite having heard very little of what he had to say.  Then they debated, and my attitude towards him, and the whole electoral contest for that matter, changed completely.

I should back up for a moment and explain one aspect of philosophy that I was very passionate about at the time.  I had become a great follower of what one might pejoratively call “scientism”.  In other words, I valued the scientific method very highly and regarded a general version of it as the best means of reaching empirical truth.  This was the very cornerstone of my philosophical worldview and my brand of rationalism at the time.  I think what spoke to me particularly emphatically was the idea of keeping one’s mind open to all possibilities and then putting them through very rigorous testing — what Carl Sagan called “a marriage of skepticism and wonder” — which required the ability to recognize and admit one’s own mistakes.  It implied a system of self-correction, which I considered to be a very beautiful concept.

I had made the connection that the American constitution was an embodiment of a similar concept (very revolutionary for its time): a system of laws which evolved through the acceptance of new ideas, the testing of those ideas by running them past the people, and accordant self-correction.  Of course this was only an ideal, and the American government didn’t quite work this way in practice.  But the way I saw it, America was founded upon this principle, the same great principle that governed scientific research, the same concept that separated open-minded rationality from blind dogmatism.  During those years many people were arguing over what it meant to love one’s country in the midst of a war that many of its citizens didn’t support.  I knew where I stood: I loved America regardless of the decisions its politicians made, because its abstract defining ideals formed the very foundation of my creed.  And nothing was more un-American than defending whatever America did on a principle of “my country, right or wrong”.

The final weeks of the 2004 campaign season, and particularly the presidential debates, reshaped my ideas of where each major side of the current political spectrum stood with respect to my most deeply-held epistemic conviction.  On the Democratic side, we had a candidate who spoke in a nuanced way (never mind that I didn’t understand the things he was talking about half the time; what mattered to me was that he sounded oh so nuanced!), but who was routinely criticized for being a “flip-flopper”, which sounded an awful lot to me like a disparaging term for “being able to see two sides of an issue”.  On the Republican side, we had a candidate who seemed to gain appeal by stating everything in as simplistic a way as possible, whose definition of “strong leader” revolved around not questioning the course we were on, and whose overriding concern in the face of criticism was apparently “not sending mixed messages to our troops”.  As someone who wasn’t exactly terribly knowledgeable about many of the object-level issues being discussed, it seemed to me like the debates were really a contest between a philosophy of questioning for the purpose of self-correction and a philosophy of maintaining strong convictions for the sake of having strong convictions.

There was a particular moment in the second debate which encapsulated this for me, in which Senator Kerry was explaining why he voted against some pro-life-based laws not because he disagreed with the general stances motivating them but because they lacked certain provisions which he thought were necessary.  He ended by saying, “It’s never quite as simple as the president wants you to believe.”  President Bush’s response says it all:

It’s pretty simple when they say, “Are you for a ban on partial birth abortion?  Yes or no?”  And he was given a chance to vote.  And he voted no.  And that’s just the way it is, that’s the vote.  It came right up, it’s clear for everybody to see.  And as I said, you can run but you can’t hide.  It’s the reality.


This is why any account of my personal journey towards today’s flavor of online rationalism is incomplete without discussing how I was shaped by the 2004 election.

When the results of the contest came in, I was bitterly disappointed along with many others.  But I felt like one of the only ones who was disappointed not only because Bush won, but because Kerry, who had felt to me like a voice of genuine reason, lost.  And after that, I guess I sort of made peace with the fact that I felt unable to hold terribly strong or specific convictions on many political issues that weren’t social.  I had a firm feeling about what mattered the most: I was in favor of politicians who operated on open-mindedness, skepticism, and above all, humility and the ability to self-correct.  And the Democratic party seemed to take stances that better encapsulated that attitude and to house more politicians who had that quality.

For the record, I’ve since grown less naïve about Kerry: while I still believe that he was generally sincere and held consistent beliefs, it’s clear to me that he was shrewd about pandering to different groups of people.  However, I hold that Bush, his administration, and the election of 2004 marked the pinnacle of blatant anti-intellectualism in the US during my lifetime.  (Obviously we’ve just started down a new path and I’m not sure what I’ll be calling this trend in another 12 years, but as Trumpism doesn’t seem to have much of a direct relationship to intellectualism, or intelligence, or any form of coherent thought for that matter, it’s hard for me to brand it as “anti-intellectualism”.)

III. A collection of my (non-)convictions

I guess the update I’ll start with is to say that I no longer see the Left or the Democratic party as a paragon of rationalistic ideology in today’s American political scene.  In fact, I’m constantly frustrated by the extent to which left-wing rhetoric seems to be based on unreasoned emotions and aversion to self-correction.  To fully explain this point of view would require another, much longer post, but if you’re reading this, then there’s a good chance you’re not far away (in some measure of internet-distance) from blogs which delve into the flaws of today’s liberal discourse all the time.

I still feel woefully un-savvy about political goings-on and all sides of complex issues, but I do follow a particular set of heuristics which lead me to certain (still fairly left-wing) political leanings.  Below is my attempt to summarize a few of them.

First off, I knew I eventually had to link to my post on free will / determinism, with my contention that leaning towards free-will explanations versus deterministic ones corresponds in a rough way to conservative versus liberal attitudes.  I suppose it’s important to mention here that my instinct from the moment I was first exposed to the free will debate was towards determinism; this feels related to my tendencies both towards “scientism” and towards empathy.  I soon realized that the sort of determinism I favored was compatibilism, which doesn’t really contradict anybody’s concrete everyday intuition about either free will or determinism.  And yet, in concrete, everyday situations, I do feel like I lean more towards deterministic interpretations of behavior than the average person does.  This has led me to the left-wing view on many things.

Meanwhile, I have also always been somewhat of a utilitarian by instinct and have trouble interpreting ethical dilemmas using any other language.  Therefore, I take issue on a fundamental philosophical level with axiomatic-looking notions like “fairness”, “desert”, and “natural rights”, even while they are useful terms on a practical level.

I therefore strongly believe that punishment should only be used for the purpose of deterrence, not retribution.  When I was younger, I favored the death penalty for reasons of practicality; since then I’ve turned against it mainly because it seems barbaric, in practice not as humane as it should be in theory, prone to error, and rooted in a desire for retribution.  I am in principle willing for certain drugs to be “illegal” in some sense of the term because it’s easy to demonstrate that they do great harm, but I’m completely opposed to harsh prison sentences for drug offenders as this seems absolutely counterproductive to minimizing harm.  I’ve grown quite cynical about the prison system in general and would much prefer some form of mandatory rehabilitation for certain types of “crimes”.

Foreign affairs is my area of greatest ignorance (I’m truly an instance of the American stereotype of knowing a lot about my own country but little about what’s going on in the rest of the world — even recently moving abroad has not improved this much), but I have some heuristic convictions nonetheless.  I believe that the US should strive to do as much good as possible for the world (and “the world” includes America), but that we are far better able to judge and manage and micromanage what goes on within our own borders than what happens in societies far away with very foreign cultures and political situations.  It follows that interfering in conflicts taking place within other countries holds the risk of creating an even bigger mess and a possible permanent occupation, and should be approached with great caution even when there are potential major benefits to global well-being.  Probably the best type of scenario for the US to get involved in is one where there is some united oppressed group far away without the necessary resources to overthrow their oppressors.  I’m not on principle against the US throwing its considerable strength towards solving what we conscientiously consider to be great atrocities abroad.  But I don’t like the idea of America acting as the world’s police force simply because of our great military power, for the same reason that I dislike unfettered monarchy or dictatorship (what happens when the well-intentioned party with overwhelming power is wrong?).

I’m inclined to oppose any ruthless and inhumane actions partaken in the context of war or for reasons of “keeping America safe”, even though dispassionate utilitarianism does compel me to concede in theory that despicable actions towards a few which seem guaranteed to prevent the deaths of many may be justified.  Conveniently, however, harsh measures such as torture have apparently been shown to not be particularly effective.  Moreover, it is of extreme importance to consider how the rest of the world may react to ruthless practices on the part of the American military and how this may serve to further escalate conflict rather than make the world safer.  (In general, emphasis on Theory of Mind and considering how one’s actions will affect other parties’ perceptions is a big part of what guides me both in political attitudes and elsewhere.)

I still hold the process and institution of science in highest regard when it comes to determining empirical facts, and therefore assume by default the truth of what the scientific community says regarding issues like evolution and climate change (although I’ve become a little cynical about the social sciences as of late).

I continue to vehemently reject social attitudes based on conservative religious convictions such as opposition to same-sex marriage, stem-cell research, or euthanasia.  However, one “meta” level up, I don’t have a problem with the fact that some politicians are trying to legislate based on their religious convictions: everyone ought to base their stances on personal moral convictions, and these are based on religious belief for many individuals.  As long as politicians aren’t trying to justify their religiously motivated proposals with claims like “America is a Christian nation”, I don’t consider their proposals to violate the First Amendment or “separation of Church and State”.

In the arena of fiscal policy, I’m still looking to maximize well-being for the greatest number of people.  It’s clear to me that this doesn’t scale linearly with wealth, and so at least on naïve principle I’m in favor of creaming a bit off the top of the highest incomes to give to the poor or to programs which benefit the poor.  However, in the actual world it’s very plausible to me that policies which aim to bring this about may weaken the economy so that everyone is worse off.  My lack of expertise in macroeconomics is hurting me here: I’m not sure to what extent pumping money into the working and lower-middle class (who are likely to spend it all) would benefit the economy versus to what extent this is accomplished through benefits for big businesses.  My inclination for the time being is to make sure that all full-time workers make enough to live on practically (exactly how much is a nontrivial question, of course), although the alternative idea of a universal basic income interests me very much.  While I can see the attraction to libertarianism as an abstract theory and could even see myself taking libertarian stances on many issues, I utterly reject two of the arguments I most often hear for it: “poor people would become richer if they just worked harder” and its neighboring attitudes (see my deterministic inclinations above); and “Taxation is theft!” and similar statements which seem to assume some primal notion of ownership rather than regarding it as an abstract phenomenon contingent on an existing State.

There are many more hotly-debated areas of policy on which I have at least some tentative opinion, but these were the main ones I thought to put down in writing at this moment.  Some of them could of course change tomorrow.

Oh, and yes, our mechanisms for self-correction are still of utmost importance in my eyes.  This of course is encoded in our First Amendment protecting free speech, and although I believe that both the Left and the Right have invoked it inappropriately at times, I take very seriously any genuine offense to the spirit of it.  Let’s move towards a norm of listening to each other and compromise or when necessary going with majority opinion in order to work together in an effort to make progress with our policies… but always with the open-minded awareness that we could be wrong.

Agency does not imply moral responsibility [the brief version]

[Content note: uncharacteristically short and sweet.]

The object of this very short essay is to concisely state a proposition and brief argument which I refer to frequently but was lacking a suitable post to link to.  This is one of the central points of my longest essay, “Multivariate Utilitarianism“, but it’s buried most of the way down, and it seems less than ideal to link to “Multivariate Utilitarianism” each time I want to make an off-hand allusion to the idea.

Here is how I would briefly summarize it, using the template of a mathematical paper (even though the content won’t be at all rigorous, I’m afraid).

Proposition. The fact that an agent X acts in a way that results in some event A which increases/decreases utility does not imply that X bears the moral responsibility attached to this change in utility.  In other words, agency does not imply moral responsibility.

Proof (sketch). One way to see that agency cannot imply moral responsibility in a situation where multiple agents are involved is through the following simple argument by contradiction.  Suppose there are at least two agents X and Y whose actions bring about some event that creates some change in utility.  If X had acted otherwise, then this change in utility wouldn’t have happened, so if we assume that agency implies moral responsibility, then X bears responsibility (credit or blame) proportional to the change in utility.  By symmetry, we see that Y also bears the same responsibility.  But both cannot be fully responsible for the same change in utility — or at least, that seems absurd.
One naïve approach to remedy this would be to divide the moral responsibility equally between all agents involved.  However, working with actual examples shows that this quickly breaks down into another absurd situation, mainly because the roles of all parties creating an event are not all equally significant.  We are forced to conclude that there is no canonical algorithm for assigning moral responsibility to each agent, which in particular implies the statement of the proposition.

Remark. (a) The above argument seems quite obvious (at least when stated in more everyday language) but is often obscured by the fact that in situations with multiple agents, usually only one agent is being discussed at a particular time.  That is, people say “If X had acted differently, A wouldn’t have happened; therefore, X bears moral responsibility for A” without ever mentioning Y.
(b) A lot of “is versus ought” type questions boil down to special cases of this concept.  To state “circumstances are this way, so one should do A” is not to state “circumstances should be this way, so one should have to do A”.

Example.  Here I quote a scenario I laid out in my longer post:

[There are] two drivers, Mr. X and Ms. W, who each choose to drive at a certain speed at a particular moment (let’s call Mr. X’s speed x and Ms. W’s speed w), such that if either one of them goes just a bit faster right now, then there will be a collision which will do a lot of damage resulting in a decrease in utility (let’s again call this y).  At least naïvely, from the point of view of Mr. X, it doesn’t make sense in the heat of the moment to compute the optimal change in w as well as the optimal change in x, since he has no direct control over w.  He can only determine how to best adjust x, his own speed (the answer, by the way, is perhaps to decrease it or at least definitely not to increase it!), and apart from that all he can do is hope that Ms. W likewise acts responsibly with her speed w… If y represents utility, then our agent Mr. X should increase x if and only if ∂y/∂x is positive.  After all, he has no idea what Ms. W might do with w and can’t really do anything about it, so he should proceed with his calculations as though w is staying at its current value.

That’s what each agent should do.  I’ve said nothing about how much either of them is deserving of praise or blame in the outcome of their actions.

The proposition states that in fact without knowing further details about exactly what the two drivers did, we have no information on how blameworthy Mr. X is for the accident.
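The decision rule in the quoted passage (Mr. X should increase x if and only if ∂y/∂x is positive) can be sketched numerically.  To be clear, the utility function, the speeds, and every number below are hypothetical inventions for the sake of illustration; none of them come from the original post:

```python
import math

def utility(x, w):
    """Toy utility of the two-driver situation: both drivers gain a little
    from speed, but collision risk (which spikes sharply once the combined
    speed passes about 125) costs vastly more.  Entirely made up."""
    collision_risk = 1.0 / (1.0 + math.exp(-(x + w - 125.0)))  # smooth 0..1
    return (x + w) - 1000.0 * collision_risk

def partial_wrt_x(f, x, w, h=1e-6):
    """Central-difference estimate of the partial derivative dy/dx,
    holding the other driver's speed w fixed (Mr. X can't control w)."""
    return (f(x + h, w) - f(x - h, w)) / (2.0 * h)

# Mr. X at 60, Ms. W at 65: any further speed sharply raises collision risk.
x, w = 60.0, 65.0
dy_dx = partial_wrt_x(utility, x, w)
print(dy_dx)  # strongly negative, so Mr. X should not increase x
```

With these made-up numbers the partial derivative comes out strongly negative (about -249), so the rule tells Mr. X not to speed up, whatever Ms. W may do with w.  The computation says nothing about who deserves blame if a crash occurs, which is exactly the proposition’s point.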

To state it (or perhaps overstate it) bluntly, I cite this “agency ⇏ responsibility” proposition in an attempt to remedy what I believe is a ubiquitous fallacy at the bottom of many if not most misunderstandings.  I wish everyone in the Hawks and Handsaws audience a Happy New Year and look forward to writing more here in 2017!


Confronting unavoidable gadflies

[Content note: An elaboration of something I’ve tried to describe before.  I didn’t even try to avoid serious political issues this time.  Welfare, death penalty, generational conflict, religion.]

This is a follow-up to “Speculations of my inner gadfly“.

In my earlier gadfly-related post, I tried to describe an idea that had been buzzing around in my head for some time (pun intended?  I’m not sure) which helps to describe how I view certain types of disagreements and bad arguments.  I think it turned out to be one of my better-written entries for this blog and by some measures seems to have been the most popular.  And yet, when I look back on it, I feel like I was mostly pointing out something already obvious to everyone (despite my repeated hedging of “I don’t mean only to point out the obvious here…”) and didn’t manage to really capture the essence of the common role of “gadfly speculations” as I see it.  This post will be in large part an attempt to clarify my ideas by taking the whole “gadfly” concept in a slightly different direction.  (By the way, most of the terminology and metaphors I’ve come up with so far for expressing my thoughts on this blog make me wince, but I think I actually like the general gadfly metaphor, so I’m going to run with it as long as it doesn’t wear out.)

I. The inevitable truth of grand-scale speculations

Before really getting into the meat-and-potatoes of this post, I need to clarify one important point.  In the other gadfly-related essay, I described inconvenient, perhaps ridiculous-sounding possibilities which may or may not turn out to be correct (and very often aren’t) but stressed that we have to face them anyway rather than brush them aside.  I pointed out that you can always evaluate their likelihood later, but it’s important to at least let them enter your conscious consideration first.  While this certainly wasn’t an invalid point for me to make, I’m afraid it may have been misleading in terms of conveying the way I usually think of “gadfly speculations”.

The fact is that most social controversies that we find ourselves considering involve large numbers of humans and their motivations, the effects that a certain course of action may have on them, and so on.  In these situations, practically every possibility that realistically occurs to us regarding the way some humans might act is correct, but perhaps only for a small minority of the humans involved.  In fact, as soon as such a speculation occurs to us, unless it’s completely bonkers at the level of lizardmen conspiracy theories, it must be true at least occasionally or at least for a few people.  Indeed, it would seem very strange if it were never true.

For a real-world example, take the constant debate over government-provided welfare.  Fiscal conservatives tend to argue, or at least insinuate, that a number of citizens on welfare are using these government programs to game the system in some way.  And regardless of our political affiliations, when we stop to objectively consider this, we have to agree that in a certain literal sense this is correct.  The key phrase in the proposition mentioned above is “a number of”.  It’s not clear exactly how many people are gaming the welfare system.  Maybe they are so few as to be irrelevant when the benefits of having a social safety net are taken into account.  But if we have a country where millions of citizens are on welfare, and the welfare system is pretty complicated, then it stands to reason (or at least common sense) that there is a feasible way to abuse it and that some of those citizens are in fact abusing it.  It would really be astounding if nobody were abusing it.

Similarly, if we all assume for the sake of argument that certain sufficiently heinous criminals “deserve” the death penalty (I put “deserve” in quotes because I don’t really know what that means, but that’s a topic for another post), then we all have to admit, regardless of our stances on the death penalty, that the proposition “Some defendants will be wrongly convicted” is correct.  The key word is “some”.  This is a weaker example than the last one, since far fewer humans have been sentenced to death in modern history than are on welfare, but I still suspect that the forensic science involved is so complex and still imperfect enough even today that there must be wrong convictions at least occasionally.  I would be astonished to find out that there have been zero wrong convictions in the last several decades.

Now I realize that there are far more outlandish suggestions out there regarding every controversy that affects so many people’s lives, and maybe it’s plausible that some of the most extreme ones don’t hold for any of the humans involved.  For instance, I seriously doubt that a single one of the millions of individuals on welfare is secretly trying to aid a band of extraterrestrials bent on taking over the earth through weapons which can be powered only by government-signed welfare checks.  However, most speculations this far out in left field aren’t pervasive in the common discourse and generally don’t enter our minds (even subconsciously) in the first place.

So these uncomfortable thoughts that gadflies persistently whisper to us generally don’t have a chance of being completely false.  In fact, as soon as we hear them, we are obliged to admit that it would be quite shocking for them to be entirely false.  Evaluating them becomes a question of to what degree and on how great a scale they are true.

I reiterate what I said in the other post: we tend to dismiss these inconvenient ideas out of hand because acknowledging them means more work for us in our assessment of any situation, and our brains are lazy.  If we acknowledge that at least a few folks will abuse the welfare system, then that obligates us to go through a tricky cost-benefit analysis when arguing in favor of it, which is considerably more difficult than emphasizing more and more stridently that welfare provides necessary aid to many citizens.  And yet, if we at least attempt to argue that abuse of the welfare system is sufficiently rare, then that obligates our opponents to rebut that with an attempt to show that such abuse is unacceptably frequent (rather than argue against welfare simply by complaining that it can be abused), and a potentially productive discussion ensues.

There is an analog of this notion in the context of small-scale conflicts — say, drama between two individuals — as well: many of the possibilities that try to latch themselves to our minds are almost certainly true on some level.  For instance, if it occurs to you that the reason your friend didn’t show up to your party has something to do with an unintentionally rude remark you made to her the week before, then that is probably playing some role (however small) in her behavior, even if the primary reason for her absence turns out to be an unusually high level of work-related stress.  But this doesn’t apply in nearly as absolute a way as it does for issues involving more people.  And for the purposes of this post, it’s mostly large-scale debates that I’m interested in.

II. The inevitable use of grand-scale debate tactics

Now let’s kick it up a level: in debates which involve a large number of humans, pretty much any speculation about how the opposing side will argue must be correct.

A. The Boomer-Millennial Conflict for Dummies

Here’s a good exercise for considering how a given position might be argued: pretend that you’re an alien with no knowledge whatsoever about human history or problems but who wants to argue a particular side of a human controversy of which you know only the basic definitions of the relevant terms, with the minimum possible extra research.

Take, for instance, the constant rhetorical warfare between the baby boomer and millennial generations.  Suppose you were an alien knowing nothing about American culture, generational subcultures, or any of the dynamics involved.  You only know the definition of “baby boomer”: it’s a human born during the “baby boom” from the mid-40’s to the mid-60’s, which is so called because of a marked increase in the birth rate.  How would you go about attacking baby boomers?  Well, let’s see, the first thing that comes to mind is that because by definition there are a lot of them, they are to blame for what in some people’s minds might be a dangerously high population.  But you can’t go far with this criticism, because nobody can be reasonably held to blame for having been born.  So what occurs to you next?  Well, again, tautologically there are a lot of baby boomers; they make up a disproportionately large portion of human population.  So if there’s any fault that baby boomers are likely to be prone to, it might be… that they have an over-inflated sense of self-importance, or they behave as though everything is about them, or something.

And sure enough, it’s not hard to find articles like this one, or books like this (see Chapter 7).  I also distinctly remember the preachy right-leaning political comic strip Mallard Fillmore characterizing baby boomers this way (clumsily paraphrasing from memory: “This just in: baby boomers have finally realized that society doesn’t revolve around them!  Unfortunately, they now think it revolves around the federal government.”), but after half an hour of searching for old Mallard Fillmore strips with roughly those words, I can’t find it.  And yes, if I google “baby boomers”, the first attack articles I find are ones which accuse baby boomers of ruining the economy for millennials, since a lack of jobs for young people is the biggest specific issue at play in the inter-generational war right now.  But one has to admit that the hypothetical alien who knew nothing about our current economic woes did a pretty good job at coming up with an anti-baby-boomer talking point which is actually used substantially in the real world, given a bare minimum of knowledge regarding the baby boomer generation.  The “think everything revolves around them” allegation isn’t the primary criticism nowadays, but it is still relevant in the discourse.  That talking point may not usually be backed up by explicitly claiming that the source of their perceived self-importance is that there are disproportionately many of them.  But the fact that baby boomers comprise a prominent demographic certainly strengthens the credibility of the “think everything revolves around them” criticism.

So if one who is looking to defend baby boomers goes through the above exercise, the result is a gadfly speculation on opposing debate tactics rather than the facts of the generation-war issue itself: “But the opposition might try to frame things in terms of baby boomers thinking everything’s about them!”  And this turns out to be true, to some extent.  For any controversial issue about which many people are arguing in public from all different sides — or even when only two people are debating, but both are passionate and knowledgeable about many aspects of it — any hypothetical talking point that comes to mind in this way will play at least a minor role.

I like the baby boomer example because one can already come up with a possible criticism by considering only the definition of “baby boomer”.  Usually it requires knowing more than basic definitions, but only a little more.  For instance, if you want instead to attack millennials, and imagine yourself as an alien searching for a good anti-millennial talking point based on a minimal amount of research, you only have to learn about one of the main issues involving millennials today: they complain about a dearth of jobs and general broke-ness.  Now forget the specifics of what they’re complaining about, and ask yourself, what’s the easiest route to discrediting someone who complains?  By claiming that they feel entitled, of course (see below).  Or how does one go about lampooning someone who has trouble finding a job or just generally falls into some kind of bad fortune?  By portraying them as lazy, or irresponsible, or lacking in judgment or initiative, etc.

B. General examples

Here are some broad examples of opposing rhetorical tactics which are bound to show up, each of which applies to a variety of real-life debates.

  • “This media outlet / group has a pro-X bias!” vs. “Reality has a pro-X bias!”: I’m starting with this one because I think it might be the most pervasive of all of my examples.  If one party complains that the media or a particular outlet of it is biased in some way, then regardless of specifics, the most obvious strategy for rebuttal is to claim that its portrayal of the situation reflects how things really are.  This is particularly visible in conservative criticisms of the media (or particular news outlets) as having liberal bias, which instigates the response that “reality has a liberal bias”.  It is also a prominent feature of the evolution vs. creation debate, as well as other disputes between skeptics and defenders of academic consensus.  When one party makes an accusation of bias, their opposition is pretty much guaranteed to counter that the source isn’t biased but right.  The flip side of this is, of course “This high-profile source says X is true!” vs. “That source must be biased then!”
  • “We have a legitimate grievance!” vs. “You’re just a bunch of whiners!”: This is the hallmark of debates that hinge on reverting to deterministic or free-choice explanations for a current unfortunate situation.  Closely related is the inevitable attack of “your bad fortune is your own fault” aimed at the aggrieved.  There are too many real-world controversies involving this for me to name here, and in fact I’ve tried to argue before that this is a component of all Left-vs.-Right political issues in America.  Nowadays the concept of “privilege” and related terminology usually shows up throughout these disputes.
  • “We got here by hard work!” vs. “You got there by unfair advantage!”: The flip side of the above rhetorical template.  Also frequently seen in disputes over privilege and free choice vs. determinism.
  • “We deserve better!” vs. “You’re just entitled!”: Also closely related to the grievance/whiners exchange.  If one isn’t up for countering that the other party’s bad fortune is manufactured because they’re looking to complain or just their own fault anyway, then one can take this route.  Whatever “entitled” even means.
  • “Our lived experiences have made us wiser!” vs. “Your lived experiences have made you paranoid / naïve!”: I’ve seen this show up in a lot of more personal conflicts — by claiming experience as evidence of wisdom, one opens oneself up to suggestions that experience can distort one’s perceptions to one’s disadvantage as well.
  • “Person/group X sounds overconfident / refuses to admit mistakes!” vs. “Person/group X is just really smart / hasn’t made a mistake!”: This is a variant of the example above.  I remember it being a major theme of the discourse last decade during the Bush administration.  A further variant is “Person/group X is closed-minded!” vs. “Person/group X just won’t put up with nonsense!”  These stances are often taken by the “teach the controversy” anti-evolutionists versus the “creationism isn’t science” defenders of Darwin’s theory… although interestingly the roles were pretty much reversed back at the time of the Scopes Trial.
  • “You’re afraid to debate!” vs. “We won’t descend to your level by engaging with you!”: Closely related to the above.  Another major component of the creation/evolution conflict (yes, creation/evolution provides many good examples).  Epitomized by Richard Dawkins’ refusal to debate the “ignorant fool” Ray Comfort.  However, I’ve seen it show up in the context of many other topics where one side sees itself as far more educated than the other.

C. Debating debate tactics: the “motte-and-bailey” debacle

Some of the common recurring themes mentioned above come close to describing not only potentially fallacious tactics used to debate an issue but even debates over potentially fallacious debating tactics.  It seems not uncommon in discussions between rationalists for one party to accuse the other of committing a particular fallacy — say, confirmation bias, or assuming a strawman — only for the other to point out that sometimes what looks like confirmation bias or a strawman happens to reflect the truth anyway.  To show that I don’t always fail at finding cartoons posted online that I remember reading once, here is a relevant Calvin and Hobbes panel (apologies to Bill Watterson).

[Calvin and Hobbes panel]

If someone argues using language that sounds overly-broad, it’s almost certain that their opposition will accuse them of the fallacy of black-and-white thinking.  But in some way or another, the first party will very likely retort, like Calvin in the panel above, that sometimes that’s just the way things are.  (By the way, Watterson has stated that this cartoon was inspired by his own struggles in a legal dispute in which he was accused of black-and-white thinking.)

To give a more interesting example of something that caused some disagreements within the rationalist community, in one of his more popular posts, Scott Alexander characterized certain types of rhetoric as relying on a fallacy that he calls “motte-and-bailey”, which refers to equivocation between one very convenient sense of a term (assumed most of the time) and a different but much more defensible sense of that term (adopted whenever challenged).  The “motte-and-bailey” terminology was actually coined in an academic paper written years earlier, but Alexander’s article popularized it within the online rationalist movement.

Some months later, his fellow rationalist essayist Ozy banned the use of this concept on their blog Thing of Things, later writing this to further elucidate the potential pitfalls of using “motte-and-bailey”.  Evidently the term was being abused a lot in Thing of Things comments sections.  But here’s the conundrum: any new concept can be abused in some way.  When introducing a new concept, even the concept of a certain logical fallacy to an audience comprised of rationalists, one should always be able to imagine the ways it will be abused and recognize that given a large enough audience, it will be abused in that way.  In the case of “motte-and-bailey”, it is a good exercise to ask ourselves what might be the most convenient way to use it to attack any position one doesn’t like.  Well, the substance of the concept is that a “motte” is a defensible definition of a term which can be quickly adopted when one’s ideas are challenged (“God is the feeling of purpose we perceive in the universe”), while a “bailey” is a convenient definition tacitly assumed otherwise (“God is the petty, vengeful main character of the Old Testament”).  The point is to criticize one’s opponent for defending their ideas by using a defensible (“motte”) definition which they don’t assume the rest of the time.  So it seems all too tempting to… criticize one’s opponent for using a defensible definition even when they do consistently assume it all the time.  (Maybe you’re arguing against a very liberal theist who really does believe only in the “vague purpose” kind of God, and Old Testament fundamentalism is a strawman of their belief system.)  So in other words, exactly the abuse that Ozy described having seen.

If you introduce a new rhetorical concept to a bunch of rationalists, there’s a pretty good chance of somebody invoking it unfairly to attack arguments they don’t like; then there’s also a pretty good chance that someone else will anticipate the possibility of this abuse and unfairly invoke that to attack arguments they don’t like; and the recursion goes on ad infinitum.  Maybe “motte-and-bailey” also happens to be easily abusable to begin with.

But all that doesn’t mean that useful concepts like “motte-and-bailey” shouldn’t be popularized in the first place.  And I guess that brings me to my usual “proposed solution” section of this essay.

IV. How to oppose opposing gadflies

I’ve tried first to make the point that when participating in discourse on certain types of broad issues (particularly social), almost any statement inconvenient for our position that might occur to us is probably true to some degree and moreover will occur to at least some people on other sides who will use it against us.  This makes my view of success at discourse, or even being sure what one believes in the first place, sound pessimistic.  And it is, somewhat.  Becoming reasonably sure of something and being able to actually convince others of it in an intellectually honest way is (at least for me) very, very hard.  But there are still ways of dealing with those gadflies that almost surely oppose us.

First of all, there’s one of the oldest debating guidelines in the book: anticipate opposing arguments.  I spent a lot of time illustrating certain very general types of claims that are sure to be encountered (“your grievance is your own fault”, “so-and-so sounds confident because they in fact are always right”) because, despite the fact that they sound completely obvious when written down in this context, many people in the heat of argument often don’t see them coming because they’re not thinking enough from their opponent’s point of view.  So anticipate them.

The second, and probably more difficult, tactic is to realize that these inevitable counterclaims are probably at least a little bit true and to readily acknowledge this.  That’s not to say that constantly bending over backwards to agree that every criticism and accusation is kinda-sorta valid is an effective way to win anyone over to one’s position (I err in this direction a lot, so I would know).  But flatly denying the offensive thing one’s opponent was bound to suggest is almost certain to make things worse.

So the best strategy is probably to admit that our opponent’s suggestion is probably correct for a few people, or just a little bit, and claim (and then make an honest effort to back up the claim) that our position is right anyway.  “Yeah, any welfare system opens itself to the possibility of abuse by a few people, and that’s awful.  But it’s far more important for honest people in need to be able to have a safety net of this kind, because X, Y, and Z.”  Or, “yeah, that group sometimes whines a little more than justified, but they have a legitimate complaint even so because Y and Z.”  Or even, “Yeah, I know that I can moan and be a little melodramatic at times, but that doesn’t mean that my feelings are invalid in this case, because X.”

This is particularly worthwhile, but particularly tough, when one is confronted (or anticipates being confronted) with a personal attack.  There’s a common reaction, which I’ve observed in people close to me, of “On top of being completely wrong about [issue on the table], he has the nerve to keep bringing up such-and-such personal flaw of mine.  He’s lost all credibility with me about [issue], so the personal attack is obvious nonsense.”  (Here the personal fault in question is often something that many have criticized the speaker about and which maybe even the speaker has acknowledged in calmer moments.)  In my opinion, this is almost always the wrong way to look at the situation.  If I’m arguing with someone in my life about Big Important Issue on which I believe they’re totally mistaken and out of line, and they keep shoving in my face some criticism of me that others have made in some way or another, and which I’ve previously acknowledged is somewhat true, then… I try to recognize that they’re probably right in their criticism.  They wouldn’t be using the criticism as a weapon to argue their side of the Big Important Issue if it weren’t somehow readily available to them, and it wouldn’t be so available to them if it weren’t somewhat true.  So my response should be to acknowledge immediately that “yeah, I sometimes can be that way” but argue that my faults still don’t imply their side of the Issue, or (in some cases) that they’re completely irrelevant and being used easily but unjustly as a weapon against me.  Of course I still fail at this from time to time, but my successes have gradually made admitting my own faults in this way much easier.

The thing is that no matter how small a gadfly is staring us down, our adversary can still hide behind it as long as we dismiss it, even while it tells just a tiny bit of truth.  Engaging with the gadfly actually exposes our adversary and leads to a more productive outcome for everyone involved.  And that is a bit more of my take on why it’s important to welcome gadflies into our minds.

A Principle of Empathy

[Content note: Donald Trump and the election (not the main focus).  Enough said.]

The Principle of Charity is an idea that seems to be touted fairly regularly by members of the rationalist community. Scott Alexander is especially well known as an advocate of it and even devoted the first post on his now very popular blog Slate Star Codex to declaring the Principle of Charity as the ethos of the new blog.  It more or less says that in examining another person’s viewpoint, one should strive for the strongest, most reasonable possible interpretation of their argument, in particular not assuming that they’re being stupid or completely irrational.  I’ve seen related terms used a little more loosely (“I don’t think you’re interpreting her words very charitably”) so as not to apply strictly to intellectual debating scenarios.  The general idea is closely related to the practice of steelmanning.

When I first discovered the internet rationalist community and looked up what the Principle of Charity was, I took it as further confirmation that I had found “my people”.  I recognized it as not only an argumentative tactic I fervently believed in, but as somehow a core part of who I was and a personal characteristic that guided me in my interactions with people.  Today I want to explore a little more closely how the principle speaks to me so strongly, as well as how I might revise it to something which reflects my temperament even better.  In doing so, I may in fact be treating a rather broad strawman of the Principle of Charity rather than the bare essence of the thing itself, but I feel somewhat justified in doing this as our principles often become a little broad and strawman-like when we actually put them into practice.

I. Understanding my charitable instincts

And you overlook Dumbledore’s greatest weakness: he has to believe the best of people.

– Severus Snape, in Harry Potter and the Half-Blood Prince, by J. K. Rowling

Those who know me in real life (which presumably isn’t anyone who is reading this, although who knows) find me a bit frustrating from time to time because of my way of argumentatively defending others who have committed offenses.  I say things like “they were probably just trying to Y” or “I’m sure they didn’t mean anything as bad as Z” or “I agree that doing X was wrong, but it’s really difficult for them because of V and W”.  I get told on a regular basis that I have a strong tendency, to a fault, towards “giving everyone the benefit of the doubt” or “seeing / assuming the best in everyone”.  This is perceived as extreme enough to qualify as a fault because it leads to me being easily manipulated / pushed around… as well as for the oftentimes more immediate and obvious reason that it causes me to argue with my friends on behalf of third parties who have committed offenses and clearly don’t deserve to be defended.

I’m not sure exactly what to say about this component of my personality, except that by and large I haven’t tried to change it because I continue to believe that generous assessments of other people’s behavior have been proven correct on average throughout my life’s experience interacting with humans.  (To be fair, maybe this belief depends entirely on further assessments of other people’s behavior which continue to be too generous.)  Sometimes I overestimate the good intentions behind people’s actions, and sometimes I am too credulous of narratives being related to me, and that has led me into some toxic situations.  I really don’t know exactly how best to calibrate my good-intent-ometer in such a way that I avoid being taken advantage of while continuing to model reasonably correct views of the world.  To explore that in writing would require a whole other blog entry falling into more of a “self-therapy” category.

But clearly, the fact that I tend to assume the best of people, and that I believe that such assumptions on average turn out to be accurate while holding that villainizing others tends to be destructive both for good debate and personal conflict-resolution, has led me to find the Principle of Charity a pretty attractive idea.

However, when listening to feedback given to me over the course of my life on this personal feature of mine, perhaps what strikes me the most is the nature of those which take the form of compliments.  People tell me that I’m “nice”.  Part of this I’m sure alludes to my tendency towards politeness to other people’s faces and even behind their backs, but a lot of it seems to come from an impression that I “see the best in everyone”, which sounds roughly equivalent to “believing everyone is good” or “holding unusually high opinions of everyone”.

I’m really intrigued by this because I think it’s a fundamentally mistaken impression of the way I am.  I don’t hold the other human beings in my life in particularly high esteem.  I like a lot of those around me a lot of the time, and yet there are some days and even whole weeks when I feel incessantly irritated with everyone and with humankind in general.  (Granted, I keep most of these thoughts to myself, as I’m very confrontation-averse and go out of my way to avoid any kind of drama.  Maybe that qualifies as “niceness” or maybe it’s just cowardice; you tell me.)  As far as I know, these occasional misanthropic moods are nothing abnormal, and I wouldn’t say that I hold the other human beings in my life in particularly low esteem either.  Taking the mean over my opinions of everyone I interact with, I estimate that the height of my opinion is not much greater or less than that of most anybody else.  What’s different is the variance: where most people perhaps think very well of some and very badly of others, my opinions of almost everyone fall somewhere in the middle.  I don’t mean that I go around saying, “Meh, I feel the same so-so feeling towards everyone”; I feel very fond of a lot of people close to me but in my more pensive moments view them as creatures shaped by genetics and an environment that happen to have put them in a position of positive impact on my life.  I tend to concoct excuses and/or unpleasant circumstances for the bad things that unsavory people do, but I also tend to concoct selfish motives and/or fortunate circumstances behind the good things that highly respectable people do.

Why do I process personal events this way?  Maybe I just have a strong tendency towards deterministic explanations for everything.  Maybe my reason for leaning towards deterministic explanations is that I badly want to understand what makes other people tick, and assuming libertarian free will amounts to throwing up my hands in the face of the mystery of why others act as they do.  Maybe this is related to the major importance I place on Theory of Mind — I wanted to attach a link to the phrase “Theory of Mind” there, but I haven’t written that post yet; for now, this article provides an introduction.

But I’ve come to realize that although my habit of interpreting the motives behind a lot of questionable actions charitably might be described as applying a Principle of, well, Charity, that doesn’t work as a unified explanation of my full mindset in dealing with other people.  I’ve become aware that my first priority is not necessarily to be charitable or sympathetic, or to assume the best, or to give everyone the benefit of the doubt all of the time; it’s to understand.  This makes some objective logical sense: after all, if one’s ultimate goal is to know the truth, then full understanding rather than a bias towards believing positive things seems like the way to go.  And so even though the celebrated Principle of Charity is obviously something I’m generally in favor of, it may not most closely reflect my personal creed.

II. The best of people and the worst of people

One of the difficulties in applying the Principle of Charity all the time — and again, this isn’t exactly a rebuttal of the original notion so much as a doubt I’m raising about the general mindset that comes with it — is that it can sometimes be tricky in practice to fully apply it to multiple sides of an issue at one time.

Suppose you are a relationship counselor and Alex and Beth are in your office, each explaining their side of a conflict which threatens to destroy their relationship.  Alex is very angry with Beth for having cheated on him.  Beth explains that to some extent they had always had an open relationship.  Alex disputes Beth’s interpretation of exactly what kind of “openness” they had actually agreed to in the relationship.  Beth disputes Alex’s interpretation of this as well as to what degree her behavior constituted “cheating”.  There is some disagreement on concrete physical events and exactly what was said or done when, but more of the disagreement is over interpretations of things that had been “understood” between Alex and Beth.  Your job here, inasmuch as it involves directly resolving the conflict rather than just facilitating better communication between your clients, is tricky.  Applying charity by assuming the most reasonable possible motives behind each person’s point of view seems like a good idea and may be sufficient to fully resolve the problem.  But depending on the circumstances, it may ultimately lead to contradictions: maybe the more charitable you are in interpreting Alex’s words, the more uncharitable you are forced to be towards Beth, and vice versa.  Maybe adopting a model of one (or even both) of them as just a manipulative jerk ultimately fits the evidence better than being as charitable as you can to both of them just up to the point of reaching a complete impasse.

That illustration was kind of vague and maybe not even that realistic, so let’s move from hypothetical personal situations to actual political ones.  For as long as I’ve been following politics, I’ve deliberately avoided demonizing politicians.  Yes, they generally don’t come across as the best of people, but maybe one really has to act with some level of dishonesty in order to make a difference through the political process.  If a politician stood on a platform I strongly disagreed with, I assumed they just held different values or priorities from mine or interpreted facts differently from the way I did (or had access to different sets of facts), rather than assuming that their stance was based on malice.  I figured that if only everyone treated these figures as charitably as I did, then our political discourse would become far more productive.

Then along came a certain non-politician political candidate whose apparent moral bankruptcy evaded all of my early attempts to apply charity.  That man is now the president-elect of the United States.

(I’d like to mention here that I had the beginning of a draft of this essay sitting in my WordPress account, bearing the current title, before I even started writing my recent post on the rationality of voting and therefore well before the election.  I was already planning to bring up Donald Trump.  Then, with the election rapidly approaching, I decided to hurry up and write the essay about voting in time to publish it before the big day.  I figured I would finish this post next and apologize for bringing up Donald Trump, since obviously everyone would be sick of hearing about him following Hillary Clinton’s victory.  But the election didn’t quite go as I foresaw, and we’re all going to be constantly hearing about Donald Trump for a long time to come whether we want to or not, so what the heck.)

Anyway, as the long campaign season unfolded, I found myself less and less able to excuse Mr. Trump’s outlandish remarks, even though my initial instinct had always been to treat him with just as much charity as I had always given to every other candidate.  I had to ask myself, if I had no particular bias against him, why did I appear to be treating him differently from almost everyone else?  And then I realized that it wasn’t really charity that I had been employing to evaluate other political candidates: it was a determination to understand them as completely as possible.  And with Mr. Trump, I had been embarking on the same quest: I wanted to see the inner workings of his mind and exactly what made him speak and act in the ways that he did.  And the model that began to form was that of an ignoramus who held no serious convictions on anything except for his own desire to seek glory through general bullying behavior while feeling vindicated by every success along the way, however absurd.  Now under this model, certain uncharitable interpretations became inescapable for me.  When he made a quip about what those second-amendment people might do if Clinton became president, was he really just joking about how that crowd is just really strong and determined when it comes to fighting for their second-amendment rights?  Could he really have been innocently confused due to a bad earpiece when asked how he felt about David Duke’s support of him?  Did he really mean [insert a dozen other things here]?  Come on.

If I continued to apply charity by accepting every single one of Trump’s explanations for every reprehensible thing he said, it would somehow feel like a violation of common sense.  And eventually it might lead to much dicier issues.  I’m not saying that charity towards Donald Trump necessarily directly implies anti-charity elsewhere, but it does kind of seem to go hand-in-hand with uncharitable interpretations of his detractors’ criticisms of his words and actions.  Scott Alexander made some good points in his recent Slate Star Codex post following Trump’s victory, but a lot of it struck me as an effort to bend over backwards to take as charitable an attitude as possible towards our president-elect, which ironically resulted in rather uncharitable interpretations of some major anti-Trump talking points.

Note that today I don’t care to actually analyze and defend my beliefs on any of these features of our recent election and its aftermath — to do so would require another post of its own, longer than this one.  The reader is free to disagree with me completely, but I ask them to nonetheless accept my reality regarding Trump as a hypothetical situation which illustrates something about the limits of the Principle of Charity.  A lot of what I took for an instinct to be charitable was actually an instinct to be empathetic, and while a lot of the time that results in positive assessments of people, or at least excuse-making, sometimes it results in my realization that their motivations are actually reprehensible and that they don’t deserve excuses.  Charity is always beneficial to its object (while potentially to the detriment of other parties involved in the same debate), but empathy can cut both ways by exposing the best of people and the worst of people.

III. The risks and rewards of empathizing

I propose that we reform our Principle of Charity into a Principle of Empathy.  This Principle of Empathy is not a repudiation of the old Principle of Charity, but rather an evolution of it, one which will lead us closer both to objective truth and to the most understanding possible society.  And given recent events which threaten to polarize our discourse even further, I believe that the goal of striving to be empathetic will be, if anything, more difficult but also more crucial than ever going forward.

I don’t claim that being highly empathetic on a personal level is without its risks.  I have reason to imagine that I operate on incredibly high levels of empathy, perhaps abnormally intense levels.  I’ve noticed that this is often not only to my detriment but to the detriment of those around me.  For instance, if the suffering of someone close to me is too much for me to handle, so that I feel forced to shut them out, then I’m really not being as good a companion to them as if I provided support while managing to remain stronger and less affected by their adversity than they are.

I also see risks in publicly defending others through empathetic reasoning, which is one reason why thus far I’ve generally stuck to empathizing with them in my own mind or behind their backs.  It can become very delicate to stand up for someone on the basis of what you perceive to go on in their minds, both their strengths and their weaknesses, without coming across as a totally condescending prick.  Compare an attack of “What Bob did is completely inexcusable because of A, B, and C” to a defense that sounds like “What Bob did was wrong, but I can understand how he did it given that he’s been through X and Y and this appears to have resulted in him lacking the emotional strength to face up to Z.  Even though the perfectly rational decision would have been W, it was evidently really hard under the circumstances for him to be rational and so he made the wrong choice.  Please show him some forgiveness.”  I imagine that the Bob here might actually feel more angry and hurt by the defense than by the attack.  (Or if one is using the flip side of empathy to instead condemn Bob for sinister motives, he would probably be angered more by this type of condemnation than by an argument based in the external fact of his action having been wrong: “How dare you assume that you know me and the way I think and feel!”)

And yet, I see both of the issues described above as ones of execution only.  For the former, I have to learn how to feel empathy in the most productive way possible; for the latter, one has to gain the skill of producing diction that conveys a tone of genuine solidarity rather than condescension.  My viewpoint in theory remains unyielding: it is the duty of each of us to go forth and empathize!

Obligatory election-day post on the rationality of voting

[Content note: Again, the title pretty much says it all.  Minor discussion of religion-inspired ethics.]

There are a number of rhetorical situations where I see recurring patterns of what feels like obviously fallacious reasoning and have learned that trying to convince someone who doesn’t instinctively sense that same pattern will lead only to frustration on the part of both parties.  But in many cases, I have discovered through the rationalist community a group of people who all seem to acknowledge the same underlying issues, even if there’s plenty of healthy disagreement on exactly where and to what extent those fallacies are being committed and as to what antidote should be applied.  Some of these things I’ve even tried writing about in my own words, such as the mistake of confusing causal agency with moral responsibility in multivariate situations or the subconscious tendency to not acknowledge inconvenient hypotheses.  I can’t exactly take a poll of how everyone reacts to these rationalist topics that I bring up, but it certainly appears that most people who are interested in rationality and have the patience to engage in discussions of them are in rough agreement despite perhaps disagreeing with how I describe or apply things.  It hasn’t proven controversial to claim things like “There’s a fundamental problem with how people assign moral blame in situations where more than one party created a disaster” or “One shouldn’t shun inconvenient thoughts before they have a chance to fully form” or even more philosophically contentious positions like “By debating the degree of ‘free-ness’ of certain actions rather than what our reaction to them should be, we are asking the wrong question.”

I have recently discovered that such is not the case when it comes to my rationality-motivated objections to how many people think of voting.

A few months ago, on a Slate Star Codex open thread, I brought up my contention that people often seem to abandon consequentialist utilitarianism when it comes time to vote.  I posted the following comment:

I’d like to put in a request for a post (preferably sometime between now and the election) on the motives behind abandoning consequentialist utilitarianism when it comes to voting. It seems like most people accept consequentialist utilitarianism as a matter of course for most choices, but then treat voting almost as a mode of self-expression.

In case it’s not clear, I was alluding here to my long-time frustration with those who say they’ll vote only for candidates they positively like, rather than for candidates who are able to win or the lesser of two evils, etc.
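To make concrete the kind of consequentialist arithmetic I have in mind, here is a toy sketch in Python.  Every number in it is invented purely for illustration, and the model itself — expected value of a vote ≈ probability of casting the decisive vote times the aggregate utility gap between candidates, minus the cost of voting — is just the standard back-of-the-envelope formulation, not a claim about any real election:

```python
# Toy consequentialist voting arithmetic; all inputs are made-up numbers.
# A vote changes the outcome only in the unlikely event that it is decisive,
# so its expected value is P(decisive) * (utility gap), minus the cost of voting.

def expected_value_of_vote(p_decisive, value_gap, cost_of_voting):
    """Expected net utility of casting a vote for the better candidate."""
    return p_decisive * value_gap - cost_of_voting

# A tiny chance of being decisive, but an enormous stake shared by millions:
ev = expected_value_of_vote(p_decisive=1e-7,     # hypothetical odds of a tie
                            value_gap=1e10,      # hypothetical aggregate utility gap
                            cost_of_voting=10)   # an afternoon at the polls
print(ev)
```

The interesting feature of this sketch is how completely its sign depends on guessed inputs: shrink the probability of decisiveness or the utility gap by a couple of orders of magnitude and voting comes out net-negative, which is part of why even committed consequentialists in that comment thread disagreed about what the framework actually recommends.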

At the time, I was assuming that everyone would basically agree with me but point me towards a good explanation or at least a better way of phrasing the problem.  To my surprise, I found that my assumptions were completely mistaken regarding the general rationalist community sentiment when it comes to voting, or even when it comes to consequentialist utilitarianism.  As one commenter said,

If you think that people are “abandoning consequentialist utilitarianism when it comes to voting”, then that doesn’t just mean you’re completely confident you’re right about the consequentialist utilitarian consequences of voting, it also means you think that reasoning is so obvious that you expect everyone else to think the same way. This is absurd. Even in this thread there is a broad range of opinions on this matter.

I learned a lot from the responses I got to the above-linked comment, and other online discussions on optimal voting strategies that I’ve witnessed since have further opened my eyes to the variety of viewpoints rationalists hold on this general topic.

A lot of the crux of our differences can seemingly be traced back to different takes on variants of Newcomb’s problem.  I decided after the aforementioned discussion on Slate Star Codex that I would research Newcomb-like problems and try to further cement some sort of opinion on them, along with solid justification, in time to write an incisive, well-argued, polished blog post on the rationality of voting before the presidential election.  However, I failed to do my homework here and have not made much progress on understanding the different points of view on these topics.  Therefore, once again I don’t quite have the incisive, well-argued, polished blog post that I wanted and have decided instead to make do with an attempt to succinctly write down my current thoughts, maybe from a more personal angle.  Maybe this is for the best, because sometimes I suspect that delaying indefinitely in an effort to do the ideal amount of research and thinking would lead to me writing something that still falls short of feeling ideally incisive, well-argued, and polished, while I often wind up happier with my more personal, thoughts-in-progress writing anyway.
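For readers unfamiliar with Newcomb’s problem, the setup can be summarized with a quick expected-value sketch.  The payoffs below are the traditional ones ($1,000,000 in the opaque box if the predictor foresaw you taking only it, $1,000 always in the transparent box), and fair warning: the naive calculation here quietly builds in the “evidential” style of reasoning, which is exactly the move that causal decision theorists dispute — so this is an illustration of where the disagreement lives, not a resolution of it.

```python
# Naive expected-value sketch of Newcomb's problem (standard payoffs).
# The predictor is correct with probability `accuracy`.

def ev_one_box(accuracy):
    # Take only the opaque box: it holds $1,000,000 iff the predictor
    # correctly foresaw one-boxing.
    return accuracy * 1_000_000

def ev_two_box(accuracy):
    # Take both boxes: the $1,000 is guaranteed, and the opaque box
    # is full only if the predictor erred.
    return 1_000 + (1 - accuracy) * 1_000_000

for p in (0.5, 0.9, 0.99):
    print(p, ev_one_box(p), ev_two_box(p), ev_one_box(p) > ev_two_box(p))
```

Under this naive calculation, one-boxing wins whenever the predictor’s accuracy exceeds roughly 50.05%; the causal decision theorist replies that the boxes are already filled, so two-boxing gains $1,000 no matter what.  Analogous reasoning about whether one’s choice “correlates” with the choices of similarly-minded voters is, as far as I can tell, one of the fault lines underneath the voting disagreements I ran into.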

So here are the main issues which seem to play into the question of what it means to vote rationally, along with my and other people’s thoughts on them.

I. The assumption of utilitarianism

I’ve embraced utilitarianism as the only reasonable source of ethics since I was old enough to ask myself what my source of ethics was (which I guess was around high school or so).  I realized pretty quickly on discovering the rationalist community that utilitarianism, specifically consequentialist utilitarianism, seems to be the dominant belief within it.  Results from surveys such as this one seem to bolster this impression, but note that this survey shows 60% of the participants as being consequentialists, which leaves a lot of room for other views to be influential.

In the aforementioned comment thread alone, there was plenty of argument against my assumed consequentialism, which if nothing else convinced me that there are many more people with a commitment to rational thinking who don’t find it obvious than I had imagined.  Unfortunately I don’t quite understand most of these people’s points as arguments for a different, coherently-stated system of ethics.  It seems that many want to point out that humans do not in reality make most of their decisions according to consequentialism.  Most decisions, they claim, are impulsive and depend mainly on what “feels better” at the spur of the moment.  Maybe the reason why a lot of people vote is simply that it gives them a vague feeling of power in having a voice in their democracy.  In other words, they believe in the advice of journalist Bob Schieffer’s late mother.

My first reaction to this is that here, by claiming that consequentialism isn’t valid because it’s not how people actually make decisions, these commenters seem to be advocating a purely descriptive definition of morality.  For me, the obvious problem with this is that it ultimately leads to confusion between moral behavior and the way people actually behave on average.  Here I’ll leave it to the reader to insert whichever go-to example they prefer of crimes against humanity committed at a particular place during a particular time period in order to show that this notion is absurd.

But maybe nobody is claiming that common human decision-making behavior actually determines which ethical framework is valid.  Maybe their point is that the tendency of folks to act according to (non-utilitarianism-based) impulse in most aspects of their lives shows that the way they think about voting doesn’t contradict their ethical worldviews in the way I brought up in the open thread comment.  After all, if humans don’t in fact generally rely on consequentialism to make their decisions, then there’s no apparent contradiction when they say they’ll vote in whichever way makes them feel better or for whichever candidate better reflects their values.

To respond to this, I have to go back to the ultimate reason why I identify as a utilitarian, which I’ll do my best to explain briefly even though I can’t give an ironclad argument in its favor.  (Granted, one shouldn’t expect a complete “proof” of any ethical system, since concepts of “rightness” and “wrongness” can’t be introduced without some axioms.)

The best personal explanation I can come up with is that utilitarianism seems like the only system for deriving ethical statements that has a completely coherent and self-contained definition, modulo the somewhat open-ended concept of “well-being”, or utility.  Therefore, when we humans consciously justify our decisions, we tend to imply in our explanations that we made the choice which led to a net increase in utility.  When we argue about whether our decisions were right or wrong, it boils down to conflicting opinions about which outcomes actually increase/decrease utility, even as the assumption that we all want to maximize utility is taken for granted.  So even impulsive decisions like choosing to stay in bed an extra twenty minutes after one was supposed to get up are either not justified at all (“I shouldn’t have stayed in bed late, but my tiredness just sort of took over”) or justified as having increased utility (“I stayed in bed late because it felt better for me, and it was worth it because of X, Y, and Z”).  I’m not saying that such decisions are made in the first place according to utilitarianism.  I’m saying that if they are consciously justified afterwards, they will be implicitly justified as actions which were likely to result in the greatest net change in well-being.  In my opinion, this is because such justifications form the only chains of reasoning which remain completely meaningful.

Yes, some people very deliberately take a non-utilitarian stance.  For instance, many believe in a god or gods as the source of all morality, and hold that “God forbids it” is reason enough not to do a particular thing.  But when pressed on exactly why God would forbid that particular thing, the chain of reasoning must either stop at “He/She/They has mysterious ways” or continue with some sort of argument which appeals to something apart from the divine (“God says that stealing is wrong!  Why does He forbid it?  Well, how would you like to be robbed of things which you worked hard to get?  [etc.]”).

So yeah, I do think that most people, when they are calmly thinking over their own choices and not in the midst of acting impulsively, instinctively rationalize what they do in utilitarian terms.  They choose not to steal because it would do harm to the person stolen from, as well as contribute to societal instability where private ownership is concerned.  They choose to recycle because it’s better for the planet which in turn benefits every living thing on it in the long run.  They might even prefer a certain political candidate because their policies would be better for the economy and therefore increase the well-being of people within their constituency.  So my initial concern still stands: why do so many seem to back away from this sort of rationalization when considering their voting behavior?

(I’m happy to admit by the way that I see certain limitations in utilitarian reasoning, especially when it comes to issues involving creation or suppression of life.  Therefore, I don’t believe that this system of ethics provides good answers to questions relating to, for instance, abortion, or population control.  I’m not sure whether that means that I’m not fully a utilitarian, or whether one could derive some enhanced set of utilitarian axioms which would solve these problems.)

II. The assumption of one-boxer-ism

A lot of the rationalists I’ve been hearing from do seem to be on the same page as I am with regard to consequentialist utilitarianism, but still disagree with me on the purpose of voting.  They say that if the only reason for voting were to directly influence a current election, then there wouldn’t be much reason to vote from a utilitarian standpoint, since your one vote has an astronomically low chance of single-handedly swinging an election.  “All right,” one may ask them, “so why do you think so many people do take the trouble to vote, and do you feel that they are being reasonable in doing so?”  One plausible answer may be that voting still serves a practical purpose apart from directly determining elections: elections also function as polls of the desires of the people.  If you vote for the candidate whose values you truly agree with, even if they are not one of the main candidates, that helps to send a message to the community of politicians which will surely do some good in the long run.

While I agree that voting does serve this purpose, and it might even be my main consideration if for instance I lived in a solidly non-swing state of the US, I still hold that a lot of the time it is trumped by the purpose of directly swinging current elections for the reason which I articulated in the afore-linked comment thread:

[P]eople mostly seem to understand the whole Prisoner’s Dilemma idea that if you decide to do something for a reason, then you should assume that many other people are making that same decision for that same reason, and that en masse voting is extremely effective.

In other words, I strongly believe, or at least some instinct inside of me compels me to strongly feel, that I should act in such a way that the best outcome might be brought about if all other like-minded people also act in that way.

It turns out that attempting to justify this strange conviction that one should act as one would like all like-minded people to act is tricky and runs into potential paradoxes.  This conundrum is encapsulated in Newcomb’s Paradox (of which the famed Prisoner’s Dilemma is a variant).  Like I said above, I haven’t gotten around to researching the volumes of argument on both sides of this problem.  I have read Eliezer Yudkowsky’s introduction, and someday I hope to take a look at his lengthy paper on it.  I would worry that only having read Yudkowsky’s analysis might have biased me towards his one-boxer position, except that it’s sort of clear that deep down inside I’ve been a one-boxer all along.  This is because the one-boxer position is the one corresponding to the “cooperate” choice in the Prisoner’s Dilemma, or the “vote so that like-minded people also voting that way would achieve the best outcome” choice in our Voter’s Dilemma.  And even though on close inspection it seems very non-trivial to justify, I see now that my whole life I not only felt convinced of it down to my bones but had been assuming that all reasonable people felt the same way.  In other words, it never occurred to me that anyone would argue against the notion that voting is good on the individual level because there are positive consequences when large groups of people vote a certain way, just as littering is bad on the individual level because there are negative consequences when large groups of people litter.

Currently the topic of Newcomb-like problems occupies roughly the same position for me personally as the topic of free will did 8 or 10 years ago: it’s a problem for which I feel some strong intuition, but whose implications I haven’t yet managed to fully wrap my mind around or distill into a clear position, even though I firmly believe they are highly relevant to real life.  How to vote rationally is an obvious example.  See, for instance, this article, which more or less argues a more sophisticated version of my position.

But yeah, I feel this way on an instinctual level, so deeply that I’ve been willing to put in significant time and effort in figuring out how to vote from abroad and why my faxed-in ballot apparently wasn’t legible on the first take and so on… all out of this weird faith that my willingness will somehow “make” other people currently in my situation find the same willpower.

But intelligent people don’t all think the same way in Newcomb-like situations.  This fact helps to explain a lot of attitudes about voting which appear irrational to me, and thus does give a partial answer to my original query.  Of course it does not help me to truly understand how such attitudes aren’t still, well, irrational.  Understanding that may require me to change my strongly-felt-but-vague positions on things like Newcomb’s paradox.  I don’t know whether this is an impossible feat or whether a clever enough argument (along with my becoming a clever enough person) would be enough to accomplish it.

III. “Immoral” voting

There is another small aspect of the “vote only for candidates you actually like” attitude where I think I can offer a little more insight.  I have noticed that some people go beyond just saying they don’t want to vote for any candidate that doesn’t meet their moral standards; they claim in fact that it’s downright wrong to vote for someone you don’t genuinely like.  I’ve heard language like “going against my morals” used to describe holding one’s nose and casting a ballot for the lesser of two evils, sometimes by those who choose to do it anyway.

I first want to be a little on the pedantic side and fault those who think that lesser-of-two-evils voting is immoral but wind up doing it anyway for being inconsistent.  Technically, I don’t see actions as being absolutely ethical or unethical in and of themselves; it is choices of certain actions over other actions or inaction that can be labeled as “right” or “wrong”.  If something is immoral, then that means that one shouldn’t make the choice to do it, period.  Or, to state the contrapositive: if one chooses to do X, then that means that X is more moral than other available actions or inaction, and therefore one’s choice was moral.  And although this criticism doesn’t directly apply to those who believe that voting for the lesser of two evils is immoral and then don’t do it, I think it still underscores some of the fuzzy thinking behind a lot of the sentiment against lesser-of-two-evils voting.

Secondly, in trying to put myself in the mind of someone who regards voting for a detestable candidate in order to oppose someone even worse as “going against their morals”, it occurred to me that there’s some sneaky variant of the “causal agency implies blameworthiness” (related to “is-versus-ought”) fallacy going on here which I made a point of in my post on “multivariate utilitarianism” (you have to scroll all the way down to subsection III(D), sorry).  It’s tempting to feel that if you voted for a bad presidential candidate, then you share some portion (however tiny) of the blame for them winning.  After all, you made a free choice which contributed to an unpleasant result which would not have occurred if you and other like-minded people hadn’t made that choice.  But that’s ignoring the fact that a decision between two undesirable options was foisted on you by circumstances, circumstances which were caused by other parties.  And so the brunt of the blame shouldn’t necessarily fall on you.  In fact — and this is one key difference between this situation and the ones I discussed in the post linked to above — you had no better options, so really none of the blame should fall on you.  Still I suspect that the idea that it’s inherently immoral merely to vote for an unattractive candidate has some of the same misconceptions underpinning it as the whole “causal agency implies blameworthiness” thing has.

IV. My endorsement on how to vote in 2016 (and in general)

It’s finally time to stop beating around the bush.  I chose the words of this section heading carefully: I want to describe how I think one should vote in elections in general (at least in countries like America which have a strong two-party system), not whom to vote for.

Here at Hawks and Handsaws, we are firmly against imposing our own personal political convictions on readers.  Therefore, I will illustrate an example application through a purely hypothetical situation.  Let’s say that we have a presidential election in which one candidate, whom we will denote by H, is a shrewd and very able politician mired in a corrupt political establishment who has a lot of potential skeletons in their closet and who is somewhat hawkish and not especially idealistic, in contrast to another politician we will call B who was their main opposition in their party’s primary election.  Let’s say that the opposing candidate in the general election is someone whom we will call D, who has never been a politician and generally proves themself to be a complete buffoon by repeating mostly-nonsensical platitudes with almost no actual substance behind them which yield not the slightest evidence that they understand anything about the challenges faced by their countrymen, who might be more hawkish than their opponent but you can’t really tell because their platform seems to be all over the place, and who on top of that has risen to popularity within a certain subset of the electorate by repeatedly producing outlandish bluster seemingly calculated to fan the flames of anger and bigotry.  Let’s say that you dislike both candidates H and D, but have to admit that D would be a considerably worse president than H would, although you would have strongly preferred B.  Then I recommend the following:

  1. Rewind to the primary election that took place in your state between H and B.  You should vote for B in that primary if and only if they seem like the best choice after taking several things into consideration, including B’s likelihood of beating whomever the opposing party nominates, as well as B’s probable effectiveness as president.  You should not base your choice purely on the fact that B seems like a better person with better values.
  2. In the general election, no matter how much you may hate H, as long as you’re convinced that D is substantially worse, you should vote for H unreservedly and with a clear conscience.  No voting for third-party candidates even if their values align with yours much better than H’s do.  And no avoiding the polls altogether.  As a general rule, whenever you perceive a significant difference in attractiveness of candidates in an election, from the one-boxer utilitarian standpoint, voting is always imperative.
    (Note: this general idea is often articulated as “remember, a vote for a third-party candidate is a vote for D”, which is incorrect not only literally but also in the sense that really, a vote for third-party is equivalent to half a vote for D or to throwing away one’s vote altogether.  By symmetry, members of the pro-D camps will often claim that “a vote for third-party is a vote for H” when again it makes more sense to consider it as half a vote for H.  The fact that both can’t be true simultaneously is itself proof that neither should be taken quite at face value.  But obviously I agree with the underlying sentiment.  (Further note: of course I’m making the simplifying assumption in all of this that all we care about is directly affecting the current election; as I’ve acknowledged above, there are times when it makes good sense to vote third-party.))
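The margin arithmetic behind the “half a vote” claim can be made concrete with a toy calculation (the vote totals here are purely hypothetical, chosen just for illustration):

```python
# Toy illustration of the "a third-party vote is half a vote for D" claim,
# measured by its effect on H's margin over D.  All numbers are hypothetical.

def margin_h_over_d(votes_h, votes_d):
    """H's lead over D; third-party votes change neither total."""
    return votes_h - votes_d

base_h, base_d = 100, 100  # hypothetical tallies before your ballot is cast

margin_if_h = margin_h_over_d(base_h + 1, base_d)    # you vote H    -> +1
margin_if_3rd = margin_h_over_d(base_h, base_d)      # third party   ->  0
margin_if_d = margin_h_over_d(base_h, base_d + 1)    # you vote D    -> -1

# Relative to voting H, a third-party vote costs H one unit of margin,
# while a vote for D costs two units -- hence "half a vote for D".
cost_of_third_party = margin_if_h - margin_if_3rd  # 1
cost_of_voting_d = margin_if_h - margin_if_d       # 2
```

Relative to voting H, switching to a third party moves H’s margin by one unit while switching all the way to D moves it by two, which is all the “half a vote” bookkeeping amounts to.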

The purpose of voting is not to serve as a form of self-expression, or of cheering for the team that you like.  It is not (in America, at least) even primarily a way to communicate to the political world what your ideal candidate or platform would be, except in certain circumstances where the overall result is a foregone conclusion.  The purpose of voting is to influence which individual out of a very small group of finalists will be elected to a position of significant power.  Yeah, I know that what I’m preaching is based on convictions which I haven’t been able to fully justify.  But even in the absence of solid argumentation, I’m still allowing myself to stand on my soapbox and proclaim how I feel about voting, on the eve of what looks to me like a pretty crucial election for America and for the world.

And with that, I leave you with a variation on the wisdom of Bob Schieffer’s mom: go vote; it’ll make you feel like a good one-boxer consequentialist.

My philosophy on how to create family auxlangs

[Content note: this essay detailing my opinions on how to go about a certain type of artificial language construction also turned into an incomplete exposition of two of my conlangs-in-progress.  It therefore became quite long.  Many of the invented words I use in these examples were decided on the fly and are nowhere near set in stone.]

I. What I mean by “family auxlangs”

Today I want to switch from my usual topics to conlanging, which has long been a hobby of mine.  My favorite constructed languages to work on are the sort which I imagine to be naturally spoken languages in some alternate universe.  I particularly enjoy creating families of languages which have their share of illogical quirks, partly explained through their own histories, in the way that real languages do; these are often called “artlangs”.  However, I’m also very interested in the idea of “auxlangs” — that is, languages constructed with no pretense at looking natural, but for the purpose of easing international communication.  The most famous auxlang of all time is, of course, Esperanto, but many more have been created in an effort to improve upon its effectiveness.  Creating a world auxlang seems like a rather daunting task, but I’ve always thought it would be fun to create what I call a “family auxlang”: an auxlang intended to facilitate communication between speakers of languages in a particular family (Romance, Germanic, Slavic, Bantu, etc.).  I couldn’t find an existing name for this type of conlang.  I did recently run across the term “zonal auxlang”, but that seems to imply slightly different intentions: that the language be learned by everyone in a certain geographic area, regardless of whether all the languages spoken there belong to a particular family.  (I would hate to have to invent a zonal auxlang for Scandinavia — imagine having to accommodate characteristics and vocabulary of both North Germanic languages and Finnish!)

In this endeavor, the two language families I’ve focused on are Romance and Germanic, because they are the only families of languages about which I have extensive knowledge.  Creating a Romance auxlang seems somewhat easier, given that the Romance languages are quite close-knit and have a common ancestor which is not prehistoric; we have an extremely clear description of Latin and don’t need to rely on much guesswork there.  The most obvious strategy for creating a Romance auxlang is to concoct a much simpler and easier-to-learn version of Latin, and this is indeed more or less what has been done over and over.  Many Romance auxlangs have been put forth, the most famous of which is perhaps Interlingua.  I always had some disagreements with how things are done in Interlingua, particularly the lack of grammatical gender and verb conjugation, although I understand the thinking behind this lack of grammatical complexity (to be fair, Interlingua might not really fall under my definition of “family auxlang”, since it was designed to be very accessible to speakers of English, which is not a Romance language and lacks grammatical gender and complicated verb conjugation).

I find the art of creating Germanic auxlangs to be more interesting and less clear-cut.  Years and years ago (I can’t remember how many), I ran across a Germanic auxlang for the first time.  I’m pretty sure it was Volkspraak or something with a very similar name.  Anyway, I just remember looking through the proposed grammar and vocabulary and thinking over and over that there were so many decisions in making this language that I disagreed with and would do differently.

(To be clear, whenever I refer to something conlang-related that I “disagree with”, I don’t mean that I’m actually opposed to someone else doing that thing or that my overall attitude is one of looking down upon it.  In the particular realm of auxlangs, it may well be that a decision a creator makes that I “disagree with” is objectively more helpful to their goal, and/or that in the first place their goal is different from what mine would be.  Our disagreement is only in the simple sense that I would do something differently, perhaps mainly for aesthetic reasons.)

Anyway, I remember thinking, back when I first looked over Volkspraak, that if I ever got Project Get Online going and had my own conlang website, I would write a little essay outlining the guidelines I would follow in creating family auxlangs, using my ideas-in-progress for a Romance auxlang and a Germanic auxlang as examples.  As it is, Project Get Online only gradually began to gain steam in the last couple of years, culminating in this blog.  While I still don’t have a website for my conlangs (which have mostly fallen into disrepair anyway), I thought I would write down my ideas on family auxlangs anyway.

I should mention that in preparing to do this, I decided to look up Germanic auxlangs again to see what was new.  I was surprised at how few of them can be found online, given that the internet continues to grow and the hobby of conlanging is booming.  I did find Volkspraak again, except now it’s broken up into a number of separate projects (“dialects”), because apparently it was always a group effort but there are (unsurprisingly) a lot of disagreements over how things should be done.  I was intrigued to see that they do tend to follow a lot of the guidelines I’m about to suggest, and I don’t think any one of these “dialects” is as “bad” as I remember Volkspraak being when I first discovered it long ago.  The only other Germanic auxlang I found which really intrigued me is Frenkisch, mainly because it’s well-developed with a nicely laid-out grammar which you can access here.  While Frenkisch looks beautiful in its own way, I’m pointing to it as an interesting example of a conlang that definitely does not do things the way I would do them, on several fronts (starting with the relatively obscure vowel sounds, French-inspired orthography, and relatively high level of morphological complexity).

With that out of the way, let me dive into my several basic guiding principles with examples from my not-yet-fully-invented Romance and Germanic auxlangs.  These conlangs are nowhere near fully fleshed out, and every example from them is extremely tentative and only reflects the general ideas floating around in my head.

II. Principles of family auxlanging

1) Avoid complexity if it is not shared by all (most) languages in the family.

At first glance, this is a no-brainer: if part of our goal is to create a language which is easy for people to learn, we should avoid needless complication.  Nearly all auxlangers make at least some substantial effort in this direction even if it isn’t always their top priority.  But it becomes a bit unclear how far this principle should be taken as it begins to compete with the other principles below.

Grammatical gender is a feature not shared by all Germanic languages (English and Afrikaans don’t have it).  So there is no grammatical gender in my Germanic auxlang.  Although Latin had many noun cases, none of the modern Romance languages decline their nouns for case at all (except Romanian, the “dark horse” of the Romance family, and the only one which I don’t know very well), instead using prepositions (something like de “of” to indicate possession, for instance).  So there is no declension for noun cases in my Romance auxlang.

Most of the Germanic languages, meanwhile, have lost their case declensions, except for the possessive case which is usually marked by an -s ending.  As I understand it, even this is falling out of use in some of the Low Germanic languages like Dutch, but probably even speakers of those languages are somewhat familiar with it and wouldn’t have too much difficulty with learning a simple rule for possessive endings.  Thus, in my Germanic auxlang, there is no case inflection for nouns except to indicate the possessive case, which I’ll explain under (2) below.

Furthermore, while verbs conjugate (a little) for person and number in many of the modern Germanic languages, some of the Scandinavian languages as well as Afrikaans lack such conjugation entirely.  Moreover, even the Germanic languages which do conjugate their verbs extensively don’t have the habit of dropping subject pronouns, and I don’t think their speakers would particularly miss verb conjugation if it disappeared.  Therefore, I see absolutely no reason to include it in my Germanic auxlang.  Several Scandinavian languages (like Swedish and Danish) tend to end their present tense indicative verbs in -r, so why not end such verbs this way in my conlang as well?  It’s a simple enough rule that it won’t be too difficult for non-Scandinavian-language speakers to learn.  If verken is the infinitive for “work”, then we have

ek verker, du verker, he/se verker, vi verker, ji verker, di verker
— I work, you (sing.) work, he/she works, we work, you (pl.) work, they work

This idea extends to phonetics as well.  Latin had long and short versions of the vowels a, e, i, o, and u.  The modern Romance languages generally lack this distinction, either by having lost it altogether (in the case of long vs. short a) or by transferring the distinction to diphthongs.  As far as I know, Portuguese may have simply lost the long/short contrast of a, e, and o without diphthongization.  Spanish has greatly simplified the vowel system to just /a e i o u/ with no contrast of long/short quality, and Italian and Portuguese have nearly done the same (with a little added subtlety).  Therefore, it makes sense to me that a Romance auxlang should have the simple vowel system /a e i o u/ as in Spanish, with a few diphthongs allowed but kept to a minimum.

With the Germanic languages, things are again more complicated — all of them have a lot more than five vowel phonemes, and they mostly do have some kind of long/short distinction — so I feel that it’s necessary to have long and short forms of a, e, i, o, and u, and to allow some common diphthongs such as /ai/ (e.g. “stone” and “home” might be stain and haim).  By the way, there are several ways to indicate vowel length orthographically among the Germanic languages, but for the moment my favorite is acute accents on the long vowels; at least that’s easy to do on WordPress, so it’s what I’m sticking with for now: “house” is hús.

I see no need in either auxlang to include phonemic distinction of consonant length, e.g. Italian’s double consonants.  It just isn’t shared by most of these languages.

2) Include complexity if it is shared by all (most) languages in the family.

This idea is a little more tricky and controversial, and more likely to lead to disagreements between individual conlangers who are trying to follow it.

All languages of both the Romance and Germanic families inflect nouns for number — that is, they distinguish singular from plural.  So the respective family auxlangs should also have this feature.  We just need to make sure that it’s done with a form of suffixing which is easy to implement.  For the Romance auxlang, this is obvious: Spanish, Portuguese, and French all pluralize most of their nouns using -s.  So let’s make sure that all nouns in our Romance auxlang end in vowels (easy to do, given Romance phonology — if we get stuck, we look to Italian, which really does end all of its native nouns in a vowel).  This is a super easy rule to learn, even for speakers of languages like Italian which don’t pluralize their nouns this way.  Thus we have amico “friend”, amicos “friends”, casa “house”, casas “houses”, etc.  So far, nothing surprising.

Now you’ll notice that I said there would be no grammatical gender for my Germanic auxlang without saying what I would do for the Romance one.  That’s because I want it to have grammatical genders (masculine and feminine, not neuter of course, which didn’t survive much beyond classical Latin!)  Since all of the modern Romance languages divide their nouns into masculine and feminine, the principle of “include complexity if shared” applies here, and we should have an easily-identifiable distinction (based on Italian, Spanish, and Portuguese) between masculine and feminine nouns in the conlang.  One will usually be able to tell a noun’s gender by the vowel it ends in, unless that vowel is e; e.g. amico, libro, and cane are masculine, while amica, casa, and voce are feminine.  Moreover, I propose a system of definite (and indefinite) articles which reflect the gender and number of the noun they modify (unlike for the Germanic auxlang, where in view of English and Afrikaans it makes sense to have only one form de), as well as adjective inflections which match the noun inflections.  Thus, lo cane bono, los canes bonos for “the good dog”, “the good dogs” and la casa bona, las casas bonas for “the good house”, “the good houses”.
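To show how mechanical this agreement rule is, here is a minimal sketch in Python using the post’s tentative invented vocabulary; the gender of nouns ending in e, like cane, has to be supplied explicitly, since the final vowel doesn’t decide it:

```python
# Sketch of the proposed Romance auxlang's gender/number agreement.
# All vocabulary is tentative and invented; rules follow the post.

GENDER_BY_VOWEL = {"o": "m", "a": "f"}  # nouns in -e are ambiguous

def noun_gender(noun, explicit=None):
    """Read gender off the final vowel; -e nouns need it given explicitly."""
    return explicit or GENDER_BY_VOWEL[noun[-1]]

def noun_phrase(noun, adjective_stem, plural=False, gender=None):
    """Definite article + noun + adjective, all agreeing in gender and number."""
    ending = {"m": "o", "f": "a"}[noun_gender(noun, gender)]
    s = "s" if plural else ""
    noun_form = noun + "s" if plural else noun
    return f"l{ending}{s} {noun_form} {adjective_stem}{ending}{s}"

# noun_phrase("cane", "bon", gender="m")    -> "lo cane bono"
# noun_phrase("casa", "bon", plural=True)   -> "las casas bonas"
```

The point of the sketch is just that the whole agreement system costs the learner one small rule, which is why I claim it adds little real complexity.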

Now I get the impression that a lot of auxlangers disagree with this, and indeed this is definitely not how things work in prominent auxlangs like Esperanto or Interlingua.  A much bigger component of what we might call “the traditional auxlang philosophy” is to keep things as simple as possible in order to facilitate learning.  My reasons for (often) wanting to go against this principle when all languages in a family have a shared feature boil down to two things.  One: if the auxlang is only meant to be learned by speakers of languages in a particular family, then what’s the harm in including a feature already present in their native tongue?  (I admit this is quite a weak argument, but still claim that there’s not much harm if it’s executed properly.)  Two: some things, like uttering a Romance-sounding sentence without article-noun-adjective agreement, just feel wrong.  At least I strongly feel this way, even though I’m not a native speaker of any Romance language.  I remember squirming at the way the definite article as well as the adjectives in Esperanto end in -a while the nouns all end in -o; it just doesn’t feel right to say la bona libro for “the good book” instead of something like lo bono libro.  And information-theoretically, a rule saying that articles and adjectives must change for gender and number with endings that reflect the noun endings doesn’t add much complexity.  It would probably be quite difficult for, say, a native Spanish speaker to get used to not changing them, even though such a system might be a tiny bit simpler.

So, for similar reasons, I’m going to go a step further and proclaim that in the Romance auxlang that we’re developing, verbs should be conjugated for person and number.  Without verb conjugation, it just wouldn’t feel like a Romance language, and we wouldn’t be able to drop subject pronouns (which, to be fair, French can’t do, but French is an outlier in this respect).  Full verb conjugation is much trickier to pull off in an auxlang, and I probably wouldn’t attempt it if it weren’t for the fact that at least in Italian, Spanish, and Portuguese, the inflections look very similar.  We end up with something like this (in the same simplifying spirit, I’ve included just two verb classes, whose infinitives end in –are and –ere, a simplification of the three or more classes in actual Romance languages).

amare — “to love”               vendere — “to sell”

amo — I love                    vendo — I sell
amas — you (sing.) love         vendes — you (sing.) sell
ama — he/she/it loves           vende — he/she/it sells
amamos — we love                vendemos — we sell
amates — you (pl.) love         vendetes — you (pl.) sell
aman — they love                venden — they sell

I would of course try to minimize irregularities in verb inflection, but I’d make exceptions where regular conjugation would feel bizarre to speakers of any of the Romance languages (e.g. “to be”).  There will of course be other tenses, with full conjugations for each, but again they can be made much more regular and easier to learn than in any of the real languages.  In particular, I notice that Interlingua lacks a subjunctive, which I would prefer to include: it would feel very strange not to be able to use subjunctive constructions to express certain things in a Romance language.
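Since the regular paradigm above is entirely mechanical, it can even be generated programmatically.  Here’s a minimal sketch in Python (the endings come from the amare/vendere table above; the function name and data layout are my own):

```python
# Regular present-tense conjugation for the two verb classes above.
# Apart from 1sg -o, every ending is just the theme vowel of the
# infinitive (a for -are, e for -ere) plus a person/number marker.

PERSONS = ["1sg", "2sg", "3sg", "1pl", "2pl", "3pl"]
MARKERS = ["o", "{v}s", "{v}", "{v}mos", "{v}tes", "{v}n"]

def conjugate(infinitive):
    """Return the present-tense forms of a regular -are/-ere verb."""
    stem, theme = infinitive[:-3], infinitive[-3]  # "amare" -> "am", "a"
    return {p: stem + m.format(v=theme) for p, m in zip(PERSONS, MARKERS)}

print(conjugate("amare"))    # amo, amas, ama, amamos, amates, aman
print(conjugate("vendere"))  # vendo, vendes, vende, vendemos, vendetes, venden
```

That the whole paradigm fits in a dozen lines is exactly the point: a learner only memorizes the six markers once.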

Now we return to our Germanic auxlang: how do we inflect nouns here?  Well, it’s not quite so clear what a good plural suffix should be, but I’d go with –e (pronounced as a schwa); most nouns will end in consonants, but this can be suffixed even to vowel stems.  And, even though the possessive case is being phased out of a few modern Germanic languages, it just feels wrong not to include it, as long as it’s done using a very easy rule.  My choice at the moment is –es (or –’s) for the singular possessive, and –er for the plural possessive.  Thus, we end up with something like de hús for “the house”, but de dór de húses for “the door of the house / the house’s door” and de dóre de húser for “the doors of the houses / the houses’ doors”.
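These noun rules are simple enough to state as a one-line function each.  A toy sketch, using the hús example from the text (the function names are mine):

```python
# Germanic auxlang noun inflection as described above:
# plural in -e, singular possessive in -es, plural possessive in -er.

def plural(noun):
    return noun + "e"               # hús -> húse "houses"

def possessive(noun, plural_number=False):
    """Form the possessive of a singular or plural noun."""
    return noun + ("er" if plural_number else "es")

print(possessive("hús"))            # húses, as in "de dór de húses"
print(possessive("hús", True))      # húser, as in "de dóre de húser"
```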

As for inflecting adjectives to agree with nouns, I’m not so sure, but I’m leaning towards probably not.  If we do, we should definitely use the same endings as the nouns have, but I’m not sure this feels any more natural to a Germanic language speaker than no adjective inflection at all.

3) Allow multiple options if they are understandable to all

Sometimes it’s hard to know whether to err on the side of (1) (in the sense of avoiding needless complications) or of (2) (including special features if they are common enough).  For example, all Germanic languages have a long list of strong verbs which change a vowel in the stem rather than using dental suffixes to indicate the past tense or form the past participle (e.g. sing, sang, sung in English).  It would feel unnatural to most Germanic language speakers to use dental suffixes for many of these words — for instance, singed for “sang”.  So following (2), it seems as though we should put a class of strong verbs in our Germanic auxlang, where past tense and past participles are marked with certain vowel changes and must be learned separately.

But there’s a lot of inconsistency among the Germanic languages as to which verbs are strong or weak (or mixed), as well as exactly what the vowel changes are (which is natural, given that vowels are particularly volatile under phonological evolution).  Afrikaans doesn’t even have strong verbs; there’s not enough verb inflection to make for much irregularity at all outside of wees “to be”.  So including strong verbs in the auxlang doesn’t really seem to be in the spirit of (1): it would mean a substantial amount of new material for each Germanic language speaker to learn, and the phenomenon isn’t even shared by all the languages in the family.

In cases like this, I say we shouldn’t be afraid to allow two possibilities: regular inflection, and a special strong-verb inflection.  After all, in no natural language is there only ever one correct grammatical construction for everything.  And it should create less difficulty to let speakers choose whichever of several options they’re most comfortable with, at the cost of requiring listeners to understand more forms, than to force speakers to remember how to use a construction they’re not comfortable with.

So in this case I would want to include the option to inflect any (or almost any) verb regularly, say ek singer, ek singde, ek har gesingt for “I sing”, “I sang”, “I have sung”, while also including alternate forms like ek sáng for “I sang” and ek har gesong for “I have sung”.  Here are several more examples.

  • The Romance languages have a lot of irregular past participle forms, which many speakers would feel uncomfortable avoiding and which underlie a lot of Latin-based nouns familiar to English speakers as well as Romance language speakers.  Still, it seems like an awful lot of new forms to force everyone to learn, especially when the Romance languages don’t always agree on which ones should be regular.  So, as with the strong Germanic verbs, we should allow the option to either inflect regularly or to use certain natural-looking irregular forms.  For example, the past participles of vedere “to see”, facere “to do/make”, trahere “to draw”, corere “to run”, and morere “to die” could be regular vedito, facito, trahito, corito, and morito or the more natural visto, facto, tracto, corso, and morto.
  • All Romance languages have cognate words for “sea” descending from Latin mare.  But some treat it as a masculine noun (Italian, Spanish) while others treat it as a feminine noun (French, Romanian).  So we should allow it to be either gender in our conlang (lo mare and la mare both correct).
  • There’s some variation among the Romance languages, or even between dialects of a single Romance language (Spanish), as to how to pronounce the consonants c and g before front vowels.  So I would allow anything within the range of /tʃ ∼ ts ∼ s/ for c before e and i, and anything within the range of /ʒ ∼ dʒ ∼ dz/ for g before e and i.  This is done in Interlingua, and is one of the features there that I do agree with.  We can allow similar phonological flexibility in our Germanic auxlang — for instance, maybe final g’s in words like dag “day” could be pronounced as either /g/ (as in Norwegian) or /x/ (as in Dutch), or post-vocalic h’s in words like naht “night” could be pronounced as /x/ (as in the non-English western Germanic languages) or indicated only through aspiration or by drawing out the previous vowel.
  • Some Romance languages and most Germanic languages have a simple past tense, which is formed by inflecting the main verb without using any auxiliary verb, while all Romance and Germanic languages have a present perfect tense which requires an auxiliary verb.  English has both (“we worked” and “we have worked”), which are used to convey different senses of the time at which the action took place.  Several other languages in these families (German, French, northern Italian) don’t use this tense except in a literary context or to describe events from the distant past, and use only the present perfect in everyday speech.  I propose that we include a construction for both contexts in our family auxlangs, and if someone is uncomfortable with the simple past, they don’t have to use it: vi verkde, vi har geverkt for “we worked”, “we worked / we have worked”, and parlaron, han parlato for “they spoke”, “they spoke / they have spoken”.
  • A variant on that last idea: some Romance languages (like Spanish) use only the auxiliary verb meaning “to have” to form the present perfect, while others (like Italian) only use it for certain types of verbs and for other types use “to be” as the auxiliary verb (e.g. Italian for “she has come” is è venuta “she is come”, while in Spanish it would be ha venido “she has come”).  So let’s allow either construction for such verbs in our Romance auxlang: “she has come” could be é venita or ha venito.  I can tell you, as someone who learned Spanish and then switched to Italian, that it was quite difficult to get used to using “to be” in certain present perfect constructions, but on the other hand quite easy to understand when others used it.

4) Start with the most recent ancestor language as a default source

This is probably more controversial than the other things I’ve suggested.

Every auxlanger has to face the question of how to come up with the roots used in the vocabulary of their languages (as well as the actual morphemes used for grammatical inflections, but this is a less overwhelming task).  There are several ways to go about this.  One is to base each root on a word in any particular language which you like the sound of or which seems easy to pronounce or remember (this is kind of what was done in Esperanto).  Another is to consider a set of languages and, for each word, form a root by sort of taking an average of the translations of that word in your set of languages, perhaps weighted towards the languages with the most speakers.  I get the impression that this has often been the tactic of family auxlangers.  It seems like a reasonable idea: the vocabulary should be equally easy for speakers of each language in the family to learn.

But what I like better is to stay as close to the most recent common ancestor language as possible when constructing word roots.  For the Romance family, this is obviously Vulgar Latin, while for the Germanic family, this is some prehistoric ancestor language which many linguists have tried to reconstruct.  The House Carpenter has been kind enough to refer me to a long manuscript by Ringe and Taylor which thoroughly outlines what the vocabulary of this proto-Germanic language might have looked like and what sound changes have taken place in its transition to Old English.  I still haven’t gotten around to studying it, though, so a lot of the words I show in examples here are my wild guesses of what best reflects the hypothetical vocabulary of the proto-Germanic tongue.

But anyway, to give an example in the Romance conlang, I would put the word nocte for “night”.  This is despite the fact that none of the actual Romance languages, as far as I know, have retained the /k/ sound in their word for “night”: Italian, Spanish, Portuguese, French, and Romanian have notte, noche, noite, nuit, and noapte respectively.  If I took an “average” over the set of words for “night” in the major Romance languages, I’m not sure what it would come out to be — perhaps something like note — but it certainly wouldn’t have a c in it.  Yet I’m choosing to stick with the root of Latin noctis.

In fact, there are fairly regular sound changes among the Romance languages that apply to the /kt/ combination in the root, all of which (unfortunately) happen to result in losing the /k/.  There are many other regular sound changes like this.  For instance, intervocalic /t/’s in Latin have softened to /d/ or /ð/ in Spanish and Portuguese and have disappeared altogether in French; e.g. Latin vita “life” has remained the same in Italian but become vida in Spanish and Portuguese and vie in French.  I’m proposing to go with vita even though clearly “on average” these languages don’t preserve the hard /t/ sound (this goes for morphemes like -ato which form past participles).  Similarly, in many Germanic languages (though not the Scandinavian ones), there has been a lot of palatalization of k before front vowels; e.g. /sk/ became a soft /ʃ/ in Old English.  I prefer to be conservative and keep the non-palatalized k’s: the words for “church” and “should” might be kirk and skolde.  (Of course, we shouldn’t get ridiculous about it to the point of clearly violating our other rules.  Almost all Germanic languages, with English and Icelandic as exceptions, have shifted the old /þ/ sound to /d/, and in fact many Germanic language speakers seem to have difficulty pronouncing /þ/ at all.  So we go with the common sound change in this case.)
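To show just how regular these correspondences are, here is a toy Python sketch of the /kt/ reflexes cited above.  The reflex table covers only this one correspondence and ignores conditioning environments and vowel changes, so it’s an illustration of the idea, not a sound-change engine:

```python
# Reflexes of the ancestral /kt/ cluster in three Romance languages,
# per the nocte -> notte / noche / noite example above.

KT_REFLEX = {"Italian": "tt", "Spanish": "ch", "Portuguese": "it"}

def apply_kt_reflex(root, language):
    """Rewrite the ct of an ancestral root as its modern reflex."""
    return root.replace("ct", KT_REFLEX[language])

for lang in KT_REFLEX:
    print(lang, apply_kt_reflex("nocte", lang))
# Italian notte, Spanish noche, Portuguese noite
```

A handful of rules like this one, run in the other direction, is essentially what reconstruction of an ancestor language amounts to — which is why the ancestral root is such a natural “center” for the family.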

I have several arguments in defense of looking toward the common ancestor, or at least against other algorithms.

One: it often seems tricky to determine the “average” of all the cognates for a particular word: in the above example, we might arrive at note, but it could also be something like noite.  We will never agree on how to determine which combination of sounds is the most equidistant from the words in each language of the family, but a common ancestor language gives us a natural instantiation of equidistance: its word roots are separated from the modern ones by an equal amount of time, even if some members of the family tended to evolve more quickly than others.

Two: if we try to take an “average” of all the cognates to determine each word root, or worse, just arbitrarily pick which cognate to base our root on, we end up with a lot of phonetic (and occasionally even morphological) inconsistency.  If, in considering what our word for “dog” should be, we look at the cognates in Italian, Portuguese, and French (cane, cão, and chien respectively; note that Spanish doesn’t have a cognate in everyday use), then we may well end up wanting to use a nasal vowel.  Yet probably many other roots we end up with will end in n without nasalizing the vowel (e.g. from the cognate words for “wool” we might end up with lana).  At least if we stick with drawing vocabulary from a single ancestor language, we’re unlikely to end up with such inconsistencies or any confusing homophones.

Three: One of the main objections to this paradigm might be that choosing roots and morphemes in this way creates an unfair burden for speakers of the less conservative, more innovative languages in the family.  My response is that, while this complaint is more or less valid, the unfairly-distributed difficulties would be almost exactly the same if there were no family auxlang and the speaker of the less conservative language were trying to learn how to communicate in other languages of that family.  Yes, monolingual speakers of more innovative languages like English and French are a bit disadvantaged under my suggested algorithm, but they would have to learn a lot of the same difficult features anyway by studying, let’s say, German and Italian respectively.

Four: Building on that last point, in the case of the Romance family at least (this may apply more subtly for the Germanic case, but it’s less obvious in the absence of a heavily-borrowed-from ancestor language), some of the ancestral roots may be familiar to speakers of each language anyway.  To take the example of nocte (from Latin noctis) for “night”, that /kt/ still shows up in words like Spanish nocturno and French nocturne “nocturnal”.  Similarly, the /t/ in vita “life” still shows up in Spanish, French, and Portuguese vital “vital”, and so on.  Being somewhat familiar with a lot of the ancestor roots via Latin borrowings makes it a lot less difficult to learn to use them all the time, and might even provide the opportunity to make connections between fancy Latin words and the everyday words from one’s native Romance language.

And five: I just strongly prefer drawing from the ancestor language on an aesthetic level, and that’s enough reason for me to make any conlanging decision, even if it’s regarding an international auxiliary language.

III. Our Father…

As is traditional in any good exposition of constructed languages (even the ones that are only halfway constructed), I want to close with a prayer.

Our Father who art in heaven,
Hallowed be Thy name;
Thy kingdom come;
Thy will be done on earth, as it is in heaven.
Give us this day our daily bread.
And forgive us our debts, as we forgive our debtors.
And lead us not into temptation, but deliver us from evil.
For thine is the kingdom and the power and the glory forever.
Amen.

The Romance auxlang version:

Nostro Patre che es en lo celo,
Tuo nome sia sanctificato;
Tuo regno vena a Te;
Tuo volontate sia facto en la tera, como é facto en lo celo.
Danos hodje nostro pane djornale.
E perdónanos nostros débitos, como nos perdonamos nostros debitores.
E no nos enducas en temptatjone, ma libéranos de lo male.
Perche lo Tuo é lo regno e la potentja e la glória per sempre.
Amén.

And the Germanic auxlang version:

Ons Fader dat er in de himel,
Dín nám vese gehailigt;
Dín kuningrík kome;
Dín vil vese gemákt an de érd, sva er gemákt in de himel.
Gife ons dis dag ons daglig brod.
And forgife ons ons skolde, sva vi forgifer dem dat skolde ons.
And ne léde ons in forsóking, már frie ons fra úfel.
For dín er de kuningrík and de máht and de hérlighód in evighód.
Amén.