My reaction to Rationality: from AI to Zombies

[Content note: The title pretty much says it all.  Like with my last “reaction” post, I prefer to think of this more as a smattering of stray thoughts on the book than as a thorough review.]

Around a month ago, I finally managed to finish Eliezer Yudkowsky’s “Sequences”, compiled by Rob Bensinger into a book called Rationality: from AI to Zombies, after slowly plowing through it over the course of a year or so.  Some of the essays I didn’t follow very well; this was mainly due to my poor concentration while reading, but exacerbated by the fact that in some essays the content was rather dry and abstract and not particularly easy.  There were other essays that I read and understood well but which didn’t provoke any particular reaction from me.  However, there were many points where I did find myself with a number of thoughts in response to what Yudkowsky was arguing.  Unfortunately, I can’t easily remember where and what all these points were, especially the ones from, say, the first thousand pages.  What I really should have been doing was noting down each of the topics and arguments which I found particularly intriguing, as well as my reactions to them.  But my discipline for note-taking has never been good, and I’m afraid that if I had held myself to diligently writing things down after reading each section or so, then I never would have gotten through The Sequences at all.  So the best I can do now is to skim back through the whole document and jot down my reactions to various things, which I will do below.

I first want to explain what led me to read this mammoth work in the first place.  First of all, I had become very interested in the writing of several rationalist authors, particularly Scott Alexander, who explicitly referred to The Sequences quite often and was probably making more subtle allusions to them which I was missing (this turned out to be true).  Secondly, I was hoping to begin interacting more with these writers’ associated rationalist community, and at the time, “Read The Sequences!” was the most common suggestion in response to my desire to better understand the things they were talking about.  But at the end of the day, even if I had never heard of this community or its particular members, I would still be interested in any extensive work focusing on human tendencies to argue or make decisions badly, as well as on more general philosophical questions, without requiring the reader to have a strong background in philosophy.  I was deeply interested in the area of rhetorical fallibility in the first place but had never seen the subject explored as deeply as Yudkowsky apparently did.  And of course, the fact that this group of essays sparked a fairly large online intellectual movement only added to the intrigue.

There are many other aspects of The Sequences which were not reasons why I was interested in them.  For instance, I had no particular desire to learn more about AI and had no background in that area, although I’ve gradually begun to find it more intriguing.

Anyway, in view of what I was hoping to get out of The Sequences, I was initially a bit disappointed and found the insights of the earlier sections to be somewhat anticlimactic.  My feeling was that Yudkowsky was mostly making points that were dry and obvious, and that although he illustrated them very articulately, his writing lacked the emotion and the colorful references to real-life situations that Scott Alexander’s writing had.  It wasn’t particularly hard reading, but it still felt like a bit of a slog, and by the end of Book II or so, I was seriously considering calling it quits.  There were a few points in the first two books which I found more engaging (and will comment on below), but for the most part, I only gradually began to feel hooked starting in Book III.  In retrospect, this is not too surprising: the book seems to have been arranged so that the earlier sections are more “low-level” and less involved, and besides, reading a book always feels a bit more tedious when one sees after hundreds of pages that one is, say, only about a fifth of the way through the whole thing.  So to anyone having a hard time getting through the earlier parts of Rationality: from AI to Zombies, I recommend either sticking it out for a while or skipping straight to Book III.

Anyway, without further ado, here are my comments on more specific things.

  • #16: “Religion’s Claim to be Non-Disprovable”: After having so far read pretty much nothing but persuasive essays whose conclusions seemed to be just obviously true, this was the first point in which my attention was really roused.  I have to admit that for many years I liked to claim that religion is neither provable nor disprovable.  At some point a while back, I amended that claim to “Some specific religions which make claims about the empirical world are disprovable, but more generic religious ideas aren’t.”  This is what Eliezer seems to be arguing here, but I still feel that he ignores the distinction between my second claim and the negation of my earlier claim.  It is the start of a common thread in which he refuses to treat religion itself as nothing more than a belief in the supernatural — I suppose not without reason, but I find it a little hard to relate to.
  • #51: “The Fallacy of Gray”: Yes!  Yes, yes, yes.  I had a particular debating partner whom I often criticized for this fallacy, but I never knew this name for it and never came up with Yudkowsky’s elegant takedown of “You argue that two shades is too simplistic, so why do you replace them with one?”
  • #55: “0 And 1 Are Not Probabilities”: This was one of the first times that Yudkowsky introduced me to a completely novel and elegant way of looking at something familiar, which I hadn’t thought of before.  In some contexts, 0 and 1 should not be considered probabilities, for the same reason that infinity shouldn’t be considered a number.
  • #61: “Are Your Enemies Innately Evil?”: I enjoyed this whole section a bit more than I enjoyed the rest of Books I and II.  The content didn’t feel very novel, but this is only because I was already very familiar with Scott Alexander’s writing on these issues, which pretty much didn’t exist when Yudkowsky wrote these essays.  I selected this particular essay only because it exemplifies the spirit of the whole section especially well.
  • #128: “Leave a Line of Retreat”: Almost from the moment that I published my last post about “gadfly speculations”, I’ve had a sneaking suspicion that this exact concept was expounded upon somewhere in The Sequences.  Sure enough, on skimming and re-skimming in preparation for writing this post, I found in this essay the notion that we should be careful to actually consider inconvenient possibilities.  Previously, the part of the essay which had stuck in my conscious mind was regarding a strategy for winning at debates, not the merits of welcoming gadflies.
  • #167: “Taboo Your Words”: The entire section on the use of words is pure gold.  If someone is daunted by reading the full Sequences, I recommend starting with this section, as it is presented very cleanly and stands alone quite well.  I picked out this essay because it happens to be a favorite of mine, but I like them all.
  • #196: “Think Like Reality”: Some months ago, I had planned to put a short post on Tumblr as a mini-rant against how often disapproval is expressed as “I really don’t understand why…”  I was going to argue that framing it in this way impedes understanding: our goal should be to try to understand points of view we don’t agree with, which doesn’t mean we have to approve of them.  I was delighted to then come across this essay, which winds up arguing the exact same thing, but interestingly, from a much more abstract and impersonal viewpoint.
  • #202: “Joy in the Merely Real”: For a lot of my life, I’ve felt impatient with the common attitude that an atheistic, naturalistic view of the universe left no room for true beauty or joy, without knowing how to explain why.  Finally, someone was able to fully articulate and justify how I felt.
  • #227: “Excluding the Supernatural”: This one raised my hackles a bit, for the same reason that #16 did.  I’ve been known to say “Creationism can’t be science, because it invokes the supernatural”, while feeling a twinge of uneasiness at the idea that this argument does indeed play into Intelligent-Design-advocates’ hands.  I’m going to have to spend a little more time digesting this essay and thinking about it.
  • #229: “Quantum Explanations”:  This one heads a section where Yudkowsky tries to provide a crash course in quantum mechanics.  I have to admit that I didn’t read super carefully and missed a lot of the finer points, but I find the opinions laid out in the first essay to be quite intriguing.  I never had the chance to formally study quantum mechanics, and I tended to get easily confused whenever I tried to teach myself.  Maybe Yudkowsky has a better idea of how it should be presented.  My very tentative opinion (as someone who doesn’t fully understand what anyone is talking about in this area) is that Yudkowsky has the right idea, and that his suggestion that it is an unfortunate accident that quantum mechanics is traditionally presented in another framework is plausible.  I certainly remember embracing the many-worlds hypothesis from a young age and have always hated the idea of an anti-deterministic interpretation — not sure if those feelings really count as bias or not.
  • #244: “The Dilemma: Science or Bayes?”:  Here we arrive at the heart of what is considered most controversial in The Sequences (the quantum mechanics explanations certainly feed into it, of course).  Everything Yudkowsky is saying here sounds like it makes a lot of sense to me, but I’ll have to do some more thinking, and it would be better to read some contradictory viewpoints as well before cementing an opinion.  That said, it’s definitely clear that Yudkowsky presented it badly, as a contest between Science and Bayes with the implication that the scientific method must be defeated by Bayesianism, rather than as a call for the scientific process to be enhanced and improved.  I recall that this was a major element of Scott Alexander’s defense of this essay from attacks by Topher Hallquist a while back.
  • #255: “Einstein’s Superpowers”: I’ve long held the view that Einstein, while an exceptionally smart individual, is a bit overrated when held up as the smartest person who ever lived, or as the very epitome of intelligence.  It was interesting to see Yudkowsky explore and eventually imply this view.
  • #291: “Newcomb’s Problem and Regret of Rationality”: I appreciated finally getting a good introduction to Newcomb’s Problem, which I was long overdue in closely considering.  Again my instinct is to lean towards Yudkowsky’s controversial opinion, but I should really read arguments for both sides and let them digest.  I hope to get my thoughts in order so that I can write a post about the ethics of voting in time for the presidential election this November.
  • #306: “Trying to Try”: I was amused at the title of this essay because I remember using that phrase in a short play that I wrote for a theater class when I was 15.  It was one of the most deeply-felt lines I wrote for the rather mundane and light-hearted play about two schoolmates who couldn’t get anything done on a paper they were supposed to write together (“Well okay, I didn’t try, but I tried to try.”)  It was a reflection on my willpower issues, and on how I felt and still feel about the notion of acts requiring different levels of effort rather than a dichotomy of free vs. non-free decisions.  That doesn’t directly have much to do with what the actual essay is getting at, though.
  • #322: “Rationality: Common Interest of Many Causes”: This is rather relatable from my experiences with atheist/agnostic clubs.  I always felt that there was too much effort to attach skepticism to specific causes, rather than just focusing on what we had in common and creating spin-off clubs for some of those causes.

So how do I feel about the work overall?  Well, I wouldn’t exactly call it fun reading, and it feels cerebral, abstract, and impersonal in comparison to Scott Alexander’s work, but I did enjoy large parts of it and found it overall worthwhile.  Eliezer Yudkowsky has a knack for writing in a way that conveys his points very precisely but with enough flow and colorful language that it isn’t completely dry, although I feel that his abstract excursions occasionally get bogged down.

I did not come away with the feeling that The Sequences completely revolutionized the way I think about rationality, as some have.  Most of the content focused on views that I feel that I already vaguely held, or had been blindly grasping towards at some point, but which were conveyed by Yudkowsky with rigor and clarity that I could never imagine achieving.  For instance, the philosophy of Bayesianism, one of the main ongoing themes, was not entirely novel to me.  I felt that I had already long preferred to define my beliefs in terms of probabilities attached to different possible outcomes, and update these values in light of new evidence, rather than force myself into a dichotomy of either believing something or not.  But I had never connected this mindset to Bayes’ Theorem or thought the whole philosophy out systematically as Yudkowsky has.

For the moment, I consider myself to be tentatively pro-Yudkowsky on most issues (though still very agnostic when it comes to things like Friendly AI and The Singularity).  When it comes to common philosophical topics, I’m pretty sure I agree with him.  For instance, regarding the free will question, I suspect that his particular brand of compatibilism is the same viewpoint that I’ve held deeply since my college days of trying to debate this with everybody, only he seems to have approached it from a different angle and succeeded in fully solving the problem in a way that I had always been frustrated I couldn’t.  I’m slightly inclined in his direction when it comes to topics that are somewhat newer for me, like quantum mechanics and Newcomb-like problems.  However, as I’ve already said, I should really examine opposing viewpoints before deciding that I’m pro-Yudkowsky on these.  Where I feel most inclined to disagree with him, as I implied in a couple of the bulletpoints above, is in his harsh treatment of religion, where he too often appears to confuse it with fundamentalism, or at least to adopt the assumption that it involves more than just a belief in a theistic god.  I suspect that he has reasons for this which he tried to convey in his essays and which failed to stick with me after the first reading, so I’ll try to keep an open mind about this as well.

One common criticism of Eliezer Yudkowsky is that he writes in a way that sounds incredibly arrogant.  While I agree that in some places his tone became pretty outrageously conceited and overconfident, I don’t feel comfortable judging the man based on this tendency.  A short version of my reasoning behind not wanting to make assumptions about his actual attitude is that I believe that even some of the greatest writers only know how to write with a particular tone, just as even some of the greatest speakers only know how to speak with a particular accent.

Rationality: from AI to Zombies covered an awful lot of ideas that at some point I have pondered, groped towards, or struggled to put a finger on (as well as many that I haven’t).  For instance, as I indicated above, the central concept of my last post was already discussed in one of Yudkowsky’s essays, although I was presenting the idea from a different approach and want to turn it in still another direction in future writing.  It’s quite likely that many, if not most, of the rationalism-related concepts I (and others) find myself thinking about have been at least touched upon in this massive book.  It doesn’t really matter, though.  The ideas I write about are all unoriginal to some extent, but I still like to write about these things from my own personal angle even if this is a form of wheel-reinvention.

On the other hand, I’m pretty sure there was nothing in The Sequences treating ethical judgments of multivariate decision-making situations (though in all of Less Wrong, I have no idea), which feels to me like a crucial thing to think about if one is interested in utilitarian ethics.  This goes to show that even 1,750 pages of essays on rationality are nowhere near enough to cover all interesting topics that might come up.  Rationality: from AI to Zombies provides an immensely powerful foundation for any aspiring rationalist, but I like to believe that there are still plenty of rationalism-related conundrums to explore.

3 thoughts on “My reaction to Rationality: from AI to Zombies”

  1. ‘Some months ago, I had planned to put a short post on Tumblr as a mini-rant against how often disapproval is expressed as “I really don’t understand why…” ‘

    It’s possible to see this tic in a different light: it’s a habitual attempt to be charitable to viewpoints you find unconvincing. There are certainly times when I am uncertain whether my nonacceptance of an argument comes from not understanding it, or from understanding it and disagreeing. I believe there isn’t really a sharp distinction between the two. While a person may have a fairly good model of her interlocutor’s thought process, to declare this model complete is to assert that the first divergence between the model’s thought process and hers comes from an atomic psychological difference, rather than an identifiable deeper reason that might be discussed rationally. Such an atomic difference may be real, but it is dangerous to assume.


    1. Fair point, I hadn’t considered it that way. I still feel like some of the “I really don’t understand why…” comments I witness IRL betray a lack of effort to be charitable, but it could be the opposite as well.


      1. Well, I deliberately said only that it’s possible to see it this way, rather than saying more strongly that this is what’s happening. My opinion is that a great deal of the time this tendency is something between an idiomatic language usage and a cached thought: that you respond to a viewpoint you don’t empathize with by claiming not to understand it. That said, I suspect that part of the reason this became widespread is a genuine difficulty in distinguishing between not understanding something and disapproving of it.

