Multivariate Utilitarianism

“You’re a rotten driver,” I protested.  “Either you ought to be more careful, or you oughtn’t to drive at all.”

“I am careful.”

“No, you’re not.”

“Well, other people are,” she said lightly.

“What’s that got to do with it?”

“They’ll keep out of my way,” she insisted.  “It takes two to make an accident.”

— from The Great Gatsby, by F. Scott Fitzgerald

[Content note: This is a clumsy explanation of an idea in progress, and I hope someday to turn it into something more polished.  I expect the gist of it has been developed in a more complete form in plenty of other places.  This is the longest post I’ve written here, and I don’t arrive at the main point until near the end of section III(C).  Some math (differential calculus) with explanations that can be skimmed by those already very familiar with it.  I deliberately kept formulas and symbolic expressions to a minimum, partly because I still haven’t worked out how to import LaTeX expressions into WordPress.  There’s an explanation near the end which could use a simple chart or diagram — if I figure out how, I might edit one in.]

I. Multiple-agent problems

I want to write about my ideas on how to make moral judgments in situations where multiple agents are involved.  My goal is to try to put it in a rigorous framework, but I expect that this will be only a sort of rough draft.

I’ll start with an example.

I teach math classes to university students, and certain types of situations come up between me and them all too often.  I’ll describe one of the most dire incidents, which happened during my first semester of teaching.  For some reason, the university where I was teaching at the time put the times but not the locations of the midterm exams online.  The location of the make-up midterm exam was given only on the sign-up sheet, which I passed around in several consecutive classes for the students who knew they wouldn’t be able to make the regular exam.  When passing around the sheet, I was careful to point out that they needed to copy down the location of the make-up exam, because they wouldn’t be able to find it anywhere online.  I also, of course, give my students my email address and tell them they can write to me any time if there’s any kind of problem, and that I’ll try to answer as soon as I can.  And I try to read and respond to their messages in a timely manner as promised, but I often have around 150 students total, which means pretty frequent student emails, and I sometimes don’t get to one quickly enough.

So, it’s probably not hard to see where this is going.  I had a situation where with an hour to go before the make-up exam, a student emailed me to say that he didn’t know where it was.  I wasn’t near my laptop or checking my email during that particular hour, and as a result, the student missed the make-up exam.

Whenever something like this happens, even though common sense tells me that the student is largely to blame for not being responsible and not following directions in the first place, part of me feels like it’s my fault because I failed to get to their email as quickly as I could have.  As I recall, in this situation I felt bad enough that I made a special arrangement for the student to take the exam at another time in my office, and from then on I certainly made sure to send an email to all of my students informing them of the locations of all exams a few days in advance.  Still, at the same time, it’s not quite fair to say that the incident was really “my fault”.

When contemplating situations like these, the conclusion I usually arrive at is that we both messed up, but that at least a great deal of the blame should fall on the student rather than on me.  This conclusion seems to comply with most people’s common sense of how moral responsibility works.  However, it’s not quite so trivial to pinpoint exactly what I mean by “messed up” or to rigorously defend why the student deserves more of the blame for having missed the exam.  The difficulty lies in the fact that the student and I are two independent agents, each of whose actions (or inactions) contributed to the unfortunate result.

When I say that we both “messed up”, it’s clear enough that I mean roughly the following: each of us, being mostly unable to influence the other’s actions, did something which resulted in a worse outcome than would have occurred if that thing had not been done.  The naive judgment to make is to place blame on anyone who “messes up” in that sense — that is, anyone who does something which brings about a worse result than if they hadn’t done it.  And indeed, this method of judgment makes a lot of sense if only one person’s actions brought about a negative consequence, but it falls apart as soon as there are two or more people’s actions in the equation.  It’s nonsensical to say that two people each individually carry full moral responsibility, yet a priori there’s no obvious way to divide up the responsibility between them.  (One of these days I’ll get through an entire post without invoking a phrase like “a priori”, but that day is not today.)  Yet it seems that many people who find themselves one of multiple agents in such a situation instinctively gravitate towards focusing on the fact that the other agent messed up, and conclude that the other party should be blamed (as I mentioned in my post on free will and politics, people are quick to assume others have free will while their own actions are determined).  This is essentially what the character Jordan from The Great Gatsby does in the quote above: any car accident that she gets into won’t be her fault, because the other driver would be guilty of failing to avoid it.

Yes, it often takes two to make an accident (or more than two), which can make moral judgments a lot less clear.

II. Calculus / ethics problems of one variable

A) Taking derivatives

One of the classes I taught during graduate school was a multivariate calculus course.  When teaching it, I started off almost every single lecture by recalling a concept from single-variable calculus which I was going to generalize to a situation with several variables.  I want to describe multiple-agent ethics problems in terms of multivariate calculus, and to do so, I think I’ll follow the same strategy by first describing the way I view a single-agent situation and how this can be interpreted as a concept in single-variable calculus.

The short explanation (and yes, originally I did try to write down a much more long-winded one) of single-variable derivatives is this: if you have a function of one independent variable x, then the derivative of that function at a particular value of x, written dy/dx, is the rate of increase of the dependent variable y when x starts at that value and begins to increase.  A classic example is the function y = x^2, which takes the input number (independent variable) x and squares it to get the output (dependent variable) y.  It can be shown using basic techniques of differential calculus that the derivative of this function at, let’s say x = 3, is 6.  This means that when you set x = 3 (which means that y = 3^2 = 9), then if you start to increase x at a certain rate, the dependent variable y will start to increase at 6 times that rate.  So if you add a very small increment to x = 3, let’s say 0.1 so that x increases to the value of 3.1, then y will increase by approximately 6 times that increment, which is 0.6.  In fact, when x is 3.1, y is 3.1^2 = 9.61, and given that y started out at exactly 9, we see that y increased by 0.61, which is pretty close to our estimate of 0.6.  Meanwhile, if instead you start with x = 3 and begin to decrease x at a certain rate, then y will begin to decrease at 6 times that rate.
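For the computationally inclined, here’s a quick Python sketch — purely illustrative, just checking the arithmetic above numerically:

    def f(x):
        return x ** 2

    # The difference quotient Δy/Δx approaches the derivative as Δx shrinks:
    for dx in [0.5, 0.1, 0.001]:
        print(dx, (f(3 + dx) - f(3)) / dx)  # 6.5, 6.1, 6.001 — approaching 6

    # Actual change from x = 3 to x = 3.1, versus the estimate 6 * 0.1 = 0.6:
    print(f(3.1) - f(3))  # about 0.61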

Perhaps the most important thing to note here is that the derivative is a positive number when x = 3, which means that starting to increase x will cause y to begin increasing, while starting to decrease x will cause y to begin decreasing.  If we instead start at a very different value of x, say x = -2, then the derivative is a negative number (one can compute that it’s exactly -4), and starting to increase x will cause y to start decreasing, while starting to decrease x will cause y to start increasing.

For a real-life example, let’s imagine a very simplistic scenario where my level of happiness is entirely dependent on the amount of time I spend each day on exercise; that is, Happiness is a function of Time Exercising.  Suppose that I work out or get some form of exercise for one particular amount of time each day, little enough that I would benefit in terms of happiness if I were to increase the amount of time I spend working out.  Here the independent variable x is the amount of time in minutes I spend exercising each day; suppose that at the moment, x = 30.  Let’s assume that if I were to increase my amount of daily exercise time, my level of happiness measured in Happiness Units (the dependent variable y) would begin to increase at a rate of 5 Happiness Units per additional minute of exercise.  In other words, the derivative at x = 30 of the function taking my exercise time x to my happiness y is 5.  If I go from 30 to 33 minutes of exercise per day, I calculate that my overall happiness will increase by roughly 3 x 5 = 15 Happiness Units.  (The general formula for approximating the change in y is “Δy ≈ dy/dx • Δx”.)

Again, the most important thing about this derivative from a practical perspective is that it’s positive: starting to increase the amount of exercise I do will make me happier, while starting to decrease it will make me less happy.  Of course, the rate of increase in Happiness Units per additional minute of exercise itself will change the more exercise I add to my routine, probably becoming less and less (“diminishing returns”), eventually reaching a point where I’m not benefiting emotionally at all by increasing my work-out time.  Beyond that, I may be at a point where I actually become more unhappy by increasing my exercise time, for instance, if my time at the gym is something ridiculous like 4 hours every day (x = 240).  But that’s not really relevant when thinking about the derivative where I am now, at x = 30, where clearly my exercise routine isn’t particularly excessive and working out more will still make me feel better.

B) Deriving ethical statements

So how does this relate to ethics?  Well, utilitarian ethics is all about making choices that maximize people’s overall well-being, or overall utility.  If we assume that my independent variable x (exercise time) is something I have complete control over, and that the main decision at hand is how to start adjusting x, and that “overall well-being” is essentially just the value of my dependent variable y (level of happiness)… well then the ethical problem of “How should I begin to adjust the time I allot for exercise?” boils down to looking at the derivative at my current x = 30.  In fact, if my choice at the moment is either “start to increase exercise” versus “start to decrease exercise”, then clearly the mere fact that my derivative is positive means that I ought to start to increase my exercise (because then my level of happiness will go up).

Of course, we could consider my level of happiness as a function of some other aspect of my lifestyle.  Say now my independent variable x measures how much TV I watch each day (in minutes), and the derivative where I’m at (suppose it’s x = 150) might be some negative number (say -8).  Well, that just tells me that I shouldn’t increase my TV time (because it would decrease my happiness), and in fact, that I ought to decrease my TV time (because that would increase my happiness).

Or for that matter, we could imagine another scenario where I’m considering x to be my exercise time once again, but now I’m working out for 240 minutes a day, and the derivative is negative, which similarly means that at that level, I ought to decrease my exercise time.

It’s pretty obvious and non-controversial how to give praise or assign blame in these one-variable situations.  My praise/blame-worthiness is proportional to the increase/decrease in utility (for simplicity, amount of utility = number of Happiness Units) that results from the change I make in x.  If the derivative for happiness-as-a-function-of-exercise-time is +5 at x = 30, and I increase x by some small (positive) increment Δx (say Δx = 3), then I deserve praise; the resulting increase in utility is roughly the derivative times Δx, which is 5 x 3 = 15.  If, on the other hand, I decrease it by 5 minutes (this is letting Δx = -5), then I deserve blame, corresponding to a decrease in utility of roughly 5 x 5 = 25.  If the derivative were 10 instead of 5, then utility would be decreased by roughly twice as much, and I would essentially be twice as blameworthy.
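Here’s that bookkeeping as a tiny Python sketch (the numbers are just the made-up ones from the example):

    def utility_change(derivative, dx):
        # First-order estimate Δy ≈ dy/dx * Δx: praise if positive, blame if negative.
        return derivative * dx

    print(utility_change(5, 3))    # +15: praiseworthy
    print(utility_change(5, -5))   # -25: blameworthy
    print(utility_change(10, -5))  # -50: twice as blameworthy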

So if I have a positive derivative at the value of x where I am now, then I should start to move x in a positive direction (similarly for negative derivative moving in a negative direction), and my praiseworthiness in doing so is proportional to how large that derivative is.  And of course, a similar statement holds for a negative derivative and choosing to move in a positive direction, or vice versa, regarding blameworthiness.

This is all a very wordy and overly-involved way to state the obvious, and none of it should be controversial, but it helps to set up the somewhat more interesting two-variable situation I want to look at next.

TL;DR: Utilitarianism says that if utility is a function of an independent variable x which you control, then you should start to move x in the positive (or negative) direction if the derivative is positive (or negative).

III. Calculus / ethics problems of multiple variables

A) Two variables, same agent

All right, so above I was talking about situations where overall utility depends entirely on one parameter which a person has control over.  One might object that it doesn’t really make sense to imagine cases where there is only one parameter that can be moved, since in real life there are usually many conscious actions which result in a good or bad outcome.  Indeed, it would seem that the only way to make sense of such examples is to imagine that all other parameters are fixed and impossible to change.  I’ll come back to this idea later, but in any case, I want to consider a situation that better reflects the vast complexity of our actual universe by having many independent variables (okay, exactly two, which reflects vast complexity slightly better).

Suppose I now consider my overall happiness y as being dependent on both the time I spend on exercise (independent variable x, which we suppose at the moment equals 30) as well as on the time I spend watching TV (independent variable w, which we suppose at the moment equals 60).  I can adjust either of these independent variables however I choose, and I want to consider how my happiness is affected by adjusting either one.  If I consider my independent variables x and w jointly and contemplate gradually changing them, there are now many choices of how I can do this: for example, I can start increasing x while not changing w at all, or start to decrease w while not changing x at all, or start to increase both at the exact same rate, or start to decrease w at twice the rate that I’m starting to increase x, etc.  Any such decision I make will cause my dependent variable y to begin increasing or decreasing at a certain rate.  The problem of determining the rate of change of y when I start to change x and w in a certain way is solved using something called a directional derivative, a standard concept in multivariate calculus.  (The problem of determining how to start changing x and w so as to maximize the rate of increase of y — as is surely our objective when y measures happiness — is solved by the technique of “moving in the direction of the gradient”.  But this is a needless complication and I’m going to sidestep the need to discuss it.)
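For anyone who wants to see a directional derivative computed concretely, here’s a small Python sketch.  The happiness function below is entirely made up — nothing hinges on its exact form — and the point is just the mechanics of measuring the rate of change of y along a chosen direction in the (x, w)-plane:

    import math

    def happiness(x, w):
        # An invented smooth stand-in for happiness as a function of
        # exercise time x and TV time w.
        return 100 * math.log(1 + x) - 0.05 * w

    def directional_derivative(f, x, w, vx, vw, h=1e-6):
        # Rate of change of f at (x, w) when (x, w) starts moving in the
        # direction (vx, vw), normalized to unit speed.
        norm = math.hypot(vx, vw)
        vx, vw = vx / norm, vw / norm
        return (f(x + h * vx, w + h * vw) - f(x, w)) / h

    # E.g., starting at (30, 60) and increasing exercise while decreasing
    # TV at equal rates:
    print(directional_derivative(happiness, 30, 60, 1, -1))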

B) Interlude on smoothness

This may look like I’m still vastly overthinking things: after all, shouldn’t I just independently consider the separate decisions of how to change x and how to change w, make one decision for each, and act on both of those decisions at the same time?  In some real-life situations, this would make sense.  It depends on the particular multivariate function we’re looking at.  If it is indeed the case — if the rate of change in the dependent variable y when I start to change my independent variables x and w in a certain way is determined by just separately considering what happens when I change x and what happens when I change w — then I’ll call our multivariate function a smooth function.  (Strictly speaking, the property I’ve just described is differentiability, and “smooth” is a stronger technical condition which guarantees it, but “smooth” is the more evocative word and I’ll stick with it here.)

Let’s assume for the moment that our model of happiness as a function of exercise time and TV time is a smooth function.  That would mean that all we need to know is two values: the rate of increase in my happiness when I start to increase my exercise time without changing TV time, and the rate of increase in my happiness when I start to increase my TV time without changing exercise time.  These two values are called the partial derivatives of the function with respect to x and with respect to w respectively; they are denoted ∂y/∂x and ∂y/∂w.  Let’s say that the partial derivative with respect to x is 5 (my happiness increases when I start to exercise more), and the partial derivative with respect to w is -3 (my happiness decreases when I start watching more TV).  Then if I make the decision to increase my exercise routine by some very small amount — say Δx = 1 — and at the same time also increase my TV time by a small amount — say Δw = 2 — then I can estimate the change in my happiness to be roughly

Δy ≈ ∂y/∂x • Δx + ∂y/∂w • Δw = (5 x 1) + (-3 x 2) = -1.

So my happiness decreases by roughly 1 Happiness Unit, which means that from a utilitarian perspective, I probably made a mildly bad decision.
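In sketch form (just plugging the example’s assumed numbers into the formula):

    # Δy ≈ ∂y/∂x * Δx + ∂y/∂w * Δw, valid when the function is smooth
    dy_dx, dy_dw = 5, -3  # assumed partial derivatives at (x, w) = (30, 60)
    dx, dw = 1, 2         # one more minute of exercise, two more of TV
    print(dy_dx * dx + dy_dw * dw)  # -1 Happiness Unit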

It still seems pretty obvious and non-controversial that praise or blame should be dealt out accordingly in situations like this.  For instance, when I increase my exercise time by 1 minute and my TV time by 2 minutes (I guess I’m watching a few extra commercials or something?), by the above calculation my happiness decreases by approximately 1 Happiness Unit, and my action is worthy of a small amount of blame.  If I were to, say, not change my exercise time but increase TV time by 3 minutes, then a similar calculation shows that now y decreases by 3 x 3 = 9, which means that this decision was also blameworthy and was in fact 9 times as blameworthy as the other one.

Unfortunately, both in mathematics and in reality, not all multivariate functions are smooth.  Even with the example of happiness as a function of exercise time and TV time, I can easily imagine the function failing to be smooth.  Suppose that, again, the rate of change of y when I start to increase x without changing w is 5, and the rate of change of y when I start to increase w without changing x is -3.  If we only needed to consider these partial derivatives separately in order to make our decision, it would be obvious that I ought to increase my exercise time by some amount while decreasing my TV time by some amount.  But perhaps I like to watch TV as a much-needed way to cool off after working out, and I’m actually better off (or at least happier) if I increase both exercise time and TV time by the same amount, as opposed to exercising more while cutting down on TV.  That is, increasing x by 1 and increasing w by 2 results in y actually changing by a positive amount, not by -1 = (5 x 1) + (-3 x 2) as we predicted above.
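To make this failure mode concrete, here’s a deliberately contrived toy model in Python — my own invention, with a kink term min(Δx, Δw) standing in for “TV only helps in tandem with extra exercise” — under which the one-at-a-time rates match the partial derivatives above, yet the joint move comes out positive:

    def delta_happiness(dx, dw):
        # Contrived change in happiness for small increases dx, dw >= 0.
        # The min(dx, dw) term is a kink at (0, 0), so the function of
        # (x, w) this models is not smooth there.
        return 5 * dx - 3 * dw + 6 * min(dx, dw)

    print(delta_happiness(1, 0))  # 5: rate 5 per unit of x alone
    print(delta_happiness(0, 2))  # -6: rate -3 per unit of w alone
    print(delta_happiness(1, 2))  # 5 - 6 + 6 = 5: positive, not the predicted -1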

C) Two variables, different agents

Okay, but the type of situation I started this post describing was not one where I have control over two parameters and have to determine in which directions to begin sliding them.  I was instead talking about the case of (at least) two separate agents who each have control over (at least) one parameter.  For instance, there may be two drivers, Mr. X and Ms. W, who each choose to drive at a certain speed at a particular moment (let’s call Mr. X’s speed x and Ms. W’s speed w), such that if either one of them goes just a bit faster right now, then there will be a collision which will do a lot of damage resulting in a decrease in utility (let’s again call this y).  At least naïvely, from the point of view of Mr. X, it doesn’t make sense in the heat of the moment to compute the optimal change in w as well as the optimal change in x, since he has no direct control over w.  He can only determine how to best adjust x, his own speed (the answer, by the way, is perhaps to decrease it or at least definitely not to increase it!), and apart from that all he can do is hope that Ms. W likewise acts responsibly with her speed w.

(I am of course making extreme simplifying assumptions here.  In many human interactions, it’s possible to at least indirectly influence the choices of others.  But here we are ignoring the possibility of this sort of “second-order” action where one can make a choice affecting someone else’s variable.)

So I guess all this is leading to my stating what should be fairly obvious: when two (or more) agents each control an independent variable — say the independent variables are x and w — then the one who controls x should decide how to change it based on the partial derivative with respect to x.  Again, the partial derivative with respect to x, written ∂y/∂x, is the rate at which y changes when you start to increase x but leave w the same.  If y represents utility, then our agent Mr. X should increase x if and only if ∂y/∂x is positive.  After all, he has no idea what Ms. W might do with w and can’t really do anything about it, so he should proceed with his calculations as though w is staying at its current value.

That’s what each agent should do.  I’ve said nothing about how much either of them is deserving of praise or blame in the outcome of their actions.  That’s an entirely distinct issue to consider, and a much more difficult one, at least if the function isn’t smooth.

If our function is smooth, then there is a straightforward way to apportion moral responsibility.  Since in that case the change in utility is roughly ∂y/∂x • Δx + ∂y/∂w • Δw, we can give one agent responsibility for the ∂y/∂x • Δx part and the other responsibility for the ∂y/∂w • Δw part and call it a day.  For instance, we can go back to our “happiness y is a function of exercise time x and TV time w” function, assuming that it’s smooth with ∂y/∂x = 5 and ∂y/∂w = -3 as before, but supposing that there are two separate agents: Mr. X, who controls my exercising, and Ms. W, who controls my TV-watching.  Then if, say, Mr. X increases x by 1 while Ms. W increases w by 2, Mr. X is praiseworthy for increasing my happiness by 5 x 1 = 5, while Ms. W is blameworthy for decreasing it by 3 x 2 = 6.  Our moral judgment of their actions is determined by our judgment of what each of them should have done.
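The smooth-case bookkeeping, as a sketch with the same assumed numbers:

    dy_dx, dy_dw = 5, -3  # partial derivatives, assumed as before
    dx, dw = 1, 2         # Mr. X raises x by 1; Ms. W raises w by 2

    credit_x = dy_dx * dx  # +5: Mr. X's share (praiseworthy)
    credit_w = dy_dw * dw  # -6: Ms. W's share (blameworthy)
    print(credit_x, credit_w, credit_x + credit_w)  # shares sum to Δy ≈ -1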

But it becomes much less clear how to place moral judgment with non-smooth functions, like in the driving situation where x is the speed of the driver Mr. X, and w is the speed of the driver Ms. W, and y is overall utility which stays about the same if they avoid a collision but plunges if they do wind up hitting each other.  Since disaster could only have been averted by both of them choosing not to speed up, there’s just no canonical (obvious) way to assign blame.

Similarly, in the example from my own life that I started out with, it’s true both that I should have answered my student’s e-mail and that my student should have been much more responsible and organized in general.  Each of us controlled a variable, and for each of us the relevant partial derivative told us to act differently than we did.  But as I said at the beginning, it seems that the student really deserves most of the blame.  It violates common sense — in fact, it’s not really even coherent — to say I must shoulder the full blame for the missed make-up exam because if I’d acted differently it wouldn’t have happened, when by a symmetric argument my student must simultaneously have earned that same full blame.

D) Interlude on “is-versus-ought”

If we mistake claims about the best course of action for one agent in the midst of a multivariate situation for pronouncements of moral judgment on whatever comes to pass, then we may find ourselves making choices based on what other agents ought to do rather than on what they are actually doing.  Either that, or we might find ourselves thinking like Jordan the rotten driver: we can do whatever we like, and if other agents fail to choose the best action, then any unfortunate consequences are clearly their fault and not ours.

Confusing the issue of what one ought to do when controlling only one variable in a complex situation with the issue of who deserves credit for the outcome of that situation seems to be a very, very frequent problem.  I see it as an element of many personal conflicts as well as in debates on pretty much every political issue out there.  It’s plainly present just about every time one hears something about how “so-and-so could have stopped this from happening” or any mention of “victim-blaming”.  Maybe sometime later I’ll write something that delves into one of these controversies and how this confusion is a major aspect of the bad argumentation surrounding it, but for now, I just want to stress how relevant I believe it is to many serious disputes.

There is a frequently-cited fallacy called “is-versus-ought”.  It takes many different forms, but in the context I see most often, it means that someone objects to the claim that agent A should do thing X by pointing out that, ethically speaking, agent A shouldn’t be required to do X.  This type of reasoning falls under the is-versus-ought fallacy because it confuses “A should do X” with “A ought to be required to do X”.  Perhaps we ought not to live in a world where A needs to do X, but for the time being, that’s the way our world is.  Anyway, I would point out that this is essentially a case of, or perhaps equivalent to, the widespread confusion I’ve been emphasizing.

IV. So how does one assign moral responsibility?

I hate to write such a long essay detailing how computing degrees of moral responsibility in real-life multivariate situations is more subtle than it may appear, without actually proposing a way to determine moral responsibility.  I felt so sure that there must be a nice mathematical way to describe it, just as there was a nice mathematical way to describe which direction each agent in a multivariate situation should move in and by how much.  But unfortunately, every time I’ve thought that I’d gotten the right idea and tried to write it down, it turned out either not to make coherent sense or not to really explain anything.

The most intriguing idea I’ve had is to consider not only the partial derivatives ∂y/∂x and ∂y/∂w themselves, but also how each partial derivative changes as the other variable starts to increase.  That is, I would be looking at the second-order partial derivatives ∂(∂y/∂x)/∂w and ∂(∂y/∂w)/∂x.  For a smooth function, these quantities are always equal by Clairaut’s Theorem, but as I’ve already established, in real life we’re often dealing with non-smooth functions.  The idea is something like this: if by increasing x it becomes much riskier to start increasing w (that is, ∂(∂y/∂w)/∂x is negative), and meanwhile w was increased, then maybe Mr. X deserves some blame for bringing about a situation for Ms. W where increasing w would more easily lead to harm.  If we switch the variables and find that ∂(∂y/∂x)/∂w is closer to 0 — so that Ms. W’s choice didn’t comparably raise the riskiness of increasing x — then that would imply that Mr. X deserves much more of the blame.
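To see that these one-sided second-order quantities really can disagree for a non-smooth function, here’s a Python sketch using a standard textbook counterexample (not one of this post’s running examples).  One subtlety: the inner difference step has to be much smaller than the outer one, since we’re approximating an iterated limit:

    def f(x, w):
        # Classic function whose mixed second-order partials at the origin
        # disagree (it isn't smooth there, so Clairaut's Theorem doesn't apply).
        if x == 0 and w == 0:
            return 0.0
        return x * w * (x**2 - w**2) / (x**2 + w**2)

    def mixed_xw(f, x, w, outer=1e-4, inner=1e-9):
        # Estimate ∂(∂f/∂x)/∂w at (x, w).
        dfdx = lambda ww: (f(x + inner, ww) - f(x, ww)) / inner
        return (dfdx(w + outer) - dfdx(w)) / outer

    def mixed_wx(f, x, w, outer=1e-4, inner=1e-9):
        # Estimate ∂(∂f/∂w)/∂x at (x, w).
        dfdw = lambda xx: (f(xx, w + inner) - f(xx, w)) / inner
        return (dfdw(x + outer) - dfdw(x)) / outer

    print(mixed_xw(f, 0, 0))  # roughly -1
    print(mixed_wx(f, 0, 0))  # roughly +1

The tentative rule above would then compare the two numbers: the agent whose variable makes the other agent’s marginal effect more harmful — the more negative of the two mixed partials — bears the greater share of the blame.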

This approach definitely appears to have some issues.  For instance, a lot of these situations are discrete (each variable can be set to either one value or another), and it’s a crude enough business just trying to estimate first-order partial derivatives, let alone second-order ones.  Oftentimes the outcomes look symmetric between the two variables.  The idea as I expressed it still isn’t clear regarding exactly what formula computes responsibility, and I’m not so sure of the best way to generalize it to three or more variables.

But still, in some examples, even discrete ones, the outcomes don’t really look symmetric and it may be reasonable to suppose that one second-order partial derivative is greater than the other.  I could almost argue this with the email example, but let me switch to a situation where it might be easier to see.  Suppose I carelessly leave my laptop alone in a public place, and somebody steals it.  Overall utility (which as usual we denote by y) is sharply decreased (at least, if we assume that it’s overall bad for an item to be stolen even though it benefits the thief).  Now this couldn’t happen without both an increase in my degree of laptop-guarding-carelessness (call it x) and an increase in the thief’s laptop-stealing behavior (call it w).  But consider this: if the would-be thief keeps their laptop-stealing behavior to a minimum, it actually increases utility for me to become more careless with laptop-guarding: it’s certainly less trouble for me if I don’t bother to keep it with me all the time.  Whereas if I keep my carelessness to a minimum, there is no change in utility due to a would-be thief deciding they want to steal it: it’s not going to get stolen either way.  It’s not unreasonable to conclude from this that ∂(∂y/∂x)/∂w < ∂(∂y/∂w)/∂x, and to link this asymmetry to the fact that the thief is more blameworthy than I am in the event that both x and w are increased.
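With invented toy numbers that just encode this story (a high and a low value of each variable, nothing more), the first-order asymmetry is easy to exhibit; the step from there to the second-order inequality is exactly the heuristic leap described above:

    # Made-up utilities y at low (0) / high (1) carelessness x and
    # laptop-stealing behavior w:
    y = {(0, 0): 0,    # laptop guarded, no thief around: baseline
         (1, 0): 2,    # careless but no thief: slightly better (less hassle)
         (0, 1): 0,    # thief around but laptop guarded: nothing happens
         (1, 1): -50}  # careless and thief around: laptop stolen

    # Marginal effect of each variable while the other stays at its minimum:
    print(y[1, 0] - y[0, 0])  # +2: carelessness alone actually helps a little
    print(y[0, 1] - y[0, 0])  #  0: thieving inclination alone changes nothing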

On the other hand, I’ll have to think about it for a while longer before I can feel confident that this kind of explanation fully justifies our moral intuitions.  It might be better for now to assume that there is an explanation out there which is more virtue ethical in flavor, which says that things like stealing, or not following instructions for responsible student behavior, are just wrong, or at least wronger than leaving valuable things unguarded or failing to read emails within an hour of receiving them.

Either way, at least we know that there’s a clear-cut way to think about what we ought to do with our own variable even when there are other variables out there we can’t control.


3 thoughts on “Multivariate Utilitarianism”

  1. I first read this a while ago, noting the similarity to my then-in-progress piece on partial narratives. This is much heavier on the math though, which made me shy away from engaging with it that time. A few graphs or charts illustrating the concepts might make it more digestible.

    The ethical partial derivative model is very elegant (and discontinuity being a big issue is an interesting find), but I have objections primarily because I have objections to utilitarianism, making me think the elegant model is barking up the wrong tree a bit.

    Above all else, it makes me think that (not that I needed convincing) combining different fields of study, like calculus and ethics, is something we need much more of.


    1. Indeed, I was concerned when writing this post that explaining partial derivatives in that much depth without any sort of diagram was going to make it hard to read, as I mentioned at the top of the post. Unfortunately, I currently lack the technical know-how to create nice-looking charts/graphs and import them. Surprisingly, I’ve never had to do this even for my mathematical papers, which I’ve written using LaTeX. I’d appreciate any tips as to what software you’ve been using for this, etc. But either way, I hope one day to get around to revising this post with more “pictures” added in.

      I look forward to hearing about your objections to utilitarianism in your future writing. I myself consider it the best way to think about most, but not all, ethical questions, so at the moment I can’t endorse it 100%.


      1. I just use PowerPoint for graphs; it’s far from great, but I have a certain obsession with aesthetic detail that works well with design work in general.

        About utilitarianism, I’ve been thinking about writing something but it’s not first on my list, really. To me it seems obvious that ethics is the study of the consequences of a grab-bag of social instincts and there is no reason to think there is anything coherent underneath it all. If one does insist on constructing a consistent system then it has very little relation to how human morality actually works – which creates problems because there is nothing outside of human moral instincts that grounds morality. I’ll stop now or I’ll go on forever.

