Agency does not imply moral responsibility [the brief version]

[Content note: uncharacteristically short and sweet.]

The object of this very short essay is to concisely state a proposition and brief argument which I refer to frequently but have lacked a suitable post to link to.  It is one of the central points of my longest essay, “Multivariate Utilitarianism”, but it’s buried most of the way down that post, and it seems less than ideal to link to all of “Multivariate Utilitarianism” every time I want to make an off-hand allusion to the idea.

Here is how I would briefly summarize it, using the template of a mathematical paper (even though the content won’t be at all rigorous, I’m afraid).

Proposition. The fact that an agent X acts in a way that results in some event A which increases/decreases utility does not imply that X bears the moral responsibility attached to this change in utility.  In other words, agency does not imply moral responsibility.

Proof (sketch). One way to see that agency cannot imply moral responsibility in situations involving multiple agents is the following simple argument by contradiction.  Suppose there are at least two agents X and Y whose actions bring about some event that creates some change in utility.  If X had acted otherwise, then this change in utility wouldn’t have happened, so if we assume that agency implies moral responsibility, then X bears the full responsibility (credit or blame) for the change in utility.  By symmetry, we see that Y also bears that same full responsibility.  But both cannot be fully responsible for the same change in utility, or at least, that seems absurd.
One naïve approach to remedying this would be to divide the moral responsibility equally among all agents involved.  However, working through actual examples shows that this quickly breaks down into another absurd situation, mainly because the roles of the parties bringing about an event are not all equally significant.  We are forced to conclude that there is no canonical algorithm for assigning moral responsibility to each agent, which in particular implies the statement of the proposition.
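
Purely as an illustration of the proof sketch (the scenario and the utility figure of −10 below are invented, not taken from the post), here is a toy sketch in Python of how the two naïve attribution rules behave when two agents jointly cause a single change in utility:

```python
# Toy illustration of the proof sketch: two agents, X and Y, jointly cause
# an event that lowers utility by 10 units (a made-up number). The event
# happens only if BOTH act a certain way, so each agent's counterfactual
# contribution is the entire change in utility.

UTILITY_CHANGE = -10  # invented value: the drop in utility if the event occurs

def counterfactual_responsibility(agents):
    """Naive rule: each agent whose different action would have prevented
    the event bears the full change in utility."""
    return {a: UTILITY_CHANGE for a in agents}

def equal_split_responsibility(agents):
    """Second naive rule: divide the change in utility equally."""
    return {a: UTILITY_CHANGE / len(agents) for a in agents}

agents = ["X", "Y"]

# The counterfactual rule assigns -10 to X and -10 to Y, for a "total"
# responsibility of -20: twice the actual change in utility.
print(counterfactual_responsibility(agents))   # {'X': -10, 'Y': -10}

# The equal split at least sums to the actual change in utility.
print(equal_split_responsibility(agents))      # {'X': -5.0, 'Y': -5.0}
```

The second rule conserves the total, but, as noted above, it ignores the fact that the agents’ roles in bringing about the event may be wildly unequal.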

Remark. (a) The above argument seems quite obvious (at least when stated in more everyday language) but is often obscured by the fact that in situations with multiple agents, usually only one agent is being discussed at a particular time.  That is, people say “If X had acted differently, A wouldn’t have happened; therefore, X bears moral responsibility for A” without ever mentioning Y.
(b) A lot of “is versus ought” type questions boil down to special cases of this concept.  To state “circumstances are this way, so one should do A” is not to state “circumstances should be this way, such that one should have to do A”.

Example.  Here I quote a scenario I laid out in my longer post:

[There are] two drivers, Mr. X and Ms. W, who each choose to drive at a certain speed at a particular moment (let’s call Mr. X’s speed x and Ms. W’s speed w), such that if either one of them goes just a bit faster right now, then there will be a collision which will do a lot of damage, resulting in a decrease in utility (let’s again write y for utility).  At least naïvely, from the point of view of Mr. X, it doesn’t make sense in the heat of the moment to compute the optimal change in w as well as the optimal change in x, since he has no direct control over w.  He can only determine how best to adjust x, his own speed (the answer, by the way, is perhaps to decrease it, or at least definitely not to increase it!), and apart from that all he can do is hope that Ms. W likewise acts responsibly with her speed w… If y represents utility, then our agent Mr. X should increase x if and only if ∂y/∂x is positive.  After all, he has no idea what Ms. W might do with w and can’t really do anything about it, so he should proceed with his calculations as though w is staying at its current value.

That’s what each agent should do.  I’ve said nothing about how much either of them deserves praise or blame for the outcome of their actions.
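
For concreteness, here is a minimal numerical sketch of that decision rule; the utility function and the speed values below are made up purely for illustration and are not meant to model anything about actual driving:

```python
# Minimal sketch of the decision rule from the quoted scenario: Mr. X holds
# Ms. W's speed w fixed at its current value and adjusts his own speed x
# according to the sign of the partial derivative dy/dx.
# The utility function here is invented purely for illustration.

def utility(x, w):
    """Toy utility: some benefit from getting somewhere faster, minus a
    steep collision penalty once the combined speed crosses a threshold."""
    collision_penalty = max(0.0, x + w - 100.0) ** 2
    return x + w - collision_penalty

def partial_dy_dx(x, w, eps=1e-6):
    """Numerically estimate the partial derivative of utility with respect
    to x, treating w as fixed (Mr. X has no control over it)."""
    return (utility(x + eps, w) - utility(x - eps, w)) / (2 * eps)

x_current, w_current = 55.0, 50.0   # made-up current speeds

if partial_dy_dx(x_current, w_current) > 0:
    print("Increase x: a bit more speed still raises utility.")
else:
    print("Do not increase x: a bit more speed lowers utility.")
```

The point is only that Mr. X’s calculation treats w as a constant; nothing in it says anything about blame.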

The proposition states that, without knowing further details about exactly what the two drivers did, we in fact have no information on how blameworthy Mr. X is for the accident.

To state it (or perhaps overstate it) bluntly, I cite this “agency ⇏ responsibility” proposition in an attempt to remedy what I believe is a ubiquitous fallacy at the bottom of many, if not most, misunderstandings.  I wish everyone in the Hawks and Handsaws audience a Happy New Year and look forward to writing more here in 2017!


3 thoughts on “Agency does not imply moral responsibility [the brief version]”

  1. I kind of find it hard to understand why you’d think there’d be a canonical algorithm to determine moral responsibility? Maybe it’s just me, but it seems to be addressing an argument nobody makes. From your earlier article I guess you’d want to formalize responsibility attribution in order to have a correct choice but are finding it difficult?

    I’m looking for nails with my hammer now, but these discussions about responsibility seem like textbook cases of conflicting partial narratives to me, and solving this version might require solving the whole generalized problem. People pick their favourite based on many factors, social and psychological. Looking for potential rigor in it is a tall order.


    1. Maybe being a research mathematician, I just can’t shake off the habit of wanting to seek a canonical algorithm for determining everything, even while in some sense that might not be how real life works. But it feels reasonable to me to imagine that there should be a canonical way to describe moral responsibility in multi-agent situations (maybe “definition” is a better term than “algorithm” here), given that there’s a canonical way to do so in single-agent situations (essentially the main axiom of utilitarianism, which I realize you’re skeptical of in the first place). However, I’m still a long way from settling on an answer, and maybe an answer of the form I imagined doesn’t exist in the first place.

      Anyway, in this post I was making a negative claim that a certain algorithm is not valid, which seems pretty safe at the moment. I’m not accusing others of explicitly claiming that it is valid either (if they were, they would probably sound obviously ridiculous), but I believe that people frequently argue by implicitly and subconsciously assuming it.


      1. Right, I realize that from within utilitarianism there may very well be consistent models for assigning responsibility – I just worry that producing an elegant solution requires formulating the problem in such a way that it’s quite different from what most people consider morality to be. Then it risks being irrelevant. I see the same problem with, say, Kantian deontology: he was so concerned about making ethics an exercise in rationality that he came up with something entirely divorced from most humans’ sense of morality.

        Maybe my scepticism about ethical theories in general makes my objections irrelevant to your project, though, so take it with a grain of salt. 🙂

