[Content note: uncharacteristically short and sweet.]
The object of this very short essay is to concisely state a proposition, and a brief argument for it, which I refer to frequently but have lacked a suitable post to link to. This is one of the central points of my longest essay, “Multivariate Utilitarianism”, but it’s buried most of the way down, and it seems less than ideal to link to “Multivariate Utilitarianism” each time I want to make an off-hand allusion to the idea.
Here is how I would briefly summarize it, using the template of a mathematical paper (even though the content won’t be at all rigorous, I’m afraid).
Proposition. The fact that an agent X acts in a way that results in some event A which increases/decreases utility does not imply that X bears the moral responsibility attached to this change in utility. In other words, agency does not imply moral responsibility.
Proof (sketch). One way to see that agency cannot imply moral responsibility in a situation where multiple agents are involved is through the following simple argument by contradiction. Suppose there are at least two agents X and Y whose actions jointly bring about some event that creates some change in utility. If X had acted otherwise, then this change in utility wouldn’t have happened, so if we assume that agency implies moral responsibility, then X bears responsibility (credit or blame) proportional to the change in utility. By symmetry, Y also bears that same responsibility. But they cannot each bear full responsibility for the same change in utility; or at least, that seems absurd.
One naïve approach to remedy this would be to divide the moral responsibility equally among all agents involved. However, working through actual examples shows that this quickly breaks down into another absurdity, mainly because the parties’ roles in creating an event are not all equally significant. We are forced to conclude that there is no canonical algorithm for assigning moral responsibility to each agent, which in particular implies the statement of the proposition.
Remark. (a) The above argument seems quite obvious (at least when stated in more everyday language) but is often obscured by the fact that in situations with multiple agents, usually only one agent is being discussed at a particular time. That is, people say “If X had acted differently, A wouldn’t have happened; therefore, X bears moral responsibility for A” without ever mentioning Y.
(b) A lot of “is versus ought” type questions boil down to special cases of this concept. To state “circumstances are this way, so one should do A” is not to state “circumstances should be this way” or “one should have to do A”.
Example. Here I quote a scenario I laid out in my longer post:
[There are] two drivers, Mr. X and Ms. W, who each choose to drive at a certain speed at a particular moment (let’s call Mr. X’s speed x and Ms. W’s speed w), such that if either one of them goes just a bit faster right now, then there will be a collision which will do a lot of damage, decreasing overall utility (let’s use y to denote utility). At least naïvely, from the point of view of Mr. X, it doesn’t make sense in the heat of the moment to compute the optimal change in w as well as the optimal change in x, since he has no direct control over w. He can only determine how to best adjust x, his own speed (the answer, by the way, is perhaps to decrease it or at least definitely not to increase it!), and apart from that all he can do is hope that Ms. W likewise acts responsibly with her speed w… Since y represents utility, our agent Mr. X should increase x if and only if ∂y/∂x is positive. After all, he has no idea what Ms. W might do with w and can’t really do anything about it, so he should proceed with his calculations as though w is staying at its current value.
That’s what each agent should do. I’ve said nothing about how much either of them deserves praise or blame for the outcome of their actions.
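For readers who like to see the decision rule made concrete, here is a minimal numerical sketch. The utility function below is entirely made up for illustration (utility grows mildly with speed until a hypothetical collision threshold, then drops sharply); the only point it demonstrates is Mr. X’s rule of checking the sign of ∂y/∂x while holding w fixed.

```python
# Toy illustration of the rule "increase x iff dy/dx > 0, holding w fixed".
# The utility function is invented for this sketch and has no other significance.

def utility(x, w):
    """Made-up utility y as a function of the two drivers' speeds x and w."""
    collision_threshold = 100.0  # hypothetical combined speed at which a crash occurs
    benefit = x + w  # faster travel is mildly useful...
    penalty = 50.0 * max(0.0, x + w - collision_threshold) ** 2  # ...until a crash
    return benefit - penalty

def should_increase_speed(x, w, eps=1e-6):
    """Mr. X's rule: increase x iff the partial derivative dy/dx is positive,
    treating w as fixed at its current value (he controls only x)."""
    dydx = (utility(x + eps, w) - utility(x - eps, w)) / (2 * eps)
    return dydx > 0

print(should_increase_speed(45.0, 45.0))  # well below the threshold: True
print(should_increase_speed(55.0, 50.0))  # past the threshold: False
```

The sketch treats w as a constant inside `should_increase_speed`, which is exactly the “proceed as though w is staying at its current value” assumption in the quoted passage; nothing in the code says anything about who is to blame if the crash happens.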
The proposition states that, in fact, without knowing further details about exactly what the two drivers did, we have no information about how blameworthy Mr. X is for the accident.
To state it (or perhaps overstate it) bluntly, I cite this “agency ⇏ responsibility” proposition in an attempt to remedy what I believe is a ubiquitous fallacy at the bottom of many if not most misunderstandings. I wish everyone in the Hawks and Handsaws audience a Happy New Year and look forward to writing more here in 2017!