Here is a video of the talk. (The audio is rather low, I'm afraid, but should be okay with headphones.)
The thesis was that morality is a set of instincts, customs, preferences and behaviours unified by their common function: facilitating cooperative, for-the-good-of-the-group behaviour. The morals themselves are best revealed by empirical techniques, including ethnographic surveys, Curry claimed, while the functions of those morals are best elucidated using game theory. When we survey moral attitudes and behaviours, we will see that we tend to consider as moral only those acts which are best for the group.
"Morality turns out to be a collection of biological and cultural solutions to the problems of cooperation and conflict recurrent in human social life." For example, we consider it moral to respect property rights, as predicted by the game theoretic solution to conflict over resources: deference to prior ownership (the endowment effect). For another, we consider humility to be morally praiseworthy in some circumstances, bravery in others. Game theory predicts that humility is optimal when faced with hawks in hawk-dove games, but bravery better when faced with doves.
Curry deliberately took no position on what the function of cooperation is. He didn't want to tie it to fitness or to utility, instead insisting that this isn't a fair question, any more than asking 'what is the point of goodness?' We just think it's good. Curry resisted speculation about empirical questions regarding whether particular preferences are best understood as having evolved by cultural or by genetic selection, and rejected the idea of treating group selection as an alternative to selection on genes.
His attitude towards philosophers who would complain that his story lacks any true normativity? That his account contains all the normativity there is, and that to look for more is a fool's errand. People have a subjective preference for particular states of the world, and that's all there is to it. I suggested that Curry might want to claim that facts about what is good are exhausted by facts about what people believe is good (and that, furthermore, our beliefs about what is good are imperfectly determined by facts about what best serves the common good), but he resisted bringing in reference to facts.
Curry made a good distinction between mechanisms that motivate prosocial behaviour and mechanisms that assist us in evaluating the social behaviour of others. He also had good responses to a number of counterexamples. For example, moral theorists describe 'duties to oneself', such as a duty not to commit suicide, or to maintain good health. These are explicitly non-social. Yet Curry suggests that they might reasonably be interpreted as requirements to be the sort of person that is capable of being useful to society. Concern for non-human animals, meanwhile, might be a spandrel - a byproduct, or misfiring, of our empathy for other humans. On the other hand, some rules of cooperation don't seem moral at all - driving on the left-hand side of the road, for example. Yet Curry cast these as 'sins of omission': cases where obedience to the rule might not feel moral, but departure from it - deliberately driving on the wrong side - certainly would. Curry suggested that our moral preferences are flexible, because they adapt when interaction possibilities change, or when new technologies come along that facilitate cooperation. Our reactions to test-tube babies, for example, and perhaps to organ markets in the future, suggest that we are initially resistant to evolutionary novelties but can gradually shift.
Ultimately, Curry's position is relativistic, in that it prescribes different behaviours to different individuals depending on the particular cooperation problems they face. Morals vary because people vary with respect to who their usual cooperation partners are and what sorts of games best describe the interactions they normally engage in. There can be no prescription, from Curry's theory, about what sort of games to engage in, given a choice, or about which individuals to cooperate with. Moral behaviour for a Nazi consists in cooperating with other Nazis, for example. Moral theorists, on the other hand, tend to think that the facts about what is good are independent of facts about what people think is good.
When pressed on this, Curry suggested that we could arrive at a more satisfying prescription for the Nazi by pointing out that the public good would be increased with respect to an even larger social group if the Nazi looked beyond his Nazi party interaction partners to the world more generally. Then the more moral, because more cooperative, behaviour would be cooperation with the larger group, and not with the Nazi party.
I see two ways to understand this move.
Firstly, we can read the claim about the Nazi's obligation to cooperate with the larger group as representing Curry's own preferences, in line with a relativist reading. In other words, nothing is implied about the objective goodness or badness of Nazi activities. An agent's moral preferences are determined (albeit imperfectly) by the sorts of social interactions they are typically exposed to, together with game-theoretic heuristics about how to make those interactions maximally cooperative. Thus, because Curry and the hypothetical Nazi are exposed to different social interactions, they exhibit different moral preferences. Moral facts, on this reading, are internal to particular games and their players.
The second reading would escape the practical inefficacy of the first by assuming that lurking in the background of Curry's account is a general maxim identifying the good with cooperation with the largest possible number of interaction partners.
But why should cooperation be a good thing anyway? Curry seemed at times to imply that cooperation is just obviously in a person's interests, which goes against one of the central theses of the evolution-of-cooperation literature: that group ends are achieved by compromising the interests of group members in some way.
Even so, I worry that in practice it would be impossible to evaluate which of any set of possible actions is the more cooperative. There are simply too many different games one might be engaged in at any instant, and too many different people one might cooperate with. Furthermore, cooperation doesn't tend to aggregate - in other words, many acts of cooperation with respect to person A are acts of non-cooperation with persons B, C, and so on. For example, I buy a chocolate bar and decide to pay rather than steal it. But why didn't I buy a cereal bar, or a fizzy drink instead? Why did I buy it from the petrol station and not the newsagent?
In any case, suppose we can insert some sort of maxim that will allow Curry's account to generate normative, as well as descriptive, claims, so that an agent could use it to solve moral dilemmas. It remains the case that in game theory there are frequently multiple equilibria - multiple solutions to a problem - so there will remain insoluble moral dilemmas for agents, though perhaps the right thing to say here is that sometimes there are many moral possibilities. In the end I suspect that most moral dilemmas concern situations in which there are several conflicting agents or groups with which the actor might cooperate, or in which a game has different solutions depending on whether it is solved for the human agent, the agent's genes, or any of a number of more inclusive groups. The overarching problem of human social life is not 'should I cooperate?' but 'who should I cooperate with?'
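To see why multiple equilibria block a unique prescription, here is a toy sketch (my own illustrative payoffs, not anything from Curry's talk) of the driving-side convention mentioned earlier: both 'everyone drives on the left' and 'everyone drives on the right' are equilibria, and the game itself gives no reason to prefer one over the other.

```python
# Toy coordination game: drivers do well if they match sides, badly if they don't.
strategies = ["left", "right"]

def payoff(a, b):
    """Payoff to a driver choosing side a when the other driver chooses side b."""
    return 1 if a == b else -1

def pure_nash_equilibria():
    """Enumerate strategy pairs where neither driver can gain by switching."""
    eqs = []
    for a in strategies:
        for b in strategies:
            a_best = all(payoff(a, b) >= payoff(alt, b) for alt in strategies)
            b_best = all(payoff(b, a) >= payoff(alt, a) for alt in strategies)
            if a_best and b_best:
                eqs.append((a, b))
    return eqs

print(pure_nash_equilibria())  # [('left', 'left'), ('right', 'right')] -> two equally good solutions
```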
I suspect some of our cognitive biases have evolved by genetic selection to make us more likely to engage in behaviours that enhance the fitness of our genes via participation in particular sorts of group interactions. *But* the group in question is likely to be significantly smaller than the universal community envisaged by Rawls or by the Golden Rule. It's also a fair bet that we have developed several cultural mechanisms designed to make cooperative interactions safer and easier within particular groups. Sometimes the groups picked out in each case will coincide, but sometimes they will not. The result is a patchwork of conflicting and overlapping impulses towards and against investment in other people. The challenge for moral theorists is to come up with a framework that can help us to escape the dilemmas we face when these pulls conflict, and there is nothing within Curry's account, that I can see, that could help us to solve such dilemmas. Fortunately, I don't think the resolution of such dilemmas is really Curry's primary concern.
Dr Curry made some good points in a follow-up conversation which I thought I'd add here.
Curry: A consequentialist would say that, when faced with conflicting claims, you should try to work out what would maximise the greatest happiness of the greatest number, and do that. I would say something very similar: 'try to work out what would maximise the common good, and do that'. Does neither account help solve such dilemmas?
Clarke: Do you want to suggest that your underlying currency for evaluating cooperation is happiness?
On your suggestion 'try to work out what would maximise the common good, and do that':
My worry is that moral dilemmas are typically situations in which there is no obvious answer as to what would maximise the common good, or as to who should be included within its remit. Utilitarianism, at least, has no ambiguity with respect to the latter.
But I guess my more pressing question, assuming you do want to identify the good with cooperation with the largest number (if that's a fair reading of 'the common good'), is: why? Why should our instincts towards helping strangers overrule our instincts towards prioritising our kin?
I really enjoyed reading Avi Tuschman's book, 'Our Political Nature', in which he talks about how some people are disposed to extend their moral sympathies more widely than others. He lines up the differences with left and right, in the political sense: those on the right prefer to interact in smaller social groups. Furthermore, I can imagine someone of this disposition arguing that it is better for everyone - better for the common good - if everyone sticks to smaller interaction groups. In this case we might not think that the common good is maximised by having everyone maximise the range of their cooperation.
But maybe I'm confused about what you mean by the common good?
Curry: Yes, I think the underlying currency is something like happiness, or wellbeing, or utility (Buss 2000). Like 'health', it is difficult to define precisely, but we can see that some states of the world are preferable to others. (I'm using 'common good' as a colloquial term for 'a superior equilibrium in a nonzero-sum game'.)
Yes, dilemmas usually have no obvious answer; that's what makes them dilemmas. We might need to undertake research into the different options and their consequences; or we might have to say we don't know and/or are indifferent. Generally speaking, I don't think quandary ethics is a good way to make progress in understanding morality. Hard cases make bad law.
And yes, in many cases, cooperating on a larger scale is preferable to cooperating on a smaller scale (and that is basically what's happened over the course of civilisation). But sometimes this will come at too great a cost (to ourselves or our families), or cooperating on a large scale will be unstable / unsustainable, and so we're better off cooperating in smaller more manageable groups.