Kantian Collectivism
Considering Collective Agency in Kant's Ethics

Written on May 8, 2018

A few years ago, I did an independent study in Kantian practical philosophy (ethics). I wrote a term paper on what happens to Kantian ethics when you treat collective groups of people as moral agents, with moral responsibilities as well as rights.

I’m going to go into the context surrounding the paper here, and some of my thoughts on Kant, moral systems in general, and collectivism vs individualism in particular.

The paper itself is too much for me to summarize here, so you should read Considering Collective Agency in Kant’s Ethics if you’re interested.

Kant

Since I was first exposed to Kantian moral philosophy in senior year of high school, I’ve been taken with it. Somehow his philosophy made sense to me in a way others didn’t.

In part, this is because Kant had a very Jewish philosophy. He wasn’t Jewish himself, but in a lot of ways his philosophy jibes with what I learned growing up in a Jewish household.

  • For one, the focus on morality being based in duty. The answer to “why should I do X?” isn’t “because it will help people”, or “because it will make your life better”. Although those are often a part of the answer, ultimately we should do these things because those are the things we ought to do. There’s nothing further to it. Kant similarly rejects “because it will help people” as a basis for morality, since one might then ask: what if I don’t want to help people?

  • Another Jewish tenet that Kant subscribes to is the idea that we have a duty to ourselves. We shouldn’t commit suicide or harm ourselves, for example. I rarely find this in other moral philosophies, but for Kant it is obvious.

Kantian moral philosophy also treats maxims, or principles of action, as the subject of moral inquiry. In essence, that means your decisions (and the principles by which you make them) are being judged, not the outcomes of your actions, nor you as a person. Kant famously endorsed the maxim “Do what is right, though the world may perish”. While I don’t really believe this in full, I am much more sympathetic to a philosophical system that judges your decisions rather than their consequences, because your decisions are the only thing you directly control. A moral system should tell me how to act morally, and if it judges things I have no control over, how can I be sure I’m acting morally?

The logical part of his philosophy also appeals to me. It is based on the idea that morality can be derived from purely logical considerations, where inconsistencies are what produce immorality.

The second formulation of the categorical imperative (the central thesis of Kantian morality) is that you should treat people* as ends in themselves, not as mere means. Essentially, treat people’s agency and well-being as the goal of your actions; don’t treat people as pawns to get what you want. For me, interpersonal interactions are the bedrock of moral life. I understand that broader societal issues are at play, but in my intuitive moral system, how you treat other people is of utmost importance. So Kant’s focus on treating others well and respecting their autonomy jibes well with how I think.
* Really, any moral agent. But I get into that more in the paper.

I’m a big fan of Kant, and I’m a big fan of taking philosophical systems and twisting them in ways their authors didn’t anticipate. At some point I’ll do another paper/post on why I think Kant is an anarchist philosopher.
If you ignore all of the blatantly statist things he says.

Moral Systems

It seems to me that different moral frameworks are good for different situations. I would say that they apply at different limits. It’s a little bit like how Newtonian mechanics applies to relatively large objects (compared to atoms) moving at relatively slow speeds (compared to light).

Similarly, utilitarianism applies at the large-scale limit.

Consider the classic trolley problem: there is a train headed towards five people who can’t get out of the way, and you have to choose whether to divert the train onto a track with only one person who also can’t get out of the way.

The typical deontological (Kantian) answer is that you shouldn’t divert the train, because doing so would be killing someone, and you shouldn’t kill people.
I actually think that this is a poor understanding of deontology, but I won’t go into that here.

The typical utilitarian answer is that you should divert the train, because that kills fewer people, which is a better outcome and therefore the right choice.

I’m going to propose a radically different answer: It doesn’t matter. Neither decision is right or wrong. If you look at the tracks, and want to save those five people, and so you pull the lever, you’ve done the right thing. If you look at the tracks, and can’t bear to kill the person on the other track, you’ve also done the right thing. It depends mostly on what you’re focusing on.

But let’s get back to the original point: the large scale. Let’s say that rather than one person versus five, we have one person on one track, and half the population of the world on the other. You kill the one person. There is no question what the moral decision is. It doesn’t even matter if that one person is your mother.* This is why I say that utilitarianism works well at the large-scale limit. If you have a vast number of lives at stake, what matters is their well-being. If it’s only a few, utilitarianism seems to work less well.
* A common variation on the trolley problem where it turns out (unsurprisingly) that most people would rather save their mother at the cost of several other lives.

Of course, none of this is based in a philosophical framework of what morality means, nor is it particularly well fleshed out. I should probably develop these ideas more fully and put them in their own post sometime.

Collectivism vs Individualism

I tend to struggle a lot with the conflict between collectivism and individualism in particular.

  • Collectivism is the principle of giving the group priority over each individual in it, and often elevates the status of cultures and societies.
  • Individualism is the opposite. It emphasizes the individual, often to the point of denying the reality of collective structures in favor of a reductionist view of everything as composed merely of individuals.

Personally, I think there’s intrinsic value to cultures, and they shouldn’t simply be stamped out in an attempt to improve the well-being and freedom of the individual. Obviously different situations require different interventions, but I reject the idea that people from different cultures all ultimately want (or should want) the same things and that we should be homogenizing the world.

I also think it’s possible to consider the good of the whole collective independently of the good of the individuals that make it up. Perhaps one can always reduce it down to the long-run well-being of some set of individuals, but doing so just obscures what’s really going on. It may provide a simple language to talk about things, but it provides us no guidance as to what to say! The language is totally mismatched to the task of describing morality.

However, as I mentioned before, my moral conscience is often based on how one interacts with others. Treating other people well on an interpersonal level is hugely important to my sense of morality, and that naturally lends itself to an individualist worldview.

So it intrigued me to try to reconcile these two worldviews by bringing them into one moral framework.

Future Work

I think it would be fascinating to apply a similar analysis (to the one in my paper) to other moral philosophies (utilitarianism, virtue ethics, etc.). Perhaps a general account of morality with collective agents could be made. There might even be other basic moral philosophies that aren’t possible without considering things in this way. I think this is a promising way to bridge the usually diametrically opposed individualist and collectivist ethical frameworks.
