Daniel,
However, I disagree with your conclusion that your item #4 is necessarily distinct from #3. As Jay put it, it's a given that, with present-day technology, it's effectively impossible for any single individual to survive on his own, let alone have a comfortable, enjoyable life, and thus it's in everyone's long-term rational self-interest to have a working society around them which provides various opportunities for personal advancement ... But you're saying that egoism (item #4) isn't distinct from utilitarianism (item #3). There are reams of books written about this distinction. You can't collapse these two concepts into some murky conglomerate with a few keystrokes. When you say "it's in everyone's long-term rational self-interest to have a working society around them," you shift the view away from the individual moral agent and onto the collective. This is how Holocausts have occurred.
The morally proper thing to do is to never leave the originating view of the individual moral agent -- to never include the 'herd' when talking about the good. There are two reasons for this. One is experiential: you get tripped up. For instance, to make use of the Hitler analogy, the extermination of (or the scientific experimentation on) the Gypsies and the physically imperfect was good for the herd. Thinning out the 'weaker' ones allows the 'herd' to travel at a faster pace. This utilitarian thinking justifies the mass murder that occurred.
But mass-murder isn't moral. So, via Modus Tollens, I can show that utilitarianism isn't moral (or morally good).
Morality is about life and value, and massacre is against life and value (so moralities justifying massacres are wrong).
Utilitarianism justifies massacres.
****************
Therefore, utilitarianism is wrong.
[or (edit) more formally]:
The moral proposition "p" (utilitarianism) implies "q" (mass murder) as a moral proposition.
But "q" is not moral (or "true" -- as a moral proposition).
**************
Therefore, "p" is not moral (or "true" -- as a moral proposition).
[end edit]
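For readers who like to see the inference pattern nailed down, the step above is just modus tollens, which can be checked mechanically. Here is a minimal sketch in Lean; the reading of "p" as utilitarianism and "q" as the permissibility of mass murder is my gloss on the argument, not part of the logic itself:

```lean
-- Modus tollens: from (p → q) and ¬q, conclude ¬p.
-- Gloss (an assumption for illustration): p = "utilitarianism is morally
-- true", q = "mass murder is morally permissible".
theorem modus_tollens {p q : Prop} (h : p → q) (hnq : ¬q) : ¬p :=
  fun hp => hnq (h hp)  -- assume p, derive q, contradict ¬q
```

The formal validity of the step is uncontroversial; the philosophical work lies entirely in defending the two premises.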
And the other reason is more philosophical: the valuing agent isn't just the first cause of morality, but its final end, too. I think you would agree that, for morality to exist, there has to be a valuing agent capable of choosing alternative courses of action in its life. The next philosophical step is to remember that morality is a natural, individual need, not something merely wanted. If you understand morality the way you understand food intake (as essential for human life), you will understand how personal it is.
As we cannot digest food in a 'collective stomach,' so too we cannot successfully use morality collectively. When utilitarians use morality collectively, they use it as a floating abstraction -- tossing out the base of morality, the individual (and her life choices), and looking toward the herd with Utopian zeal. Morality, for utilitarians, is a stolen concept. Instead of viewing it correctly as a need, they view it incorrectly as a want -- and, wanting Utopia, they try to prostitute morality in order to achieve it.
Ed
(Edited by Ed Thompson on 12/13, 9:03am)