Joe, in this article your views on morality, in the area where we disagree, are the clearest you've stated yet. I haven't fully fleshed out my position, so I'll apologize in advance for not giving the full-throated opposition your argument deserves. (It never seems fair to tell someone, "I disagree," but then not be specific and complete as to why.)

I'll jump to where you described the difference between a universal and an intrinsic standard. We agree that values are not, by their nature, intrinsic. But as to universal, we disagree. You said, "A universal standard is something independent of the moral agent himself." But that would be an intrinsic value: the claim that value resides as a property in an object is precisely what makes it unbound to any agent. A universal standard is one that is not independent of any moral agent. The individual right to one's life is a moral principle, and one's life is a moral value. They are both universal in that they apply to all humans as humans. They are not agent-independent, because every agent has the same principle and value.

You said, "A morality of self-interest is agent-dependent." I take that to mean that morality, as you are describing it, is subjective. I think subjective is accurate because it isn't only agent-dependent but also circumstance-dependent. What you are saying, if I'm understanding you, is that it might be in the self-interest of a person to do X at time Y, but not necessarily in the interest of others, or of this person at some other time. This makes morality totally subject to the circumstances at a given time. Because you have not abandoned the attempt to work out an objective (and reasoned) approach to morality, it seems that what you have done is sharply reduce the number of concrete instances of moral actions that would still be universal... or hold that none would be universal, but that the criteria for judging what is moral and what is not are severely reduced.

Here is just a beginning and tentative approach to how I think morality should be viewed. It is a relationship-type of conceptual field. A moral value is a description of a generalized category that relates a human, as such, to a potential value, given the nature of humans and reality. The very nature of a relationship is such that we are not talking about an intrinsic thing. Like human nature, it is a category that is intended to be universal, and it arises out of our nature.

At that point of understanding, what we need to do is shift to the concept of 'purpose.' Of what purpose is a field of knowledge such as morality? We can talk about the value of a subset of morality - individual rights - as the Occam's razor for politics - for separating force and choice to maximize the value of the social environment. That becomes a clear and valuable purpose (difficult but valuable). But what of the wider field of morality, beyond individual rights? This is where my argument isn't yet as fleshed out as I hope it will be later. I see us doing the same thing in all of our life that we do with politics, and in this way: We say, "We have a kind of social contract, even though no single individual alive today signed on or volunteered." In politics we are protecting an environment for us to act in. With the broader area of morality in general we are adopting a mental/emotional framework. We 'sign on' to the concept of morality. We "endorse" certain moral positions. AND, that gives us a place to stand, as it were. It gives us a framework.
But it only works if it is universal and objective. The fellow who knowingly takes the loan from loan sharks has this choice - actually he made the choice before he took the loan - "I will sign on to the concept of moral versus immoral, or I won't." If he does, then he takes his chances when he takes the loan. If he doesn't sign on, then he acts on what he can call "self-interest," but in fact he has no standard - he has no morality - he chose to forego that conceptual framework and stands outside the area of values or rights - or he has been inside morality when it seemed to work for him, repudiates it when it doesn't seem to work, and still ends up outside the framework.

I suppose a person could decide that his morality was dog-eat-dog from the very beginning, that there is no morality he accepts that prohibits force or fraud or theft, and he could choose to rip off the loan shark, and then kill him, and call that his pursuit of self-interest. The argument of the person who stands by a universal, objective morality is that one cannot logically propose a system that applies to one person but not to another and that actually requires some to be victims so that others can be predators.

From my perspective, the fellow who takes the loan from the loan shark has volunteered to give up some options when he makes that stupid loan. Otherwise, he is asking to have his cake and eat it too - to take the loan, then not pay it back, and then not suffer the consequences. He can run, he can hide, he can try to turn state's evidence for protection, he can arm himself in hopes of successful self-defense, but he cannot claim that there is such a thing as a set of moral actions, that he was acting morally, and that now, suddenly forced to kill someone, he is still acting morally. Here is the question that a universal, objective moral code would have him ask before taking that loan: "Is it in my self-interest to put myself in a position where people will try to kill me if I'm unable to repay the loan on time?"

The other issue to be faced is this: if there are no universal moral values, then there is no morality. All is but preferences and circumstances, and the very words 'moral' and 'right' and all of their derivatives, like 'justice,' are meaningless. There are clearly different values for different people. What I value will differ from what other people value... but only to a degree. At some level of abstraction we have common values - values that are a common denominator of our being human. Those are the stuff of morality. And finally, I don't believe one can derive what is in one's self-interest in all cases without a code of values, nor can one have a code of values that doesn't start with those values common to all humans.