

Post 0

Sunday, July 7, 2013 - 10:10am
Objectivists accuse Popperians of being skeptics. Popperians accuse Objectivists of being infallibilists. Actually, both philosophies are valuable and largely compatible. I present here some integrating ideas and then a mistake that both philosophies share.

Knowledge is contextual, absolute, certain, conclusive and progressive. The standard of knowledge is conclusiveness not infallibility, perfection or omniscience.

Certain means we should act on it instead of hesitating. We should follow its implications and use it, rather than sitting around doubting, wondering, scared it might be wrong. Certain also means that it is knowledge, as opposed to non-knowledge; it denies skepticism.

Absolute means no contradictions, compromises or exceptions are allowed.

Contextual means that knowledge must be considered in context. A good idea in one context may not be a good idea when transplanted into another context. No knowledge could hold up against arbitrary context switches and context dropping.

Further, knowledge is problem oriented. Knowledge needs some problem(s) or question(s) for context, which it addresses or solves. Knowledge has to be knowledge about something, with some purpose. This implies: if you have an answer to a question, and then in the future you learn more, the old answer still answers the old question. It's still knowledge in its original, intended context.

Consider blood types. People wanted to know which blood transfusions were safe (among other questions) and they created some knowledge of A, B, AB and O blood types. Later they found out more. Actually there is A+, A-, B+, B-, AB+, AB-, O+ and O-. It was proper to act on the earlier knowledge in its context. It would not be proper to act on it today; now we know that some B type blood is incompatible with some other B type blood. Today's superior knowledge of blood types is also contextual. Maybe there will be a new medical breakthrough next year. But it's still knowledge in today's context, and it's proper to act on it.

One thing to learn here is that a false idea can be knowledge. The idea that all B type blood is compatible is contextual knowledge. It was always false, as a matter of fact, and the mistake got some people killed. Yet it was still knowledge. How can that be?

Perfection is not the standard of knowledge. And not all false ideas are equally good. What matters is that the early idea about blood types had value: it contained useful information, it helped make many correct decisions, and no better idea was available at the time. That value never goes away even when we learn about a mistake. That original value is still knowledge, considered contextually, even though the idea as a whole is now known to be false.
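To make the contextual point concrete, here is a minimal sketch (added for illustration, not from the original post) contrasting the earlier ABO-only compatibility rule with the later ABO-plus-Rh rule. The rules are deliberately simplified and are not a medical reference.

```python
# Minimal sketch: the earlier ABO-only compatibility rule vs. the later ABO+Rh rule.
# Simplified for illustration; not a medical reference.
ABO_ANTIGENS = {"O": set(), "A": {"A"}, "B": {"B"}, "AB": {"A", "B"}}

def compatible_abo_only(donor, recipient):
    """The earlier context: only the A, B, AB and O groups were known."""
    return ABO_ANTIGENS[donor] <= ABO_ANTIGENS[recipient]

def compatible_abo_rh(donor, donor_rh, recipient, recipient_rh):
    """Today's context: the Rh factor is part of the check."""
    abo_ok = ABO_ANTIGENS[donor] <= ABO_ANTIGENS[recipient]
    rh_ok = donor_rh == "-" or recipient_rh == "+"
    return abo_ok and rh_ok

# The old rule still answers its original question (ABO mismatches), but a
# B+ donor with a B- recipient is the kind of case the later context exposed.
print(compatible_abo_only("B", "B"))            # True -- knowledge in its context
print(compatible_abo_rh("B", "+", "B", "-"))    # False -- known today to be unsafe
```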

Conclusive means the current context only allows for one rational conclusion. This conclusion is not infallible, but it's the only reasonable option available. All the alternative ideas have known flaws; they are refuted. There's only one idea left which is not refuted, which could be true, is true as far as we know (no known flaws), and which we should therefore accept. And that is knowledge.

None of this contradicts the progressive character of knowledge. Our knowledge is not frozen and final. We can learn more and better – without limit. We can keep identifying and correcting errors in our ideas and thereby achieve better and better knowledge. (One way knowledge can be better is that it is correct in more contexts and successfully addresses more problems and questions.)

The Mistake

Peikoff says that certainty (meaning conclusive knowledge) is when you get to the point that nothing else is possible. He means that, in the current context, there are no other options. There's just one option, and we should accept it. All the other ideas have something wrong with them, they can't be accepted. This is fine.

Peikoff also says that before you have certainty you have a different situation where there are multiple competing ideas. Fine. And that's not certainty, that's not conclusive knowledge, it's a precursor stage where you're considering the ideas. Fine.

But then Peikoff makes what I think is an important mistake. He says that if you don't have knowledge or certainty, you can still judge by the weight of the evidence. This is a standard view held by many non-Objectivists too. I think this is too compromising. I think the choices are knowledge or irrationality. We need knowledge; nothing less will suffice.

The weight of the evidence is no good. Either you have knowledge or you don't. If it's not knowledge, it's not worth anything. You need to come up with a good idea – no compromises, no contradictions, no known problems – and use that. If you can't or won't do that, all you have left is the irrationality of acting on and believing arbitrary non-knowledge.

I think we can always act on knowledge without contradictions. Knowledge is always possible to man. Not all knowledge instantly, but enough knowledge to act, in time to act. We may not know everything – but we don't need to. We can always know enough to continue life rationally. Living and acting by reason and knowledge is always possible.

(How can we always do this? That will be the subject of another essay. I'm not including any summary or hints because I think it's too confusing and misleading without a full explanation.)

Knowledge doesn't allow contradictions. Suppose you're considering two ideas that contradict each other. And you don't have a conclusive answer, you don't have knowledge of which is right. Then using or believing either one is irrational. No "weight of the evidence" or anything else can change this.

Don't pick a side when you know there is a contradiction but have not rationally resolved it. Resolve it; create knowledge; learn; think; figure it out. Neither idea being considered is good enough to address the contradiction or refute the other idea – so you know they are both flawed. Don't hope or pray that acting on a known-to-be-flawed idea will work out anyway. Irrationality doesn't work.

That's not good enough. If you discover a contradiction, you should resolve it rationally. If you fail at that – fail at the use of reason – then that's bad, that's a disaster, that's not OK.

Karl Popper made the same mistake in a different form. He said that we critically analyze competing ideas and the one that best survives criticism should be acted on. Again this is too compromising. Either exactly one idea survives criticism, or else there is still a contradiction. "Best survives criticism", and "weight of the evidence", are irrational ways of arbitrarily elevating one flawed idea over another, instead of using reason to come up with a correct idea.

Post 1

Sunday, July 7, 2013 - 11:20am
Elliot,

Thank you for such a well written post.

Here are the comments that come to mind on the first half of your post:
Knowledge is contextual, absolute, certain, conclusive and progressive. ... Certain means we should act on it instead of hesitating. We should follow its implications and use it, rather than sitting around doubting, wondering, scared it might be wrong. Certain also means that it is knowledge, as opposed to non-knowledge; it denies skepticism.
Here you acknowledge that there is psychological certainty that is different from, but related to, epistemological certainty. It still leaves the question of how one derives epistemological certainty and how it differs from psychological certainty, which is anything but "absolute". Our psychological certainty can be completely missing when it should be strong, or it can be strong when it should be zero. And it isn't binary. I'm not sure how Rand or Peikoff deal with epistemological certainty.
---------------
Absolute means no contradictions, compromises or exceptions are allowed.
This is a bit of an academic standard for knowledge. We can define 'knowledge' in this fashion, and I have no objections to that. But in practice we have to act, day in and day out, on 'knowledge' that is incomplete, unchecked, flawed to a minor degree, etc. This relates to what Fred mentioned in regard to risk. We have to establish our degree of certainty for the various bits of knowledge that we chain together to make a series of actions, compare the 'weakest link's' certainty level to the measure of the risk... and so forth. I suspect that getting an agreement between an 'academic' standard and our real-world practice may be contained in the proper understanding of context. Measurement of a margin for error is in fact the setting of a limiting context.
----------------
Contextual means that knowledge must be considered in context. A good idea in one context may not be a good idea when transplanted into another context. No knowledge could hold up against arbitrary context switches and context dropping.
Couldn't agree more.
----------------
...knowledge is problem oriented.
This is true in terms of the purpose of knowledge and our relationship to knowledge, for the most part. But sometimes knowledge is acquired by accident - knowledge that doesn't address a current problem or question. Someone is researching X and stumbles across a relationship between entities that is unrelated to any problem or question he, or anyone in his field, is currently addressing. This happens. That discovered relationship is knowledge. One could define knowledge such that it must be problem oriented, but I'd say that was an artificial restriction, because it would make 'knowledge,' as so defined, a subcategory of some larger class of mental understandings that are true to the facts of reality.

Further, there is a statement you made that seems to restrict or redefine "context" - "It's still knowledge in its original, intended context." Context is to be defined, or discovered, and it will be true or false insofar as it is consistent with the 'knowledge'. If I accidentally discover a relationship between the size and the color of river rocks, there is a lot of context here, whether it is mentioned, or even known by me at the time. And that context is, in this case, independent of any question or problem. The context to be discovered will arise out of the causes - the identity - of the entities.
----------------
One thing to learn here is that a false idea can be knowledge. The idea that all B type blood is compatible is contextual knowledge. It was always false, as a matter of fact, and the mistake got some people killed. Yet it was still knowledge.
I disagree. To say that ALL B Type blood is compatible is NOT knowledge. It is not just false, but it contains a failure that relates to context. The context in which that statement is true, and hence knowledge, is that all Type B blood is compatible as long as it also matches the + and - factors. That was a context waiting to be discovered. Until that happened it was not ABSOLUTE ('no exceptions allowed'). What remained knowledge is that blood has compatibility issues and blood type is one of the compatibility factors. The full context was needed to increase the usefulness of this knowledge.
----------------
Conclusive means the current context only allows for one rational conclusion. This conclusion is not infallible, but it's the only reasonable option available. All the alternative ideas have known flaws; they are refuted. There's only one idea left which is not refuted, which could be true, is true as far as we know (no known flaws), and which we should therefore accept. And that is knowledge.
This might be useful as some part of a methodology, but I don't think it is the best approach. Often we are left with conflicting conclusions and we need to act before we can successfully refute all but one. So we work out certainty/risk/reward alternatives and sometimes our chosen action (and that is our purpose for this knowledge in this case - to act) might even be some blend of the alternatives (e.g., "I'm going to assume X, and write code that filters for that, but I'll allocate resources to write error-traps for Y and Z."). In this case our current 'knowledge' is contained as statements of reality being either X, Y or Z given stated contexts which might include probability estimates. Now, if you say, "Yes, you have refuted everything else as being less than that jumbled up probability statement, so it is 'knowledge'," then we agree.
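Here is a hypothetical sketch of the "assume X, but error-trap Y and Z" pattern described above; the format names and helper functions are invented for illustration and are not taken from any real system.

```python
import logging

def detect_format(record: dict) -> str:
    # Stub for illustration: pretend the record carries its own format tag.
    return record.get("format", "unknown")

def process_x(record: dict) -> dict:
    # The fully-supported path for the assumed case X.
    return {"handled": record}

def handle_record(record: dict):
    fmt = detect_format(record)
    if fmt == "X":                 # the assumed, best-supported case: act on it
        return process_x(record)
    if fmt in ("Y", "Z"):          # known alternatives: trap and log rather than act blindly
        logging.warning("error-trap hit for format %s", fmt)
        return None
    raise ValueError(f"unrecognized format: {fmt!r}")  # surface anything unexpected

print(handle_record({"format": "X"}))   # handled normally
print(handle_record({"format": "Y"}))   # trapped, returns None
```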
----------------
None of this contradicts the progressive character of knowledge. Our knowledge is not frozen and final. We can learn more and better – without limit. We can keep identifying and correcting errors in our ideas and thereby achieve better and better knowledge. (One way knowledge can be better is that it is correct in more contexts and successfully addresses more problems and questions.)
Some pieces of knowledge acquire such high levels of certainty that the chance they will be discovered to be wrong in the future is insignificant. We are being rational to 'freeze' some pieces of knowledge for three reasons: 1) that high level of epistemological certainty; 2) as long as we operate as rational beings, we have the option to thaw it out and re-examine it; and 3) if we are right to have frozen it, the result will be a marginally greater effectiveness in progressing on our knowledge structure and in acting from it.


Post 2

Sunday, July 7, 2013 - 12:33pm
Elliot,
Peikoff makes what I think is an important mistake. He says that if you don't have knowledge or certainty, you can still judge by the weight of the evidence. This is a standard view held by many non-Objectivists too. I think this is too compromising. I think the choices are knowledge or irrationality. We need knowledge; nothing less will suffice.
But you have said that to be knowledge an idea must be absolute, certain, and conclusive. And we can't look into the future, so we will never be able to tell if the idea won't require correction. And you don't believe we can determine the truth of an idea without omniscience, which we don't have. So really, you are saying that we always act out of irrationality. I know that you don't intend to say that, but it is a conflict that exists in your statements.

It is a misuse of the term 'irrationality' to say that someone who uses reason as the means of making a choice, when faced with alternatives for which he has less than conclusive or final knowledge, is being irrational.

I say that "weight of the evidence" is the setting of part of the overall context, and if it is right, then it is valid knowledge. I suspect that unless you agree with that, you'd have to also say that there is zero knowledge content in anything in statistics... if you want to remain consistent.
-----------------
I think we can always act on knowledge without contradictions. Knowledge is always possible to man. Not all knowledge instantly, but enough knowledge to act, in time to act. We may not know everything – but we don't need to. We can always know enough to continue life rationally. Living and acting by reason and knowledge is always possible.
I agree with that statement. But it is in conflict with what you said earlier: "I think the choices are knowledge or irrationality. We need knowledge; nothing less will suffice" and given your definition of knowledge.
------------------
"Best survives criticism", and "weight of the evidence", are irrational ways of arbitrarily elevating one flawed idea over another, instead of using reason to come up with a correct idea.
You've created false alternatives. I can use weighted evidence and best-survives-criticism as methods for creating some kind of probability matrix to govern actions, to judge outcomes, to be able to act when needed, till I can replace one or both of the conflicting ideas with one that is conclusive.

Here is the thing. There isn't a day in anyone's life where we don't do this automatically. Millions of choices relate to ideas that are so tiny in the scheme of things that it isn't worth the time to parse them out for a final conclusive winner. Should you go out the front door to get to your side yard, or through the back door? Which is better for you at this time? Your context might contain today's weather, distance, time, what you might experience with the different paths, the purpose of your trip, what effect this decision might have on the choice of the return trip, what you are wearing, who is home, etc., etc., etc. Two competing contenders for the title of knowledge: I know it is better for me, at this time, to use the back door. I know it is better for me, at this time, to use the front door. Guess what - you probably do an automated, weighted-evidence comparison across a myriad of values and preferences to come up with a winner - and it takes less than a second. The process might be rational or irrational. The conclusion might prove right or wrong, or not even come up for evaluation.

We have to 'weight' our use of limited resources, particularly time, to give no more nor less than adequate rational attention to the risk, reward, and certainty factors relative to the context needing an action.

Post 3

Sunday, July 7, 2013 - 12:35pm
Good stuff, Elliot.
Certain means we should act on it instead of hesitating. We should follow its implications and use it, rather than sitting around doubting,
Interesting point. But keep in mind what Steve said, you can be certain in 2 different ways: psychological certainty (the kind we share with animals) and epistemological certainty (the kind unique to man).

Knowledge has to be knowledge about something, with some purpose. This implies: if you have an answer to a question, and then in the future you learn more, the old answer still answers the old question. It's still knowledge in its original, intended context.

Consider blood types. People wanted to know which blood transfusions were safe (among other questions) and they created some knowledge of A, B, AB and O blood types. Later they found out more. Actually there is A+, A-, B+, B-, AB+, AB-, O+ and O-. It was proper to act on the earlier knowledge in its context. It would not be proper to act on it today ...

You know, retroactively, you can envision a perfect way for hematology researchers to have approached the blood type issue. The perfect way to have approached it would have been to say that 4 broad groups of blood types exist that are incompatible with each other in the inter-group sense, but may turn out to be fully or partially compatible with themselves in the intra-group sense.

If researchers had said that, then they would never be wrong -- the future empirical data (of the + and the -) would have never contradicted them. The upshot is that maybe we can learn to make predictions like that -- predictions that are 100% guaranteed to turn out to be correct in the future.

[Twilight Zone theme music]

:-)

One thing to learn here is that a false idea can be knowledge. The idea that all B type blood is compatible is contextual knowledge. It was always false, as a matter of fact, and the mistake got some people killed. Yet it was still knowledge. How can that be?

Perfection is not the standard of knowledge.


But like Steve said, the original blood type conjecture, however bold, contained a failure related to context. Instead of saying that all B type blood is compatible, researchers should have said that B type blood is more compatible with B type blood than it is with A type blood. By explicitly making the conjecture relational -- in order to reflect or acknowledge the fact that human knowledge is relational -- you immunize your conjecture from being overturned by future knowledge.

The downside is that a conjecture that is more bold than that -- one with firm, unwavering boundary conditions (such as "A goes with A, and B goes with B, and that's it, and that's all!") -- leads more easily to the origination of human action. Having firm, unwavering boundary conditions is a good thing, but having them at the expense of truth is disastrous.

And not all false ideas are equally good.
Aaaaaaaaaaaaaaaaaaaaaaaagh [mimicking a running motion]! Run away! Run away! I'm not going to touch this one with a 10-foot pole.

:-)

All the alternative ideas have known flaws; they are refuted. There's only one idea left which is not refuted, which could be true, is true as far as we know (no known flaws), and which we should therefore accept. And that is knowledge.
I disagree. For example, if you start with just Robinson Crusoe and Friday, and Crusoe convinces Friday that Protestant Christianity is true, you don't necessarily have knowledge there -- even if everyone alive on the island accepts it. Acceptance (i.e., being ignorant of flaws) isn't indicative of knowledge. Knowledge is something more than the mere personal or social ignorance of flaw.

He [Peikoff] says that if you don't have knowledge or certainty, you can still judge by the weight of the evidence. This is a standard view held by many non-Objectivists too. I think this is too compromising. I think the choices are knowledge or irrationality. We need knowledge; nothing less will suffice.

The weight of the evidence is no good. Either you have knowledge or you don't. If it's not knowledge, it's not worth anything.

This harkens back to a reply I gave to you earlier -- that we define knowledge differently. Let's call it a strict view of knowledge and a non-strict view. Strict knowledge is stuff that cannot turn out to be false: stuff like the absence of round squares, no part being greater than the whole, no socialism or communism ever outperforming capitalism, etc. -- stuff you can prove not just beyond reasonable doubt, but beyond the possibility of doubt (for a rational thinker). Non-strict knowledge, however, can be what is merely contextually useful.

On the non-strict view, Peikoff contradicted himself by saying that you can simultaneously not have knowledge but still weigh the evidence -- but on my view of the matter (the strict view), he did not contradict himself in the least.

Suppose you're considering two ideas that contradict each other. And you don't have a conclusive answer, you don't have knowledge of which is right. Then using or believing either one is irrational. No "weight of the evidence" or anything else can change this.

Don't pick a side when you know there is a contradiction but have not rationally resolved it. Resolve it; create knowledge; learn; think; figure it out. Neither idea being considered is good enough to address the contradiction or refute the other idea – so you know they are both flawed. Don't hope or pray that acting on a known-to-be-flawed idea will work out anyway. Irrationality doesn't work.

That's not good enough.
I love that analogy. Thank you for that. Wow! Good stuff.**


Ed

**Reminds me of Robert Audi lambasting coherence theory epistemologists with the notion that opposite, contradictory, mutually-exclusive conceptual chains can both equally account for known facts -- even though such opposites can never both be true at the same time and in the same respect. We can be sure that one of them is wrong (because contradictions don't ever exist) but yet coherence theory epistemologists cannot ever tell you which one it is.

(Edited by Ed Thompson on 7/07, 12:41pm)


Post 4

Sunday, July 7, 2013 - 1:04pm
Ed,
... psychological certainty (the kind we share with animals) and epistemological certainty (the kind unique to man).
I'd say that our human psychological certainty is also unique. Other animals might have something similar - some mental/behavioral dissonance arising out of a confusing context - but it couldn't be the same as ours, given the way our psychology unfolds, and develops, as a product of human choice, reasoning, conceptual-level values, etc.
---------------
If researchers had said that [4 broad groups of blood types exist that are..., then they would never be wrong -- the future empirical data (of the + and the -) would have never contradicted them.
Context is king and you are right that proper wording is important. For example, they would have been even safer to say, "The 4 broad groups THAT WE KNOW OF AT THIS TIME..." :-)
---------------
Let's call it a strict view of knowledge and a non-strict view.
Or, it might be better to say that the so-called strict view is actually a subcategory of the larger collection known as "knowledge." But that doesn't fit the definition of Knowledge that Elliot is holding tight to.

Post 5

Sunday, July 7, 2013 - 2:36pm
Steve,

I'd say that our human psychological certainty is also unique.
Perhaps I should have said, in the interest of stating something more impervious to future criticism, that our psychological certainty is that specific kind of a certainty which is more like what animals have (than is our other kind of certainty: epistemological certainty). By basing my statement on existing relations -- psychological compared to epistemological (against the background of animal) -- I do more justice to how it is that human knowledge is laid out.

:-)

Or, it might be better to say that the so-called strict view is actually a subcategory of the larger collection known as "knowledge." But that doesn't fit the definition of Knowledge that Elliot is holding tight to.
I'm not so sure that the strict view -- where knowledge is viewed as being necessarily true, rather than being something which is contingent -- is peculiar to me. In ITOE, Peikoff argues for something very similar:
Truths about metaphysical and about man-made facts are learned and validated by the same process: by observation; and, qua truths, both are equally necessary. Some facts are not necessary, but all truths are.

Truth is the identification of a fact of reality. Whether the fact in question is metaphysical or man-made, the fact determines the truth: if the fact exists, there is no alternative in regard to what is true. For instance, the fact that the U.S. has 50 states was not metaphysically necessary—but as long as this is men’s choice, the proposition that “The U.S. has 50 states” is necessarily true. A true proposition must describe the facts as they are. In this sense, a “necessary truth” is a redundancy, and a “contingent truth” a self-contradiction.
http://aynrandlexicon.com/lexicon/truth.html

Ed


Post 6

Monday, July 8, 2013 - 9:00am
I'm not sure how Rand or Peikoff deal with epistemological certainty.


Peikoff says that "certain knowledge" and "knowledge" are the same thing. And that it means knowledge must be conclusive. You have to reach certainty aka conclusiveness to achieve knowledge. (The first half of the essay is basically a summary of Peikoff written so that it'd also be acceptable to Popperians.)

This is a bit of an academic standard for knowledge. We can define 'knowledge' in this fashion, and I have no objections to that. But in practice we have to act, day in and day out, on 'knowledge' that is incomplete, unchecked, flawed to a minor degree, etc.


All the "in practice" stuff is dealt with by context. The standard of knowledge is not an impossible perfection, and the context of our knowledge is human life, in practice, day in and day out, etc...

But accepting contradictions or arbitrary exceptions is not practical. Ever.

If you understand two ideas contradict, don't act on both pragmatically, claiming perfection is not possible to man so it's OK. It's not OK. If you know of a contradiction you better figure out something better.

If two ideas contradict, don't just act on one either. Why that one, instead of the other? You must rationally address the contradiction. You must only act on ideas you don't already know are wrong. That's what rationality requires; rationality is absolute and uncompromising; there is no other way.

Someone is researching X and stumbles across a relationship between entities that is unrelated to any problem or question he, or anyone in his field, is currently addressing.


Why did he notice it at all? Why did it stand out to him? Why was it worth remembering, recording, etc? Because it does have some relevance to some problem of some interest to him.

Further, there is a statement you made that seems to restrict or redefine "context" - "It's still knowledge in its original, intended context."


This is Peikoff's idea, I wasn't really expecting objections... But I think it's true. The original value (knowledge) is still there, even later when we learn better. Or put my way: if idea A answers issue B, it still answers it even when we learn that C is a better issue and that A is inadequate to answer C.

I disagree.


You're disagreeing with Peikoff. I don't really care to argue the point currently. Except also you say, "What remained knowledge is that blood has compatibility issues and blood type is one of the compatibility factors." which seems more like you agree than disagree.

This might be useful as some part of a methodology, but I don't think it is the best approach.


So, let's step back and look at the bigger picture here. I came here saying that Popper and Objectivism are more compatible than people realize. And I'm told no no, that's wrong, etc (Well, OK, most of the shouting was at other Objectivist forums, but I still think disagreement was the general reaction here).

Then what happens? Well I write some stuff straight out of Peikoff which is Popper compatible. And what objection do I get? Peikoff is wrong.

One of the things I notice here is that if Peikoff is wrong that doesn't make Popper and Peikoff incompatible. Maybe they were both wrong together. And I agree with them, but you don't. That'd be kind of funny, right? Like, who is the outsider, now?

I wasn't really looking to defend Objectivism as part of the Popper conversation. Maybe I'll have to. Maybe, hopefully, someone else here could help me out and defend Peikoff for me? :) I'd like to focus on some other things currently.

Often we are left with conflicting conclusions and we need to act before we can successfully refute all but one.


Suppose for a moment there was a method by which we could always act on a single conclusion with no conflict. Would that be good? Would that be awesome? Would that be an improvement on Objectivist epistemology?

Now, if you say, "Yes, you have refuted everything else as being less than that jumbled up probability statement, so it is 'knowledge'," then we agree.


I would have said "wrong" instead of "less than". But yes, if you see something wrong with each original statement, but nothing wrong with your new statement (whether it is a "blend" or not), then you have one non-refuted statement and no conflict.

2.) As long as we operate as rational beings, we have the option to thaw it out and re-examine it


If you can unfreeze, what meaning does "freeze" even have? (Certainly not ITOE's meaning where "frozen" is a bad and permanent thing. Which is why I chose the word at all, I thought if I used it the same way that Rand did that would be easier to understand than if I used some other word.)

But you have said that to be knowledge an idea must be absolute, certain, and conclusive. And we can't look into the future so we will never be able to tell if the idea won't require correction. And you don't believe we can determine truth of an idea without omniscience, which we don't have. So really, you are saying that we always act out of irrationality. I know that you don't intend to say that, but it is a conflict that exists in your statements.


No. Perfection or infallibility is not the standard of knowledge or rationality. Lacking those doesn't mean we're acting irrationally. What I'm hearing is you think that "absolute", "certain" and "conclusive" mean infallible. But that isn't how Peikoff/Objectivism means them. (This is one of the reasons I don't think they are very good terminology btw. Too open to misinterpretation as infallibilist.)

But, OK, if you think certainty/etc contradicts fallible rational knowledge, then what is your position? Do you reject certainty, or reject knowledge, or reject fallibility?

It is a misuse of the term irrationality to say that someone who reasons as the means of making a choice when faced with alternatives for which he has less than conclusive or final knowledge is being irrational.


I'm saying the correct method of reasoning will reach an idea with no contradictions, no conflicts, no known flaws. Anything less means you have multiple contradicting ideas, and you don't address the contradiction but act anyway in spite of it -- it means acting on an idea that you know is flawed, despite knowing it's flawed. It means going against your own mind (since you judge an idea is wrong, but act on it anyway).

I say that "weight of the evidence" is the setting of part of the overall context, and if it is right, then it is valid knowledge. I suspect that unless you agree with that, you'd have to also say that there is zero knowledge content in anything in statistics... if you want to remain consistent.


There's nothing wrong with statistics as such. But they aren't epistemology. They work fine in domains where they apply. And epistemology can deal with statistical theories just the same as with any other theories.

I agree with that statement. But it is in conflict with what you said earlier


What is the conflict?

You've created false alternatives. I can use weighted evidence and best survives criticism as methods for creating some kind of probability matrix to govern actions, to judge outcomes, to be able to act when needed, till I can replace one or the both of the conflicting ideas with one that is conclusive.


Sounds like you want to be a Bayesian or something. I'm well aware people think that kind of stuff works. I think it doesn't and have arguments. But first: Where, may I ask, is Objectivism's defense of this approach, and full explanation of how it can and does work?

Millions of choices relate to ideas that are so tiny in the scheme of things that it isn't worth the time to parse them out for a final conclusive winner.


Why are you willing to accept automatic lightning-fast irrationality, but not automatic lightning-fast rational conclusive thinking? If there's going to be an automatic lightning-fast unconscious thought process -- ok no problem -- why think it's the wrong one? (Don't its many successes indicate it's the right, rational one?)

I think the problem here, actually, is that you don't know how rational, conclusive thinking works. That's fine in that I didn't explain it yet. But you're assuming things about it, e.g. that it's unable to deal with limits on time and attention that some topics should get. But it has to deal with that -- and does. When you assume it doesn't you're really just assuming it's wrong and doesn't work -- rather than asking how it does work.

We have to 'weight' our use of limited resources, particularly time, to give no more nor less than adequate rational attention to the risk, reward, and certainty factors relative to the context needing an action.


If you do something along those lines in a particular case, and it is the right best thing to do, and you know it, then what contradicts it? What conflict remains? What isn't conclusive?

Post 7

Monday, July 8, 2013 - 9:33am
Ed,

I'm not so sure that the strict view -- where knowledge is viewed as being necessarily true, rather than being something which is contingent -- is peculiar to me.


The standard view in epistemology, agreed on by most everyone, is that knowledge is justified, true belief. Those are the three standard requirements for knowledge. "True" is one of them. (And this is infallibilist.)

You are not alone – but you are contradicting Objectivism, which says that perfection/omniscience/infallibilism is not the standard of knowledge (and thereby boldly rejects most prior thinking about epistemology).

I know you were focussed on necessary vs contingent truth. But I think the underlying claim that knowledge has to be true is the bigger deal.

psychological certainty (the kind we share with animals)


Ugh. Men are not like animals. We're different and better. Animals do not do rational thinking. Men do. If psychological certainty is anything like animals, I want no part of it. I consider that an especially damning thing to say about it.

I doubt you could find any quotes where Rand agrees with you about this. Please correct me if I'm wrong. I'd be interested to see them.

The upshot is that maybe we can learn to make predictions like that -- predictions that are 100% guaranteed to turn out to be correct in the future.


So you aren't really a fallibilist. You want 100% guarantees – omniscience. Whether it's possible or not, you want it and think it'd be good if you could get it. You have the spirit of an infallibilist. You may have to accept you can't have it, and be a disappointed infallibilist, but infallibilism is your hope.

Maybe you'll deny infallibilism. But I don't think I'll believe you. "100% guaranteed" is not ambiguous.

One thing I'd highlight here is that you disagree with Popper more than Objectivism does.

you immunize your conjecture from being overturned by future knowledge


More infallibilism. Trying to immunize against all possible future knowledge, criticism and change, creating a permanently frozen conjecture, forever.

Strict knowledge is stuff that cannot turn out to be false: stuff like the absence of round squares, no part being greater than the whole, no socialism or communism ever outperforming capitalism, etc. -- stuff you can prove not just beyond reasonable doubt, but beyond the possibility of doubt (for a rational thinker).


You are advocating infallibilism. But Objectivism and Popper reject infallibilism. Objectivism does not require stuff that "cannot turn out to be false" for knowledge – rather it says knowledge is progressive, we can learn more, knowledge is contextual, perfection isn't the standard of knowledge, man is fallible, etc, etc

Blood Types

I wasn't looking to defend Peikoff and his example. I was trying to show how much Popper and Peikoff agree. When you criticize Peikoff's material it's a change of topic. Maybe necessary but I kind of hope not. Here is what Peikoff says in OPAR after giving the blood type example:

The principle here is evident: since a later discovery rests hierarchically on earlier knowledge, it cannot contradict its own base. The qualified formulation in no way clashes with the initial proposition, viz.: “Within the context of the circumstances so far known, A bloods are compatible” This proposition represented real knowledge when it was first reached, and it still does so; in fact, like all properly formulated truths, this truth is immutable. Within the context initially specified, A bloods are and always will be compatible.

The appearance of a contradiction between new knowledge and old derives from a single source: context-dropping.


(And it goes on, read the whole thing.)

Aaaaaaaaaaaaaaaaaaaaaaaagh [mimicking a running motion]! Run away! Run away! I'm not going to touch this one with a 10-foot pole.


Why?

I love that analogy.


What analogy?

**Reminds me of Robert Audi lambasting coherence theory epistemologists with the notion that opposite, contradictory, mutually-exclusive conceptual chains can both equally account for known facts -- even though such opposites can never both be true at the same time and in the same respect. We can be sure that one of them is wrong (because contradictions don't ever exist) but yet coherence theory epistemologists cannot ever tell you which one it is.


Two contradicting ideas can both equally account for known facts. That's easy. They could both (retrospectively) predict the same set of known facts. And that doesn't tell you which is right.

But I can tell you which one is right: they are both wrong (considered as a whole).

Why? Because accounting for (empirical, observational) facts is not the entire task of ideas. A better idea (really group of ideas) would be able to deal with its rival(s). Neither idea (group of ideas) is good enough since something contradicts it and it's unable to refute the contradicting idea(s). We should look for ideas that are good enough, powerful enough, thorough enough that they can address stuff that tries to contradict them.

If some ideas don't tell us why a contradiction of them is wrong (or provide enough hints for us to work it out), then it's not good enough. Every contradiction must be dealt with.

Post 8

Monday, July 8, 2013 - 9:36am
PS The title got cut off or something. It's meant to be "Epistemology Without Weights and the Mistake Objectivism and Critical Rationalism Both Made"

Could any admin fix it? If it has to be shorter, "Epistemology Without Weights and an Objectivist Mistake" would be ok.

Post 9

Monday, July 8, 2013 - 10:12am
Elliot,

"Knowledge” is . . . a mental grasp of a fact(s) of reality, reached either by perceptual observation or by a process of reason based on perceptual observation. Ayn Rand

“Certain” represents an assessment of the evidence for a conclusion; it is usually contrasted with two other broad types of assessment: “possible” and “probable.” . . . Peikoff

Idea X is “certain” if, in a given context of knowledge, the evidence for X is conclusive. In such a context, all the evidence supports X and there is no evidence to support any alternative . . . . Peikoff

You cannot challenge a claim to certainty by means of an arbitrary declaration of a counter-possibility, . . . you cannot manufacture possibilities without evidence . . . . Peikoff

All the main attacks on certainty depend on evading its contextual character . . . . Peikoff
-------------

What I see from these quotes regarding certainty and knowledge is somewhat different than what you are saying. According to Rand, knowledge is when a mental entity correctly mirrors reality - when the belief matches the facts of reality. But there is no magical genie to come out and ring a bell to indicate that our cogitations have taken us to just that position. We don't know if our beliefs are knowledge in an omniscient fashion. We form our certainty based upon our assessment of the evidence - individually.

In our minds we have certainty when we have conclusiveness (as you and Peikoff have said). But conclusiveness has, as its context, our individual understandings, inside our own mind. In other words, for Peikoff, epistemological certainty and psychological certainty are one and the same. I am certain that 2 + 2 = 4. My certainty arises out of there being no claims to the contrary that merit any consideration. I'm not a mathematician, so if there were some esoteric claims to the contrary, I wouldn't be aware of them; they might exist, yet not affect my conclusiveness. In this way, ignorance can be part of the context for certainty. (Psychological certainty would have still other aspects, those related to its emotional intensity, etc.)

Where Peikoff says that certain knowledge and knowledge are the same, I'd totally disagree. Certainty is, by his descriptions, a result of a mental process where alternatives, if any, are examined and discarded till there is but one conclusion left in a context. Knowledge is a mental grasp of a fact(s) of reality. A person can hold a contradiction in their mind that they don't yet have a resolution to. One side of that conflict could be the fact of reality (knowledge) and the other side would be a false claim. To reach certainty requires choosing one side. If they choose the wrong side, they have certainty, but no knowledge. They had knowledge but no certainty before they made the bad choice.
---------------
I said: "Someone is researching X and stumbles across a relationship between entities that is unrelated to any problem or question he, or anyone in his field, is currently addressing."
You replied: "Why did he notice it at all? Why did it stand out to him? Why was it worth remembering, recording, etc? Because it does have some relevance to some problem of some interest to him."
No. He might have noticed it because he is a programmer and his mind has learned to spot patterns. And like a dog that fetches sticks even when you don't want it to, the mind is pointing at a pattern that isn't related to anything worth remembering (we don't always 'remember' in proportion to the worth of the material remembered), or relevant to his interests. We take in nearly everything within the range of our sensory limits, and only some of it is rejected at the sensory level, some is rejected at the conceptual level after a rapid examination for interest, but other things are kept longer... we are all mental pack rats to some degree. None of us is perfect at knowing what to keep and what to reject at the first instance that we conceive of it.
--------------
Regarding what you have presented: "I still think disagreement was the general reaction here"
I don't think so. The nature of these forums is to 'chew' on ideas and in practice that means finding differences of opinion and presenting them. Look at all of the posts here, and in other Objectivist forums, where one Objectivist disagrees with another.
--------------
Suppose for a moment there was a method by which we could always act on a single conclusion with no conflict. Would that be good? Would that be awesome? Would that be an improvement on Objectivist epistemology?
Suppose there was a million dollars under my couch that I hadn't been aware of before. Wouldn't THAT be awesome? Show me the method and we will see.
--------------
If you can unfreeze, what meaning does "freeze" even have?
Well, if someone makes an argument regarding something you are currently holding as conclusive, then you would unfreeze so that you can make an examination. After you have made any corrections, if needed, then you can freeze it for the time being. The purpose of the metaphorical 'freeze' 'unfreeze' is the mechanism of getting in and out of the state of certainty. The ability to act on knowledge requires a degree of certainty. If you are unfrozen, you don't (by your definition of knowledge requiring conclusiveness) have knowledge and therefore couldn't have certainty.

You, and perhaps Rand, in the context of her writing, are using frozen as meaning forever - not just as an accident of time, or a measure of time, but as the inability or choice to never consider any alternatives. I've never argued for that.
--------------
I'm saying the correct method of reasoning will reach an idea with no contradictions, no conflicts, no known flaws.
Yes, true. But it is neither easy, nor automatic, nor guaranteed to be completed in a reasonable time. In the meantime, even without methodological errors, one lives without complete knowledge and struggles to find and resolve conflicts.
--------------

Out of time... maybe more on this later.

Post 10

Monday, July 8, 2013 - 9:13pm
Elliot,
I'm not so sure that the strict view -- where knowledge is viewed as being necessarily true, rather than being something which is contingent -- is peculiar to me.


The standard view in epistemology, agreed on by most everyone, is that knowledge is justified, true belief. Those are the three standard requirements for knowledge. "True" is one of them. (And this is infallibilist.)

You are not alone – but you are contradicting Objectivism, which says that perfection/omniscience/infallibilism is not the standard of knowledge
Nice bait-n-switch, Elliot! You're pretty good at rhetoric! However, I'm no push-over, myself. Here is your thinking, distilled:

1) Ed says knowledge is true.
2) But that's infallibilist.
3) Objectivism says infallibilism is not the standard of knowledge.
---------------------------------
Therefore, Ed is contradicting Objectivism.

Premise (1) is true, but premise (2) is not, which makes premise (3) irrelevant. When "Objectivism" said that infallibilism is not the standard, Objectivism was talking about infallible "knowers", and not about infallible (i.e., true) "knowledge." Here's what Objectivism has said on the matter:

“Knowledge” is . . . a mental grasp of a fact(s) of reality ...
http://aynrandlexicon.com/lexicon/knowledge.html

Truth is the identification of a fact of reality.
http://aynrandlexicon.com/lexicon/truth.html

Truth is the recognition of reality. (This is known as the correspondence theory of truth.)
http://aynrandlexicon.com/lexicon/truth.html

According to Objectivism, there isn't a gap between knowledge and truth.

On animal certainty
If psychological certainty is anything like animals, I want no part of it. I consider that an especially damning thing to say about it.

I doubt you could find any quotes where Rand agrees with you about this. Please correct me if I'm wrong. I'd be interested to see them.
Good point: psychological certainty -- an absence of felt doubt -- can get you in a lot of trouble. Animals don't doubt. I will try to find a quote where Rand agrees with me about this, but I think it was Peikoff who said it in relation to perception. He said man has the capacity to doubt his own eyes, but animals don't -- i.e., for an animal, seeing is believing. Though it is merely verification (a dirty word, according to Popper), my personal experience with pets, including controlled trials, confirms this conjecture.


The upshot is that maybe we can learn to make predictions like that -- predictions that are 100% guaranteed to turn out to be correct in the future.
So you aren't really a fallibilist. You want 100% guarantees – omniscience. Whether it's possible or not, you want it and think it'd be good if you could get it. You have the spirit of an infallibilist. You may have to accept you can't have it, and be a disappointed infallibilist, but infallibilism is your hope.

I would like to think that I could prove you wrong about that. In fact, I'd like to prove you unequivocally and 100% wrong about it.

:-)

Maybe you'll deny infallibilism. But I don't think I'll believe you. "100% guaranteed" is not ambiguous.
What are the odds that you won't believe me? 50%, 75%, 100%? Could you ever be 100% sure whether you will disbelieve me or not (100% sure of the future)?

:-)

you immunize your conjecture from being overturned by future knowledge
More infallibilism. Trying to immunize against all possible future knowledge, criticism and change, creating a permanently frozen conjecture, forever.



Au contraire. The way to make something more certain -- the way to make it immutable -- is to make it either more vague or more properly formulated (qualified). But the worship of infallibilism would not include refining or even watering things down like that. Even Popper agreed that anyone can make robust conjectures, which is why he said they should make bold conjectures (which, by their nature, are less robust). What's so wrong with formulating truths so that they are immutable, anyway?

What have you got against being so careful and reasonable when thinking or communicating about the world?



Strict knowledge is stuff that cannot turn out to be false: stuff like the absence of round squares, no part being greater than the whole, no socialism or communism ever outperforming capitalism, etc. -- stuff you can prove not just beyond reasonable doubt, but beyond the possibility of doubt (for a rational thinker).
You are advocating infallibilism. But Objectivism and Popper reject infallibilism.

See the first response in this post.

The qualified formulation in no way clashes with the initial proposition, viz.: “Within the context of the circumstances so far known, A bloods are compatible” This proposition represented real knowledge when it was first reached, and it still does so; in fact, like all properly formulated truths, this truth is immutable. Within the context initially specified, A bloods are and always will be compatible.
That's what I was just saying (2 responses above this one).

And not all false ideas are equally good.
Aaaaaaaaaaaaaaaaaaaaaaaagh ...

While literally true, the sentence is borderline arbitrary, like saying that not all unicorns are equally imaginary. Also, you contradicted yourself by saying that there is some merit in falsehood. Elsewhere, you had raked falsehood over the coals as being irrational (something with which I agree).

I love that analogy.
What analogy?


I'll recreate what I had in mind:
Suppose Bob is considering two ideas that contradict each other. And Bob doesn't have a conclusive answer, Bob doesn't have knowledge of which is right. Bob using or believing either one is irrational. No "weight of the evidence" or anything else can change this. ...


Two contradicting ideas can both equally account for known facts. That's easy. They could both (retrospectively) predict the same set of known facts. And that doesn't tell you which is right.

But I can tell you which one is right: they are both wrong (considered as a whole).
??? Oh ... you mean the Coherence Theory of Truth isn't right, don't you?

Why? Because accounting for (empirical, observational) facts is not the entire task of ideas. A better idea (really group of ideas) would be able to deal with its rival(s). Neither idea (group of ideas) is good enough since something contradicts it and it's unable to refute the contradicting idea(s). We should look for ideas that are good enough, powerful enough, thorough enough that they can address stuff that tries to contradict them.
Okay, I get it. Good point, Elliot. The best ideas are those that can refute their rivals. Good stuff.

Ed

(Edited by Ed Thompson on 7/08, 10:06pm)


Post 11

Monday, July 8, 2013 - 10:03pm
For clarity, here are 2 groups of conjectures -- one group is knowledge and the other is opinion:

Knowledge
2 + 2 = 4
Round squares don't exist.
Climates change.
Canada is north of Mexico.
Elephants are bigger than fleas.
You can't roll a "13" with 2 normal dice.
The Morning Star and the Evening Star are the same celestial body: Venus.


Opinion
Two heads are better than one.

Big Foot doesn't exist.

Man is the cause of recent global warming.
Alternatives:
Man is the cause of more than one-fourth of recent global warming.
Man is the cause of more than one-half of recent global warming.
Man is the cause of more than three-fourths of recent global warming.

Canada is a better place to raise your kids than Mexico.

Elephants are happier than fleas.

If I roll a pair of dice 12 times, then I'll get a "10" exactly once -- and I'll get a "7" more times than any other number. (A quick check of these odds follows the list.)

"... the average temperature of Venus is 460 degrees Celsius. ... The temperature on Venus does not vary like it does on our home world. It is 460 degrees day or night, at the poles or at the equator."
[source: http://www.universetoday.com/14306/temperature-of-venus/]
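For the dice item in the list above, here is a quick check (a sketch added for illustration, not part of the original post) of why it belongs under "opinion": the claim is at best probable, never certain.

```python
# Quick check of the dice claim above: exact probability plus a rough simulation.
from math import comb
import random
from collections import Counter

p10 = 3 / 36  # ways to roll a 10 with two dice: (4,6), (5,5), (6,4)
p_exactly_one_10 = comb(12, 1) * p10 * (1 - p10) ** 11
print(round(p_exactly_one_10, 3))  # about 0.38 -- likely-ish, but far from certain

# Monte Carlo estimate: how often is 7 strictly the most frequent sum in 12 rolls?
random.seed(0)
trials = 100_000
hits = 0
for _ in range(trials):
    counts = Counter(random.randint(1, 6) + random.randint(1, 6) for _ in range(12))
    ranked = counts.most_common()
    if ranked[0][0] == 7 and (len(ranked) == 1 or ranked[0][1] > ranked[1][1]):
        hits += 1
print(hits / trials)  # well below 1.0 -- probable at best, hence "opinion"
```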

Ed

Bonus Material
Mortimer Adler on Karl Popper (Ten Philosophical Mistakes, p 101-):
According to Sir Karl Popper, one of the most eminent philosophers of science in our time, the line of demarcation between knowledge and mere opinion is determined by one criterion: falsifiability by empirical evidence, by observed phenomena. ... Though it is couched in somewhat different terms, Popper thus repeats the conclusion Hume reached in his Enquiry. The reasons for reaching the opposite conclusion are as follows.

In the first place, what has been overlooked is the distinction between common and special experience. The empirical evidence to which science and history appeal is evidence that consists in observed data produced by methodical investigation ...

In sharp contrast to such special experience, available only to those who engage in investigation, there is the everyday, ordinary experience that all of us have during the waking hours of our life. ...


Post 12

Wednesday, July 10, 2013 - 4:57pm

Two contradicting ideas can both equally account for known facts. That's easy. They could both (retrospectively) predict the same set of known facts. And that doesn't tell you which is right.

I double dog dare you to offer an example.



According to Sir Karl Popper, one of the most eminent philosophers of science in our time, ...  
what has been overlooked is the distinction between common and special experience.
... In sharp contrast to such special experience, available only to those who engage in investigation, there is the everyday, ordinary experience ... 

 

When I want baloney, I go to the grocery.  No distinction exists between the special investigations of a researcher and everyday life.  That other people claim such distinctions is part of the explanation for all the misery on Earth.  We Objectivists are here to correct that mistake.


... unless, of course, you have an actual concrete example...  (something in short supply in all of your otherwise fascinating posts.)


Post 13

Thursday, July 11, 2013 - 2:12pm
According to Sir Karl Popper, one of the most eminent philosophers of science in our time, the line of demarcation between knowledge and mere opinion is determined by one criterion: falsifiability by empirical evidence, by observed phenomena.

This is not even close to Popper's position. It's slander.

I agree with Michael's reply to this junk. I especially liked, "We Objectivists are here to correct that mistake." I agree with that :)

"In the first place, what has been overlooked is the distinction between common and special experience. " -- i'm not sure if this is a misstatement of popper's position or meant to be a criticism of popper for overlooking it. in any case it's not popper's position and this division between common and special experience is junk. (if someone here think it's good, please explain why)

Two contradicting ideas can both equally account for known facts. That's easy. They could both (retrospectively) predict the same set of known facts. And that doesn't tell you which is right.

I double dog dare you to offer an example.

Consider this hypothetical set of known facts (observation data):

We know of 500 rocks and have observations of them. Each rock has a weight of less than 1000 kg. (We also have various other observations of them which aren't relevant to my example.)

The following theories equally account for this but contradict each other:

1) all rocks weigh less than 2000 kg
2) all rocks weigh less than 2000 kg, except there are 5 rocks that weigh more
3) all rocks weigh less than 3000 kg (including more than 10 in the 2100-2200 kg range)
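
The point can be made mechanical. Below is a minimal sketch, not from the post; the simulated weights and variable names are hypothetical stand-ins for the observation data. It checks each theory against 500 observed rocks, none over 1000 kg, and finds that no observation refutes any of the three -- even though the theories contradict one another about unobserved rocks.

---------------------------------------------------
# Minimal consistency check (Python) -- illustrative only, not from the post.
# The 500 weights are simulated stand-ins for the hypothetical observations.
import random

random.seed(42)
observed_kg = [random.uniform(0.1, 999.9) for _ in range(500)]  # every observed rock < 1000 kg

# Each theory, reduced to the claim it makes about any OBSERVED rock.
# T2 and T3 also posit heavy unobserved rocks, which no observation here can refute.
theories = {
    "T1: all rocks < 2000 kg": lambda w: w < 2000,
    "T2: all rocks < 2000 kg, except 5 heavier (unobserved) ones": lambda w: w < 2000,
    "T3: all rocks < 3000 kg (incl. >10 unobserved rocks of 2100-2200 kg)": lambda w: w < 3000,
}

for name, fits in theories.items():
    refuted = any(not fits(w) for w in observed_kg)
    print(name, "->", "refuted" if refuted else "consistent with all 500 observations")
---------------------------------------------------

Note that T1 and T2 make identical claims about every observed rock; they differ only about rocks nobody has weighed, which is exactly why the observation data cannot decide between them.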



Regarding other stuff (in this thread and others), it's on my to-do list to answer some more things.
(Edited by Elliot Temple on 7/11, 2:13pm)


Post 14

Thursday, July 11, 2013 - 7:14pm
Mike (and Elliot),
According to Sir Karl Popper, one of the most eminent philosophers of science in our time, ...  
what has been overlooked is the distinction between common and special experience.
... In sharp contrast to such special experience, available only to those who engage in investigation, there is the everyday, ordinary experience ... 
When I want baloney, I go to the grocery.  No distinction exists between the special investigations of a researcher and everyday life.  That other people claim such distinctions is part of the explanation for all the misery on Earth.  We Objectivists are here to correct that mistake.

Mike, I couldn't tell 2 things:

1) Whether you were addressing me or Elliot
2) Whether you were operating under the assumption that Popper makes the noted distinction or not

I'll clarify things in the next post.

Ed


Post 15

Thursday, July 11, 2013 - 8:40pm
Here's the quote again:
According to Sir Karl Popper, one of the most eminent philosophers of science in our time, the line of demarcation between knowledge and mere opinion is determined by one criterion: falsifiability by empirical evidence, by observed phenomena. ... Though it is couched in somewhat different terms, Popper thus repeats the conclusion Hume reached in his Enquiry. The reasons for reaching the opposite conclusion are as follows.

In the first place, what has been overlooked is the distinction between common and special experience. The empirical evidence to which science and history appeal is evidence that consists in observed data produced by methodical investigation ...

In sharp contrast to such special experience, available only to those who engage in investigation, there is the everyday, ordinary experience that all of us have during the waking hours of our life. ...
Adler is saying that Popper followed in Hume's footsteps regarding what should be committed to the flames as sophistry and illusion. Hume had said that books contain many things, but that there are only 2 kinds of knowledge -- math and (scientific) experiment -- so if a book contains neither, it is mere opinion. Adler then produces a 2-part counter-argument to the conclusion reached first by Hume and later, in similar form, by Popper:


1) not all human knowledge is gained by orthodox, official, scientific experiments
2) the empirical evidence that you experience is not the only thing that can refute conjectures: rational argument can, too

As to (1), it is already alluded to in the quote, where Adler points out that we learn things just by going about our daily lives.
As to (2), it builds on (1): common experience can lead you to general truths, and a conjecture that contradicts those general truths can then be shown to be false purely by reasoning well about it.

An easy example is time-sensitive: it takes a snapshot of human knowledge while that knowledge was still in its infancy, but growing ...

Counterpoint 1 -- the appeal to the common experience of man
---------------------------------------------------
Caveman 1
You know, Rutherford, the reason that rain falls is because of my crazy dancing and chanting, while I look up at the sky and wave the tibular and fibular bones of large herbivores.

Caveman 2
That cannot be correct, Winston, because in my experience, as well as in the experience of others with whom I frequently meet for dining and conversation -- it pretty much rains whenever it wants to.

Caveman 1
Au contraire, my good man. Behold, and I will show you that my bold conjecture corresponds to reality! ... [starts dancing crazily]

Caveman 2
But there's not even a cloud in the sky, Winston!

Caveman 1
Clouds are impertinent to climate phenomena. My dancing is everything. You won't believe it. It is like magic or something.

[no rain comes; Caveman 1 grows tired]
---------------------------------------------------


Counterpoint 2 -- conjecture refutation via rational argument
---------------------------------------------------
Caveman 1
You know, Rutherford, the Morning Star and the Evening Star are two different celestial bodies.

Caveman 2
That cannot be correct, Winston, because their orbital patterns are mirror images of each other, placing them (at different times) precisely where they'd be if they were both just one single celestial body -- and, because of logic, 2 material bodies cannot both occupy the same space at the same time. That would be a contradiction. Your conjecture involves positing a contradiction, and can therefore be thrown out before witnessing any more empirical evidence on the matter.

Caveman 1
Well, what about this stone here? I think that I can get blood from a stone. I'm going to set up experiments that ...

Caveman 2
Don't waste your time, Winston. I can tell you before the experiments are run that you will never get blood from a stone. It has to do with what stones are -- i.e., the fundamental nature of stones -- along with what blood is.

Caveman 1
Rutherford, are you telling me that the nature of something limits what kind of actions it could possibly perform?

Caveman 2
Yes, Winston. A is A. Existence exists, and existence is identity.
---------------------------------------------------

Ed

(Edited by Ed Thompson on 7/11, 8:42pm)


Sanction: 12, No Sanction: 0
Post 16

Friday, July 12, 2013 - 8:19am
"This criterion of demarcation between empirical and non-empirical theories I have also called the criterion of falsifiability or the criterion of refutability. It does not imply that irrefutable theories are false. Nor does it imply that they are meaningless. But it does imply that, as long as we cannot describe what a possible refutation of a certain theory would be like, that theory may be regarded as laying outside the field of empirical science." (Karl Popper, The Myth of the Framework, p. 88)

Logical vs. empirical. Kantian-influenced. If you can't test it, it's not "empirical," so nobody knows. What's the consequence of this idea? What does it allow in under the name of science? Observational studies concerning claims of telepathy, clairvoyance, contact with ghosts... If he wanted to keep out pseudoscience, he didn't do it right.

Post 17

Friday, July 12, 2013 - 9:34pm
Steve,

Suppose there was a million dollars under my couch that I hadn't been aware of before. Wouldn't THAT be awesome? Show me the method and we will see.

Here:

http://rebirthofreason.com/Forum/Dissent/0268.shtml

Post 18

Friday, July 12, 2013 - 9:37pm
Ed,

Popper's views are very different from Hume's, and I'm not getting what your Adler stuff has to do with Popper.


Also, I don't know why you file "Big Foot doesn't exist" under "opinion" rather than "knowledge". I think it's knowledge, and I think Objectivism would agree. Objectivism rejects the arbitrary. I think Peikoff would not give that statement the status "possible" (in his terminology).

Post 19

Friday, July 12, 2013 - 9:41pm
Steve,

I said: "Someone is researching X and stumbles across a relationship between entities that is unrelated to any problem or question he, or anyone in his field, is currently addressing."

You replied: "Why did he notice it at all? Why did it stand out to him? Why was it worth remembering, recording, etc? Because it does have some relevance to some problem of some interest to him."

No. He might have noticed it because he is a programmer and his mind has learned to spot patterns.


In other words, he's interested in certain types of patterns. They are relevant to his problems, his questions, his interests, his expectations, etc.

I think you're just treating the word "problem" a lot more narrowly than I intend it.

There are always many patterns one could spot, and one only pays attention to certain ones. Why those? In some broad sense those are the ones he cares about, the ones of note to him, the ones with some kind of relevance to some kind of problem he has.

None of us is perfect at knowing what to keep and what to reject at the first instance that we conceive of it.


I wasn't claiming instant perfection!

