I'm not sure how Rand or Peikoff deal with epistemological certainty.
Peikoff says that "certain knowledge" and "knowledge" are the same thing, and that knowledge must be conclusive: you have to reach certainty, i.e. conclusiveness, to achieve knowledge. (The first half of the essay is basically a summary of Peikoff, written so that it'd also be acceptable to Popperians.)
This is a bit of an academic standard for knowledge. We can define 'knowledge' in this fashion, and I have no objections to that. But in practice we have to act, day in and day out, on 'knowledge' that is incomplete, unchecked, flawed to a minor degree, etc.
All the "in practice" stuff is dealt with by context. The standard of knowledge is not an impossible perfection, and the context of our knowledge is human life, in practice, day in and day out, etc.
But accepting contradictions or arbitrary exceptions is not practical. Ever.
If you understand that two ideas contradict, don't act on both pragmatically while claiming that, since perfection is not possible to man, it's OK. It's not OK. If you know of a contradiction, you'd better figure out something better.
If two ideas contradict, don't just act on one either. Why that one, instead of the other? You must rationally address the contradiction. You must only act on ideas you don't already know are wrong. That's what rationality requires; rationality is absolute and uncompromising; there is no other way.
Someone is researching X and stumbles across a relationship between entities that is unrelated to any problem or question he, or anyone in his field, is currently addressing.
Why did he notice it at all? Why did it stand out to him? Why was it worth remembering, recording, etc? Because it does have some relevance to some problem of some interest to him.
Further, there is a statement you made that seems to restrict or redefine "context" - "It's still knowledge in its original, intended context."
This is Peikoff's idea, I wasn't really expecting objections... But I think it's true. The original value (knowledge) is still there, even later when we learn better. Or put my way: if idea A answers issue B, it still answers it even when we learn that C is a better issue and that A is inadequate to answer C.
You're disagreeing with Peikoff. I don't really care to argue the point currently. Except also you say, "What remained knowledge is that blood has compatibility issues and blood type is one of the compatibility factors." which seems more like you agree than disagree.
This might be useful as some part of a methodology, but I don't think it is the best approach.
So, let's step back and look at the bigger picture here. I came here saying that Popper and Objectivism are more compatible than people realize. And I'm told no, no, that's wrong, etc. (Well, OK, most of the shouting was at other Objectivist forums, but I still think disagreement was the general reaction here.)
Then what happens? Well, I write some stuff straight out of Peikoff which is Popper-compatible. And what objection do I get? Peikoff is wrong.
One of the things I notice here is that if Peikoff is wrong, that doesn't make Popper and Peikoff incompatible. Maybe they were both wrong together. And I agree with them, but you don't. That'd be kind of funny, right? Like, who is the outsider now?
I wasn't really looking to defend Objectivism as part of the Popper conversation. Maybe I'll have to. Maybe, hopefully, someone else here could help me out and defend Peikoff for me? :) I'd like to focus on some other things currently.
Often we are left with conflicting conclusions and we need to act before we can successfully refute all but one.
Suppose for a moment there was a method by which we could always act on a single conclusion with no conflict. Would that be good? Would that be awesome? Would that be an improvement on Objectivist epistemology?
Now, if you say, "Yes, you have refuted everything else as being less than that jumbled-up probability statement, so it is 'knowledge'," then we agree.
I would have said "wrong" instead of "less than". But yes, if you see something wrong with each original statement, but nothing wrong with your new statement (whether it is a "blend" or not), then you have one non-refuted statement and no conflict.
2.) As long as we operate as rational beings, we have the option to thaw it out and re-examine it.
If you can unfreeze, what meaning does "freeze" even have? (Certainly not ITOE's meaning, where "frozen" is a bad and permanent thing. Which is why I chose the word at all: I thought that if I used it the same way Rand did, it would be easier to understand than if I used some other word.)
But you have said that to be knowledge an idea must be absolute, certain, and conclusive. And we can't look into the future so we will never be able to tell if the idea won't require correction. And you don't believe we can determine truth of an idea without omniscience, which we don't have. So really, you are saying that we always act out of irrationality. I know that you don't intend to say that, but it is a conflict that exists in your statements.
No. Perfection or infallibility is not the standard of knowledge or rationality. Lacking those doesn't mean we're acting irrationally. What I'm hearing is that you think "absolute", "certain", and "conclusive" mean infallible. But that isn't how Peikoff/Objectivism means them. (This is one of the reasons I don't think they are very good terminology, btw. Too open to misinterpretation as infallibilist.)
But, OK, if you think certainty/etc contradicts fallible rational knowledge, then what is your position? Do you reject certainty, or reject knowledge, or reject fallibility?
It is a misuse of the term "irrationality" to say that someone is being irrational when he uses reasoning as the means of making a choice among alternatives for which he has less than conclusive or final knowledge.
I'm saying the correct method of reasoning will reach an idea with no contradictions, no conflicts, no known flaws. Anything less means you have multiple contradicting ideas, and you don't address the contradiction but act anyway in spite of it -- it means acting on an idea that you know is flawed, despite knowing it's flawed. It means going against your own mind (since you judge an idea is wrong, but act on it anyway).
I say that "weight of the evidence" sets part of the overall context, and if it is right, then it is valid knowledge. I suspect that unless you agree with that, you'd have to say that there is zero knowledge content in anything in statistics... if you want to remain consistent.
There's nothing wrong with statistics as such. But they aren't epistemology. They work fine in domains where they apply. And epistemology can deal with statistical theories just the same as with any other theories.
I agree with that statement. But it is in conflict with what you said earlier.
What is the conflict?
You've created false alternatives. I can use weighted evidence and "best survives criticism" as methods for creating some kind of probability matrix to govern actions, to judge outcomes, to be able to act when needed, till I can replace one or both of the conflicting ideas with one that is conclusive.
Sounds like you want to be a Bayesian or something. I'm well aware people think that kind of stuff works. I think it doesn't and have arguments. But first: Where, may I ask, is Objectivism's defense of this approach, and full explanation of how it can and does work?
Millions of choices relate to ideas that are so tiny in the scheme of things that it isn't worth the time to parse them out for a final conclusive winner.
Why are you willing to accept automatic, lightning-fast irrationality, but not automatic, lightning-fast rational conclusive thinking? If there's going to be an automatic, lightning-fast unconscious thought process -- OK, no problem -- why think it's the wrong one? (Don't its many successes indicate it's the right, rational one?)
I think the problem here, actually, is that you don't know how rational, conclusive thinking works. That's fine in that I didn't explain it yet. But you're assuming things about it, e.g. that it's unable to deal with limits on time and attention that some topics should get. But it has to deal with that -- and does. When you assume it doesn't you're really just assuming it's wrong and doesn't work -- rather than asking how it does work.
We have to 'weight' our use of limited resources, particularly time, to give neither more nor less than adequate rational attention to the risk, reward, and certainty factors relative to the context needing an action.
If you do something along those lines in a particular case, and it is the right best thing to do, and you know it, then what contradicts it? What conflict remains? What isn't conclusive?