Mike,
I agree 100% that the geocentric model makes it impossible to accurately measure the orbits of Venus and Mars, and probably the orbits of the other planets as well. But the measurement is still approximately achievable. The estimates made by the ancients of the sizes of the Moon and the Sun, and of the distances to them, were also "wrong" but still very good.
But the Moon and Sun have to be viewed differently under the geocentric model. Under that model, it is possible to compute and relate the size of the Moon's orbit (around the Earth) to the size of the Sun's "orbit." This is because the radii from the Earth to the Moon and to the Sun follow a consistent pattern (they can legitimately be "averaged"). But -- with real-time measurements -- the radii from the Earth to Mars and to Venus will not follow a consistent pattern, and therefore can only be "illegitimately" averaged.
For instance, say you took all of your measurements during the days, weeks, or months when Venus happened to be at its greatest distance from Earth and when Mars happened to be mostly at its closest (i.e., in the nearest part of its solar orbit) -- or vice versa. What you would find -- after a possibly months-long investigation of the matter -- is that the relative sizes of their orbits come out either far too similar or far too different, compared to reality.
When you call that an "approximation," you do an injustice to the word. By that loose standard, humans are "approximately" the size of African bull elephants (when viewed from far away). It is no different from taking length measurements with a hypothetical ruler which, itself, changes in length. :-)
In order to really compute and relate their orbits to one another -- to mitigate the confounding effect of taking measurements from only partial, and possibly extreme, sections of the orbits -- you would have to keep measuring for at least several months or years, if not several decades (possibly exceeding the life expectancy of an adult human). It remains to be seen whether ancient investigators took measurements over a timespan extending beyond their own lifetimes -- the only case in which such approximations could be valid (non-accidental) ones.
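To make the point concrete, here is a minimal sketch (mine, not anything the ancients could have run) that assumes circular, coplanar orbits and compares the "average" Earth-to-planet distance from a short, unluckily timed observation window against the long-run average:

```python
import numpy as np

# Circular, coplanar orbits (radii in AU, periods in years) -- a crude
# stand-in for the real solar system, good enough to show the bias.
RADII = {"earth": 1.0, "venus": 0.723, "mars": 1.524}
PERIODS = {"earth": 1.0, "venus": 0.615, "mars": 1.881}

def earth_planet_distance(planet, t_years):
    """Earth-to-planet distance at time t (all bodies start aligned)."""
    theta_e = 2 * np.pi * t_years / PERIODS["earth"]
    theta_p = 2 * np.pi * t_years / PERIODS[planet]
    ex, ey = RADII["earth"] * np.cos(theta_e), RADII["earth"] * np.sin(theta_e)
    px, py = RADII[planet] * np.cos(theta_p), RADII[planet] * np.sin(theta_p)
    return np.hypot(px - ex, py - ey)

t_decades = np.linspace(0, 50, 100_000)  # fifty years of observations
t_months = np.linspace(0, 0.2, 1_000)    # a few unluckily timed months

for planet in ("venus", "mars"):
    long_avg = earth_planet_distance(planet, t_decades).mean()
    short_avg = earth_planet_distance(planet, t_months).mean()
    print(f"{planet}: 50-year average {long_avg:.2f} AU, "
          f"short-window average {short_avg:.2f} AU")
```

With the window starting near the closest approach of both planets, the short-window "radii" come out at a fraction of the long-run figures, and the Venus-to-Mars ratio comes out wrong as well -- exactly the kind of illegitimate average described above.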
But Descartes' theory was more correct than Newton's.
Newton's theory did not involve errors of postulation regarding the fundamental nature of light; Descartes' theory did. Also, Newton's theory successfully explained the relationship of colored light to white light; Descartes' theory did not. So ... how can you sit there and say Descartes' theory was more correct? Besides the fundamental nature of light and the relationship of white light to colored light, what do you bring to the table?
The scientific method ("hypothetico-deductive") is not what Harriman claims it is. ... Testable theories are the heart of the matter.
No. A truly scientific method is not the old, tired, worn-out, 5-step "hypothetico-deductive" method of (1) Observe, (2) Guess, (3) Predict, (4) Test, (5) Repeat. You could even say that Harriman's whole book is about debunking that oft-hoodwinking canard. The conventional scientific "hypothesis testing" literature is rife with gross inadequacies (see references 1-9 below) -- at least when judged from a philosophically matured epistemology superior to the one currently in widespread professional use.
These are inadequacies predicted by, and therefore altogether unsurprising to, Harriman and myself (and Peikoff, for that matter). Indiscriminate use -- indeed, worship -- of the faulty "hypothetico-deductive" model has all but turned the scientific world upside-down and inside-out. The reason that, in the cases below, science is all but falling apart at the seams is the inferiority of the "hypothetico-deductive" method.
Ed
References: (1) [abstract] Hail the impossible: p-values, evidence, and likelihood.
First, p is uniformly distributed under the null hypothesis and can therefore never indicate evidence for the null. Second, p is conditioned solely on the null hypothesis and is therefore unsuited to quantify evidence, because evidence is always relative in the sense of being evidence for or against a hypothesis relative to another hypothesis. Third, p designates probability of obtaining evidence (given the null), rather than strength of evidence. Fourth, p depends on unobserved data and subjective intentions and therefore implies, given the evidential interpretation, that the evidential strength of observed data depends on things that did not happen and subjective intentions.
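The first claim, at least, is easy to check numerically. A minimal sketch (my scenario and sample sizes, not the paper's): simulate many two-sample t-tests in which the null hypothesis is actually true, and look at how the p-values distribute themselves.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 20,000 two-sample t-tests where the null is TRUE: both groups are
# drawn from the same normal distribution, so any "effect" is noise.
p_values = np.array([
    stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
    for _ in range(20_000)
])

# Under a true null, p is uniform on [0, 1]: each decile catches
# roughly 10% of the p-values, so p = 0.95 is no more "evidence for
# the null" than p = 0.55.
counts, _ = np.histogram(p_values, bins=10, range=(0.0, 1.0))
print(counts / len(p_values))   # every entry comes out near 0.10
```

The flat histogram is exactly why a large p cannot be read as support for the null.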
(2) [full study] Principled versus statistical thinking in diagnosis and treatment of stroke.
Evidence-based medicine must be liberated from bondage to probability-based statistics, which is founded on the notion of chance and random processes, and instead become established on the determinate processes of molecular biology, based on the universal principles of biological science.
(3) [abstract] Thinking about diagnostic thinking: a 30-year perspective.
... (c) to summarize criticisms of the hypothesis-testing model and to show how these led to greater emphasis on the role of clinical experience and prior knowledge in diagnostic reasoning; ...
(4) [abstract] Clinical trials are often false positive: a review of simple methods to control this problem.
Statistical hypothesis testing is much like gambling. If, with one statistical test, your chance of a significant result is 5%, then, after 20 tests, it will increase to 40%. This result is based on the play of chance.
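The arithmetic behind claims like this is the family-wise error rate. Under the textbook assumption of independent tests it actually comes out somewhat higher than 40% for 20 tests; here is a sketch, with the Bonferroni correction as one example of the "simple methods" the title mentions (whether the paper uses exactly that correction is my assumption):

```python
# Chance of at least one false-positive "significant" result across
# n independent tests, each run at significance level alpha.
def family_wise_error(n_tests: int, alpha: float = 0.05) -> float:
    return 1.0 - (1.0 - alpha) ** n_tests

print(family_wise_error(1))             # 0.05  -- one test
print(family_wise_error(20))            # ~0.64 -- twenty independent tests

# Bonferroni correction: test each hypothesis at alpha / n_tests.
print(family_wise_error(20, 0.05 / 20)) # ~0.049 -- back under control
```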
(5) [abstract] Will the dilemma of evidence-based surgery ever be resolved?
The randomized controlled trial (RCT) is the most scientifically rigorous means of hypothesis testing in epidemiology. Discrepancies between established surgical and other interventions and best available evidence are common.
(6) [abstract] Level of evidence and therapeutic evaluation: need for more thoughts.
The first dimension deals with the design of the study, i.e. the extent to which bias is avoided or managed, the second with the quality of incorporated data. A third dimension specific to therapeutic evaluation focuses on the clinical relevance of the tested hypothesis. ...
The bulk of existing scales of level of evidence concentrate on methodology. Some may include the second dimension but none embrace the three of them. ...
Inconsistent existing scales prevent the emergence of a generally agreed standard. Therefore, there is a need to further specify the concept of level of evidence in therapy evaluation and design scales encompassing the three above-mentioned dimensions: methodology of experiment, quality of data, and clinical relevance of the primary criterion.
(7) [abstract] The mainstream hypothesis that LDL cholesterol drives atherosclerosis may have been falsified by non-invasive imaging of coronary artery plaque burden and progression.
That LDL cholesterol drives atherosclerosis is a widely if not almost universally held belief, and this belief strongly influences the mainstream approach to coronary heart disease. ...
... studies that address the efficacy of interventions and practices aimed at the primary prevention of heart disease almost always use event-based endpoints such as fatal or non-fatal myocardial infarction or unstable angina. These endpoints do not directly relate to the primary prevention of silent atherosclerosis and to apply these results to asymptomatic individuals in this context involves an extrapolation. ...
Consistent with earlier autopsy studies, the use of electron beam tomography and contrast enhanced CT angiography techniques have created a large body of evidence which appears to falsify this hypothesis. The large number of null results for the association between serum LDL cholesterol levels and the prevalence or progression of both calcified and non-calcified plaque in the appropriate vascular bed and involving large numbers of men and women over a wide range of age, ethnic background, plaque burden and cholesterol levels cannot be easily dismissed.
(8) [abstract] A practical solution to the pervasive problems of p values.
... p values are based on data that were never observed, and these hypothetical data are themselves influenced by subjective intentions. Moreover, p values do not quantify statistical evidence.
(9) [abstract] Prior convictions: Bayesian approaches to the analysis and interpretation of clinical megatrials.
Large, randomized clinical trials ("megatrials") are key drivers of modern cardiovascular practice, since they are cited frequently as the authoritative foundation for evidence-based management policies. Nevertheless, fundamental limitations in the conventional approach to statistical hypothesis testing undermine the scientific basis of the conclusions drawn from these trials.
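For a sense of what the Bayesian alternative looks like in practice, here is a minimal sketch of a skeptical-prior reanalysis of a trial effect (a conjugate normal-normal update on the log hazard-ratio scale; every number below is made up for illustration, none comes from the paper):

```python
import math

# Skeptical prior: treatment effect is probably near zero.
prior_mean, prior_sd = 0.0, 0.10
# Hypothetical trial result: hazard ratio ~0.80, nominally significant.
trial_log_hr, trial_se = math.log(0.80), 0.09

# Precision-weighted average of prior and trial estimate.
prior_prec = 1 / prior_sd**2
data_prec = 1 / trial_se**2
post_prec = prior_prec + data_prec
post_mean = (prior_prec * prior_mean + data_prec * trial_log_hr) / post_prec
post_sd = post_prec ** -0.5

# Prints roughly: posterior HR ~ 0.88 (95% CrI 0.78-1.01)
print(f"posterior HR ~ {math.exp(post_mean):.2f} "
      f"(95% CrI {math.exp(post_mean - 1.96 * post_sd):.2f}"
      f"-{math.exp(post_mean + 1.96 * post_sd):.2f})")
```

A nominally impressive trial estimate gets pulled toward no effect by the skeptical prior -- the posterior interval here brushes 1.0 -- which is the general kind of tempering such Bayesian reanalyses of megatrials produce.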