A while back I had not read my email for a day or so and found several waiting in my ‘IN’ box. Two were from Perry. Oh no. What have I done now? In the halls of debate, I am not very housebroken. Fearing a ‘please cease and desist’ is in store, I open one. To my startled surprise, Perry is offering me a byline and contributing privileges! Startled is an understatement. Apparently I am doing something that Perry actually wants to continue. But what?
I have one all-encompassing principle. ‘Reality.’ This is a more complicated choice than it may first seem, but still an easy one.
There are very few guidelines for contributors to Samizdata. Basically, the content guidelines are simple. The key position statement is “liberty – good, big government – bad”. Surprisingly, this is the one I will need to be careful with. For it is possible, within my principles, to hold a collectivist position that is both philosophically consistent and morally sound. But while I am acknowledging that a collectivist can be morally sound and philosophically consistent, I am also mustering my defences and preparing for a ‘debate’ that can only be resolved by physical contest. I have made my choice and there is no middle ground. Unlike many here, I do not believe morality is a continuum from collectivist – bad, to individualist – good. In my philosophy of morality, the middle ground is immoral. Relativism, subjectivism and pragmatism are my immoralities. Unprincipled decision making. Good morals are at my end of the spectrum. Evil morals are at the other end of the spectrum. But immorality is to be found in the middle. And to those on the other end of the collectivist–individualist spectrum, I am the evil and they are the good. That is as it should be. Like matter and anti-matter, the legitimacy of each is not in question. But sustained contact is impossible.
I am not sure if this understanding of my outlook has been obvious. I have never explained it here to any extent, but this position has been the foundation of every stance I take. Though incomplete and sometimes misinformed, there is a coherent and consistent framework available for all of my rational decisions. With Perry’s generosity, I will lay it out for those of you who are doubtful or curious. Digging into the essence of many years spent winnowing philosophies and developing my own moral base, I will try to clean some of it up enough to present a little at a time.
Two absolute moralities exist, and they carry contrary and absolute moral imperatives.
I am and will continue to be amazed that I am being offered this forum. I would like to say that I will behave and conduct myself with restraint and deference, but I think if Perry had wanted me to change, he would have said so. Or more likely, not have made the offer. Feel free to offer advice. In addition to the two bases for morality, I will occasionally post conversation starters that tickle my interest. Sometimes truisms of the ‘why didn’t I think of that?’ sort. Certain topics, particularly actions that have irreparable consequences, will cause me to blow a gasket from time to time. And suggestions for any topics and themes you want me to pursue are much welcomed.
Your comments are always well argued, so a good choice methinks. Congratulations on the “promotion”.
I’m sure Perry’s offer of a by-line to me must be stuck in my spam-filter or some such oversight 🙂
Heh, believe that if you can.
Congratulations dude. Looking forward to reading your thoughts without always clicking the “comments” button!
Hearty congratulations my friend!!!
You Hamlet
Me Yorick
Ah well!! : )
Does the middle ground contain immorality or amorality?
What you are “allowed” here seems to be what you can get away with, without getting defeated in argument or shown to be wrong, which is as it should be. The ‘natural selection’ of memes.
Congratulations on the promotion, and don’t bother with the “restraint and deference” unless you feel at the time it is the right thing to do. If you take a wrong step, I am sure you will be told!
Thanks all.
Pa A, I considered that point and chose to use “immoral” in its meaning of against moral principles, and “amoral” in its first sense, neither moral nor immoral.
Midwesterner,
Let me join the throng! Congrats on getting posting privileges you lucky sod!
At the risk of sounding like I’m damning with faint praise (which I’m not) for a long while you have been a welcome voice of reason, moderation and reflection on this blog. Some of us have a hair-trigger response over the “Post” button but you’ve always thought it through.
You were one hell of a relief after the Verity era.
I look forward to you opening many threads with your thoughtful, closely reasoned and deeply rational approach.
Reason? Moderation!? REFLECTION!
Oh my! Where, oh where did I go wrong? Hopefully I can pop an irrational steam valve on at least a couple of occasions.
One of my favorite characters on the original Saturday Night Live was Emily Litella. It seems to happen to all contributors sooner or later; I wait with trepidation for the day I must amend a post and say “Never mind.”
I look forward to hearing more from you.
“Liberty Good! Big Government Bad!”
Indeed! Watching the Big Governments of the US and UK trying to force Liberty upon Iraq at the point of a bayonet, in the guise of some absurd concept called the “War on Terror” proves the wisdom of that simple statement beyond any question.
Congrats MidWest!
I enjoy your comments. Look forward to reading your posts.
Welcome aboard Midwesterner. We have occasionally disagreed on these boards but you have been consistently civil and have good points to make. Remember to have fun!
Congratulations Midwesterner. Your rationality and calm analysis are a delight, and it’s good to see them rewarded.
But let’s not hang around. To work!
You wrote:
I find:
[All from Collins English Dictionary and Thesaurus, 2000 (reprinted 2006). I’m less keen on the last one (too many words), but consistency holds that I should stick with it.]
These are words with meanings. Without substantial agreement on meanings, we are lost in rational argument (or agreement) with each other. And I’m going to go with the given meanings rather than emote over love or hate of the words themselves.
Now, I’m with you on subjectivism; this is because its definition implies a totality of view: that there is no single absolute moral value at all. Total rubbish.
Relativism, I’m less sure about, especially on aesthetics and a fair bit on morals. I might give you truth, in a strict sense.
On these two philosophies, such agreement as we might have on where to draw the line of disapproval must be set somewhere in between: perhaps there is a “middle ground” that does not compromise your (other) principles.
Moving on to pragmatism, is not this dual definition effectively that of Karl Popper’s “The Logic of Scientific Discovery”? And surely this philosophy (if one holds with it, as I do) extends to the whole of our physical existence (i.e. excluding religion and other faith-based and aesthetic issues). A valid theory is surely, according to Popper, just a concise statement of what happens in practice.
I am particularly protective of “pragmatism” as a perfectly acceptable thing. In so far as you object to it, I suspect it is because the word is sometimes misused in support of arguments that do not hold water, but do require consideration of grey and difficult issues.
My personal philosophy on grey issues is based around Bayesian Risk Analysis, which could be viewed as the mathematics of decision-making in the middle ground. Doubtless it will come up again on some of your later postings.
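For concreteness, here is a minimal sketch of the kind of calculation that phrase usually refers to: choose the action with the lowest expected loss under your current, uncertain beliefs. The states, actions and loss numbers below are invented purely for illustration (Python):

# Minimal sketch of Bayesian risk analysis: minimise expected loss
# under uncertain beliefs. All states, actions and numbers here are
# invented assumptions, not anything from the discussion above.
beliefs = {"state_A": 0.6, "state_B": 0.4}   # P(state); must sum to 1

# loss[action][state]: how costly each action is if that state is true
loss = {
    "act_on_A": {"state_A": 0.0, "state_B": 10.0},
    "act_on_B": {"state_A": 8.0, "state_B": 0.0},
    "hedge":    {"state_A": 3.0, "state_B": 3.0},  # a "middle ground" action
}

def expected_loss(action):
    return sum(beliefs[s] * loss[action][s] for s in beliefs)

for action in loss:
    print(action, "expected loss:", expected_loss(action))
print("Bayes-optimal choice:", min(loss, key=expected_loss))

With these made-up numbers the hedging action wins, which is exactly the sense in which the middle ground can be the rational place to stand.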
The issue here might seem one of linguistic pedantry and, taken to extreme, perhaps it is. However, I see the danger that you have taken three words that have (in certain uses, such as perhaps the earlier work of William James, so I surmise) much to be said against them. Such things may be bad, but you have taken each word with more general meaning and redefined it as meaning only the unloved sub-definition. On such a basis are built many false but seemingly attractive arguments. I don’t think that’s really your style, but …
Now, as to the badness of the middle ground, the issue is, I think, the same. You’ve heard the phrase used in these arguments of false compromise, and you don’t like it. Or rather we both don’t like the arguments and you have labelled them with the phrase. Well, I think the middle ground is a good place to be, when dealing with two absolutist proponents and when the “truth” lies between them: in actuality as well as in the seeking of compromise agreement. It might even be a step along the way in obtaining agreement, when one of the absolutists is right and the other wrong: though it is not usually my way. The problem with that way is that, if one stops halfway, an injustice is done: to the truth and to the correct absolutist. And, of course, some use such dislike of the middle ground as an argument, such as: “Either you’re with me or against me!”
None of this stops it being wrong to insist on a general case against the middle ground, or against relativism.
As to pragmatism, the Bayesian in me induces the idea that your approach to moral philosophy is just heuristic. Heuristics are useful; they can get you to the optimal solution sooner, though (on their own) perhaps only nearby and (more rarely) way off. But heuristics are not a substitute either for the right objective function for a particular case, nor for its optimisation.
“Enough! Enough!” I hear, and it is.
Congratulations again, and best regards
Except that is not what they are doing at all. If they were, I really do not see the problem. Liberty is always won at bayonet point at some point in the process. What they are doing is foisting the War on Drugs on Afghanistan and ‘Democracy’ on Iraq, which is not liberty at all, though clearly both places are better off without the Taliban and Ba’athism.
Nigel wrote:
I reply: “Ooooooh! My BRAIN hurts…”
Indeed, I’m delighted that you are going to contribute more formally to this e-publication, a site that I now regard as a must-read along with the Dilblog & David Copperfield.
Heh, believe that if you can.
Indeed. Wait until you receive one of the fearsome “editorial spankings” that come down from up high. You will be reduced to a quivering, whimpering mass of nerves (like the rest of us) in no time at all.
Michael, (and Adriana)
I expect to receive many despite my best efforts. But having been brought up in attempted strict adherence to a book of guidelines that contains over 1000 closely spaced double column pages (in the King James version), making diligent but often futile attempts to understand and obey instruction is a state I’ve known all too well.
That and having two older sisters. Anyone with older sisters knows the fate of those answering to benevolent despots. 🙂
I shall do my very best to avoid opprobrium.
Congratulations Midwesterner. I look forward to reading your contributions.
Nigel: Huh? *as brain dribbles out of ears*
Congratulations. Just don’t start another abortion thread for a long time.
Nigel,
Absolutely. I’m relieved that things are starting with concern for what words mean. Unfortunately, once a word is coined it begins to acquire more and more meanings until it becomes either useless or merely emotive. It becomes essential to specify a narrower meaning or coin a new word.
Specifically on “pragmatic”, it is a word that is used in two environments. In ordinary and popular usage, it means to break one value to achieve a ‘higher’ one. For example, paying a bribe to build a road.
In philosophy it has a myriad of meanings. Some are sound and I can agree with parts of them. But Wikipedia summarizes the problem in this phrase –
If that doesn’t make you want to grab a bucket and start heaving …
Since we have a very general readership, I am using this meaning and the general usage one.
Well, that is what this is all about isn’t it. You’ve made your case and, as time permits me, I will make mine.
Marvellous. Welcome aboard, Midwesterner. I’ve much enjoyed your input on previous comments threads, and hopefully you’ll take the heat off us (temporarily) Uninspired Samizdatistas.
Did I say that? Oops!
“it means to break one value to achieve a ‘higher’ one. For example…,” deliberately neglecting measures effective for the prevention of crime in order to preserve privacy and civil liberties.
Value systems that never internally conflict are either empty or convoluted, and I suspect we are all pragmatists in the given sense. The argument (in my opinion) is not usually that one value should never override another, but that people get the order wrong – placing short-term interests (having a road, being seen to be taking action on crime) ahead of long-term ones (preventing a corrupt society, civil liberties). Given that people apply the word ‘pragmatism’ to these sorts of situations, to define it as you have done above buys into the idea that the short-term values are higher, and thus misidentifies the problem; you end up thinking the problem is that you’re weighing values against each other, rather than it being that you are applying the wrong weights.
I suspect the Wikipedia discussion is not directed at moral beliefs so much as it is at idealisations. In mathematics and physics we make use of many statements and assumptions which are, technically, known to be false. Frictionless planes, rigid bodies, Newton’s law of gravitation… They’re wrong, and we know they’re wrong, but we assume them to be true because it makes it easier to do the maths if you do. When you look at a clock to tell the time, you do not apply relativistic adjustments and the propagation delay for the light to travel from the clock to you – you believe that when you look at a clock the time you see is the time, and that the time is the same everywhere. It is simply not useful to think of it in those terms. These are extreme examples, but the same applies to many other beliefs we have; it is just that scientists have taken care to explore and analyse these questions explicitly. All beliefs are approximate in this sense, and some are simply more approximate than others. I have to say, I wouldn’t have used the words ‘true’ and ‘false’ in this context, at least, not without explaining that they are referring to degrees of accuracy in representing reality, and that ‘true’ in this case meant a sufficiently accurate representation of reality for practical purposes.
Of course, if you believe moral beliefs are objective reflections of a moral reality, then they would probably also be approximations limited by our understanding and judgement (although we might believe otherwise). Hopefully they would be expressed accurately enough for our practical purpose of making judgements in everyday circumstances, but we could never be sure we knew what the true morally right course would be. That doesn’t mean that the actual objective truth of moral questions is dependent on their usefulness, but the beliefs we hold about them are only the shadows on the wall of Plato’s cave.
Welcome, Midwesterner.
Perry,
[B]oth places are better off without the Taliban and Ba’athism – well other things being equal, they would be, of course. But they aren’t. I’m fairly sure Afghanistan is better off without the Taliban; but it looks to me like Iraq, save for Kurdistan, is currently worse off.
“…but it looks to me like Iraq, save for Kurdistan, is currently worse off.”
Depends on whether you think the Iraqis would be better off trading liberty for some temporary security… 🙂
(I gather from other sources that they’re doing a lot better in Iraq than the MSM would like you to believe, but I’m not going to get dragged into an argument substantiating that, so you can take it as read that you’ve won that one if you like.)
And Germany was worse off in 1946 than it had been under Hitler in 1938. Unless you were one of the few surviving Jews in Germany, of course.
Nigel,
You do realize that Popper was decisively non-Bayesian, don’t you? The inherent flaws of Bayesian methodology are at the root of his case.
How do these two statements square with the need for agreed-upon meanings? Are we to accept all meanings and of necessity discard the word as useless?
So when some cars on a road are absolutely determined to drive west, and some cars are absolutely determined to drive east, the middle ground is a “good place to be”? Or perhaps you are advocating for immobility? Perpetuation of the status quo? Contrary intentions must always result in stasis.
Did I get this right? Did you just say (in a roundabout way) that it is usually your way for right and wrong to seek a compromise?
Yes, I did. You just acknowledged its wrongness, but held the position. You acknowledge the existence of right and wrong, but believe they should seek a compromise.
I am not “buying into” an idea. It is a generally accepted common usage of “pragmatic“. Our entire political system is founded on the idea that short term values are higher. It is not a problem to be resolved by recalculating the weights of values, but rather to stop forcing holders of entirely different values of “good” to agree on the means of achieving “good“. The reason you find value systems to be, of necessity either “internally conflicted“, “empty” or “convoluted” is because you are trying to reconcile contradictory values.
Regarding your last two paragraphs, it’s called “rounding“. We round to the nearest useful answer. This is NOT relativism. This is rounding. Relativism is the belief that there are no absolute truths.
Reality itself “is”. No more need be said of it. “Morality” in my arguments is the degree of coherence that actions, relative to intentions, have with the laws of reality. This means that differing goals have differing moralities.
I agree with most of that (the bits of it I understood, anyway).
I wasn’t trying to say that you bought into it, but that accepting the common usage did so. “Generally accepted” it might be, but that is the root of the problem. All I can say is that I personally don’t accept it. It seems to be begging the question.
I’m not entirely sure what you mean by “holders of entirely different values”. Do you mean individuals each of whom has multiple values, or several people, each having their own consistent set of values distinct from those of the others? I was talking about the first case, but I’m not sure if you mean the second. If you mean the second, then I would agree with the statement.
I agree that what I was talking about in the last two paragraphs is not relativism, which was sort of my point. The Wikipedia discussion is not promoting relativism, saying that there is no objective truth, but instead pointing out that we have no direct access to objective truth, and that our beliefs are approximations that are valued according to their usefulness. The philosophical pragmatist argues that, because we have no means of determining objective truth and falsity, the way we use the words when applied to beliefs has to be only in the latter sense – it is unprovable conjecture that our beliefs are actually related to ‘truth’ in the sense of being objective reality. The discussion only ‘relativises’ our beliefs about truths, not the truths themselves. The Wikipedia quote doesn’t explain it very well.
In the last paragraph, I’m afraid I couldn’t follow your definition of morality at all, although it looks interesting. I’m not sure what you mean by ‘laws of reality’; do you mean moral laws? Or physics? I don’t know how you judge ‘coherence’, and I don’t understand whether the differing moralities attached to the different goals are different moral systems, or different moral judgements, or whether you are saying that morality is multi-valued: that a goal can be both good and bad simultaneously, depending on how you look at it. That doesn’t seem to fit with what you say above, but I don’t see how else to interpret it.
Darn, this is getting complicated. I don’t want to scare everyone away with my first contribution.
There are two kinds of values. I’ll call them ‘goal’ values, and ‘means’ values. To make a deliberately uncharged example, if ‘sweet’ is a goal value, then ‘add sugar’ is a means value. In light of the goal of ‘sweet’, adding sugar is the morally right thing to do.
Another person may value ‘dry’ in the sense of not sweet. For them, actions to remove the sugar are the morally right thing to do.
But these two moralities cannot be reconciled. They must act on different targets. We here at Samizdata hold goal values that some others do not share. Reconciliation of our actions with theirs is not possible.
Re the Wikipedia discussion –
This is one of the areas where this flavor of pragmatism falls down. If our beliefs have no direct basis in objective truth, then how can their “usefulness” have one? Relativism seems to me to be a way of denying objective knowledge of facts, but retaining objective authority by linking it to something else that does not have direct access to objective truth. I’m having a hard time seeing how this is not a denial of objective reality through a two step process.
Popper went into “naive falsification”, falsifying statements versus falsifying theories. Without having given it any prior thought, I suppose that the validity of our perceptions must be factored into the theory and their usefulness is part of the validating of theories.
Err…
… part of the falsification testing of theories.
Well done Midwesterner. At the risk of repetition, I have enjoyed your comments as well. Plus a shift toward Yankee domination — this is a great day.
I just banged my knee on the kitchen table.
It hurt.
Can I have my Doctorate now,
or will you post it? : )
The table hurt? How could you tell?
Or is that one of the objective facts that you can’t have direct access to?
It’s a talking one, one of the props from Fantasia I bought on eBay.
It said I have very boney knees and could do with putting on weight. : )
Ah, right. Understood.
On the first part, I agree absolutely.
On the second part, what I said was that we have no direct access. We do, however, have indirect access through experience, experiment, and the way our brains work. There is a connection between reality and our beliefs, but it is an imperfect channel and fallible. It is Plato’s cave.
A common error made by relativists (and others) is to fail to distinguish philosophical falsity from practical-purposes just plain wrong. Kepler made observations on the positions of the planets, from which Newton derived his law of gravitation. Newton’s gravity is a belief, connected to reality via the channel of observation. It is objectively true in the sense that where it says the planets will be is almost exactly where it turns out they are – it is useful for prediction. You cannot give Newtonian gravity and the Ancient Greek religious theory of the Gods riding across the sky equal status as to being truth. Telescopic observation of the Sun totally fails to reveal Helios and his bull-drawn chariot. 🙂
Nevertheless, we now know that Newton’s theory is less true than General Relativity. If you insist on black and white definitions, it is quite definitely false; and yet I would suggest that there is a pragmatic truth in saying that gravity operates according to an inverse square law, and would cheerfully fly in a spaceship guided on that basis without worrying that the universe would disagree with my beliefs.
And while we know that General Relativity is better, it also is a reflection in the imperfect mirror of observation, and is believed to be likewise false in an absolute objective sense. When I say we have no direct access to truth, what I mean is that we can never find out how gravity really works, and know that it must be so; we can only ever develop greater confidence in ever more practically accurate beliefs. The only sense in which we can call a theory of gravity “the truth” is whether it can be used to correctly predict the motions of the planets. That doesn’t mean there isn’t an objective truth as to how gravity works, or that all beliefs are equally valid, or that our beliefs have no connection to reality. (Even Helios and his bulls were descriptive of how the real world behaved.)
My apologies for driving things so far off-topic: relativism to relativity is quite a big step. 🙂 My point is that the philosophers are talking about an abstruse interpretation of truth of no practical relevance to the politics. For moral relativists to make use of this scientific subtlety to try to force through their moral equivalence (and I have heard the argument put forward in any number of places) is a gross abuse of the concept; philosophical pragmatism intends no such meaning.
PS. I don’t think we’ll scare anyone away. Hopefully they’ll just put it down to “Pa talking nonsense, again” and ignore it.
I’ll look forward to hearing your thoughts on the badness of the middle ground – if time permits.
Congrats, Midwesterner. I’ve always enjoyed your contributions and you richly deserve to be elevated to the Samizdata cardinalate.
I too fear the Samizdata smite more than a ruler across the knuckles by a nun. The graphic is fearsome.
Don’t hold back, though. It is easier to ask forgiveness than permission. Ah, there’s something useful from my Catholic formation. LOL
With a Wisconsinite contributor once again, for the first time since Robert Clayton Dean moved to warmer climes, perhaps we can see posts like:
I suppose Jim Doyle never had much of a chance with people like me, in that I hated him almost before he was born.
RAB, well, if you or the table hurt too much, you should have a doctorate check it out. He’ll probably tell you to take two aspersions and call the orpheus in the mourning. Or not. Maybe he’ll make a lyre out of me. If I get him upset, he just might give me a good Thracian. Whoa, I seem to be having a bad spell. One with classical symptoms.
Triticale,
I promise to not start releasing diatribes on the role of government employee (and other) unions in Wisconsin politics. Or maybe I will. Bought, paid for and delivered is the thought that comes first to mind. My uncle was in UAW. He was helpless and furious with the way the union was spending his membership dues. In a closed shop, of course.
K L,
The graphic inspires reverential awe. I half expect Charlton Heston to materialize. Not sure if he would smite with a cane or a Glock.
Looking forward to it, congrats!
Now, let the games begin 🙂
Pa, respectfully, I think you’ve lost track of the discussion. We are not discussing truth. We are discussing the process of discovering truth. You just introduced a couple of millennia and several cultures’ worth of acquired knowledge into the comparison without batting an eye.
Bearing that in mind, it is quite likely that Gods in the sky and Newtonian gravity had equal merit with respect to the body of knowledge available at their respective times. Gods in the sky was simply a theory that was replaced by better ones. Many of them in fact before Newtonian gravity came along. And as you point out, more did and will come after Newtonian gravity.
Interjection, am I right in assuming your willingness to take a space flight guided by Newtonian methods has a lot to do with the speed of the craft? Would you be so willing as speeds approach the speed of light?
A big problem I have with your way of reasoning is that you condition on facts, but at the same time admit your perceptions of them are only perceptions. And yet on the other side of the statement, you also have uncertainty in the form of hypotheses.
I prefer to recognize the ultimate truth of the facts, whatever they may be, and accept that the theory and its perception methodology may have flaws. When you condition on (potentially flawed) perceptions to place probabilities on almost certainly flawed hypotheses, you don’t have any certain statements possible. When I put reality up against a hypothesis with its observation methods, at least some certain statements are possible.
Example A: That data points ABC which probably equal 123 do so because hypotheses Alpha and Beta are true with 30% and 20% probabilities, respectively. Not falsifiable.
Example B: That this hypothesis will observe data points ABC to equal 123. This one is clearly a meaningful statement in that it is readily falsifiable.
Conditioning on the data is only ever capable of making probabilistic speculations. Oftentimes that is very useful. But positive statements of knowledge are impossible.
I’m not saying this well. I’m probably using terms wrongly. (Wrongly?) Does it help if I say that observation methodology is inherently part of the hypothesis? And by conditioning on the data you are in effect putting observational uncertainty on both the data and the hypothesis. This is what makes it impossible to have a meaningful statement conditioned on the data. I think.
Bedtime. I’m done. Got to go get firewood, feed the stove and so, to bed.
Excellent! So we are discussing the process of discovering truth. What process is that, then, exactly? Because the process we have followed has led us from Gods in the sky to General Relativity, and we still have not discovered truth – unless you can say there is “more” truth in Relativity than Gods; as if truth could be measured on a continuum. The process discovers ever more accurate approximations – more useful beliefs – but to equate the process of discovering useful beliefs to the discovery of truth is pragmatism precisely. I suspect I’m agreeing with you again, but am unsure.
Interjection: as it happens the speed compared to c makes little difference, but to answer in the spirit it was asked, yes, my confidence relies on the weakness of gravity in this neighbourhood, and I would be less confident orbiting a neutron star or black hole.
I’m not sure what you’re getting at with the A and B examples, because you appear to be varying two things at the same time. On the one hand you have the question of whether you are testing a theory or testing a prediction. On the other, you have a probabilistic versus a deterministic prediction.
To take the first option first, if you correctly predict the position of Mercury at a particular date and time, and ask whether it is because this or that theory of gravity is true, you cannot give an answer. Correct. If two hypotheses make the same prediction, experiment gives no information on which of them is true. You can only distinguish them observationally where they make different predictions. However, that doesn’t make the theories unfalsifiable – if you observe Mercury somewhere else, both theories are found false.
Evidence only tests predictive power, which is a vital property of a theory but not the only one: we also have properties like simplicity, elegance, and generality. Falsifiability and experiment have nothing to say on these topics – you cannot distinguish (A) ‘inverse square law’ from (B) ‘inverse square law except on the 1st April 2635’ by experiment, or at least, not for another 630 years, but there are clear reasons for preferring theory (A) even without any evidence to distinguish it. I also believe that we can be certain in our preference, (where certainty is interpreted with the usual for-all-practical-purposes caveats) although not for evidential reasons.
On to the second option: whether probabilistic predictions are falsifiable. If two hypotheses predict different probabilities for the range of possible events, then they can be distinguished. If you see a particular outcome, then your belief in each hypothesis is adjusted according to the ratio of the independent probabilities of that event implied by each hypothesis. If they both offer the same probability, the ratio is 1 and there is no change to the belief. If one predicts the event with ten times the probability of the other, then seeing that outcome favours the one hypothesis over the other. Observing a sufficient number of sufficiently large ratios can make one hypothesis ‘certain’ (usual caveats) and the other falsified. The fact that a theory’s predictions are probabilistic makes no difference to its falsifiability – if the ratios are not sufficiently high, you just need to wait for more evidence.
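A minimal sketch of that updating process, with an invented pair of hypotheses and invented data (Python; this is just the likelihood-ratio bookkeeping described above, not anyone’s actual method):

import math

# Two probabilistic hypotheses about a coin (both invented for illustration)
p_heads = {"H1": 0.5,   # H1: the coin is fair
           "H2": 0.8}   # H2: the coin is biased 80% towards heads

observations = "HHTHHHHTHH"   # a pretend experimental record

log_odds = 0.0   # log10[P(H1)/P(H2)]; zero means no initial preference
for o in observations:
    p1 = p_heads["H1"] if o == "H" else 1 - p_heads["H1"]
    p2 = p_heads["H2"] if o == "H" else 1 - p_heads["H2"]
    log_odds += math.log10(p1 / p2)   # evidence contributed by this observation

print("log10 odds for H1 over H2:", round(log_odds, 2))
# A negative value favours H2; keep observing until the magnitude crosses
# whatever threshold you are prepared to treat as 'certain enough'.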
Positive statements of knowledge about the world are definitely possible, so long as you understand ‘knowledge’ to mean ‘believed with a high for-all-practical-purposes probability’. The only place you can get better is in speaking of the mental world, where statements can sometimes be true by definition. The way people think is approximate; you have to be a little bit approximate when you come to judge their beliefs.
And Germany was worse off in 1946 than it had been under Hitler in 1938.
Except carpet bombing cities, as was done to Germany, is incompatible with the “you don’t care for the Iraqis like I do” argument you made in favor of your war. This claim of yours, that you get to decide for the Iraqis that anything is better than the defanged Saddam they had at the start of this, is simply a refusal to accept any responsibility for any bad outcome of your war.
BTW, Perry, any comments on the Iraq report and the new Sec Def’s admission we aren’t winning in Iraq?
Pa,
An ever-expanding realm of unbroken hypotheses. Maybe to culminate in an unbroken grand unified theory.
Yes. I do not equate the search for truth with a test of usefulness. Don’t get me wrong, getting closer to truth can sometimes be very useful. But neither truth nor usefulness derives from the other. Totally false ideas (Gods in the sky) can be very useful for predicting.
My mistake. I was assuming the exploration of interesting places. Nothing so useful as “this neighbourhood”. Your point that Newtonian gravity is “useful” is well taken. My point is that further exploration up to our limits requires knowledge at our limits. This is where a search for truth has it all over a search for usefulness.
re the examples. I think non-verbally. What some call thinking in pictures. (Much more to it than that, though.) I have to put them in words and, to borrow from Winnie the Pooh, “It muddles me rather.” And I’m having to learn terms that I don’t know in order to communicate. I’ll try.
Actually, they do. Complexity, awkwardness and specificity are just limitations on the scope (predictive potential) of the statement. They are restrictions on what can be falsified. The fewer qualifications in a statement, the greater, objectively, its potential, and subjectively, its beauty.
But that is wrong. Positive and Probable are not synonyms. It is possible to make a statement that “this act will yield this result”. The hypothesis that “if one looks, one will see white swans” will be proven if one indeed observes white swans. Or, conversely, one can hypothesise “if one looks, one will never see a white swan” and break the hypothesis by observing a white swan. The key here is understanding that the process and uncertainties of observation are part of the hypothesis.
By putting the observation process with the hypothesis on one side of the reality/perception divide, we can make a positive statement. But your method of conditioning on the data puts an uncertainty-introducing factor (the observation process) on the reality side of the divide. It can make reality (which is certain) appear to be uncertain.
I hope this comment makes sense. I’ve been unable to get more than a very few consecutive minutes to think today. A perpetual stream of interrupting conversations. Not enough time to think clearly.
Who me,
Your questions assume a lot of background that I wouldn’t accept as true.
I don’t think anyone here has made a “you don’t care for the Iraqis like I do” argument, unless you are referring to a discussion elsewhere (refs?), and not everybody here was or is in favour of the war. I don’t see why carpet bombing cities is necessarily incompatible with it, even if that had been the argument – if, for example, there was no other way to liberate any of the German people from the Nazis. (It is not an argument I make, I just note that the fact it could be made means the concepts are not incompatible. The argument has been made that Hiroshima and Nagasaki ultimately saved Japanese lives too.) Neither was liberating people from genocidal dictators the only or indeed the main argument for either war.

We wouldn’t claim to decide anything for the Iraqis, they’re perfectly capable of deciding it for themselves, and a lot of them are extremely angry and bewildered that anyone could think they were better off under Saddam. Nobody would claim that anything was better, but I don’t accept that the current state of Iraq is exceptional. I don’t accept that Saddam was “defanged” – from the point of view of the residents in Iraq he certainly wasn’t, and nor do I accept that he was defanged in the international military/political arena either.

We don’t refuse to accept responsibility for what we have done – we’re quite proud of it, in fact – but nor do we accept that actions taken by other players in Iraq are our responsibility too. That is like blaming Churchill for the London Blitz. Nor do I accept that the outcome is bad – we’re not there yet, and don’t know what the outcome will be, but so far it is going a lot better than I expected. But even if it was going wrong, I wouldn’t see that as a reason to regret having tried, nor necessarily better than having let things continue on the track they were following.
The Iraq report is garbage, and whether we’re winning depends on what you think we’re trying to do. In a sense, I’d say we aren’t winning at home, because people like you are putting up this tremendous struggle to defeat our will to persevere and make us retreat. Iraq is only round 1, and the Islamists and supportive Socialists are hoping to scare us into not resisting them any further, so they can continue their plan to take over the world. We’re winning in Iraq and losing in Europe and America, because the lies and propaganda spread by our enemies are where the real fight is.
Liberty has to be fought for, and the war has not yet been won. With their promotion of moral equivalence and relative morality the enemies of freedom have us fighting ourselves; have us believing we are no better than they, that our governments are no better than their dictators, that our history is bloodier and our empires more rapacious and our history more shameful. That it is we who are intolerant and that any criticism of their abuses is motivated by racism and cultural arrogance, that to fight for what we believe in is unprovoked aggression.
It is 1938 again and national socialists are demanding that we hand over Czechoslovakia (only it’s now Israel), the Jews are threatened with extermination, and the Soviets seem to be playing on both sides again. And there are still people calling for “peace in our time”. Will we never learn?
Mid,
An unbroken string of hypotheses, yes, but can any of them be known to be objectively true? How can we know that any of our beliefs are or will be other than Gods in the sky?
I agree on exploring the limits. I agree on the relevance of generality, simplicity and beauty, although I still think their relevance is not based on experimental evidence.
Are you trying to restrict science to only deterministic, completely predictable results? Philosophers usually argue about the colour of ravens rather than swans – I assume that was some sort of reference to Hempel’s Paradox? – but the hypothesis “if one looks, you will never see a unicorn” is famously hard to refute. Never is a long time. It sounds to me like you are saying the only certain knowledge is past observation. Raw factual predictions have not yet been tested, and so cannot be falsified until they are no longer predictions. It is the theory that makes the prediction you have to test if you want to get anywhere. The theory that “all swans are white” is tested by looking at swans (or, arguably, non-white objects) but is statistical. The theory that “this swan is white”, once confirmed by observation, is far more certain; but it lacks generality.
I hope I didn’t say reality wasn’t certain. (I didn’t even mention Quantum mechanics! 😉 ) What I was talking about was our beliefs about reality, which only ever achieve the for-all-practical-purposes variety of certainty. Even direct observation can be wrong, as a study of optical illusions demonstrates, and humans are very good at fooling themselves in other ways too.
In order to know how to go on, I need to know if you followed my placement of the uncertainty of observation in the hypothesis, not on the data.
My point being that if one accepts that reality is certain, then preserving that certainty requires acknowledging the uncertainty of observation. If reality and perception are viewed as being different things that together enable a positive statement, then the uncertainty of observation must be placed in the hypothesis. The only alternative is to treat our observations as infallible or to treat reality as uncertain.
I’m having trouble saying this with words, so the more you tell me what you think I’m saying, the better I understand how it sounds.
The uncertainty of observation can certainly be put into the hypothesis, if one so wishes. A third place you could put it is in the testing of the hypothesis. This is like the difference between a hypothesis “it has been shown to be more than 99% certain that all swans are white” which has expressed the uncertainty in the hypothesis and the statement itself can be proved absolutely 100%, and “all swans are white” which may be proved to be more than 99% certain, but never achieves 100%.
It isn’t usual to put the uncertainty in the hypothesis like that – once it’s been demonstrated it isn’t usually regarded as a hypothesis any more – but there’s no logical reason not to.
I don’t have a problem with that. Is this what you mean?
I don’t understand the third place. The two places I am referring to are (objective) reality and (subjective) thought. Perception is the lens between them. (I may be using perception in a different way than I did earlier. I’m trying to be more specific.)
I view the transition as a lens of perception through which we look to see if the hypothesis passes the false/not false test. Since we are on the hypothesis side of the perception lens, we cannot condition on the other side because we can only ever have certain knowledge of our perceptions.
I’m trying, really I am. But I may be confusing things by trying to translate them.
I would agree with the way you put it in your last comment.
We have reality, we have the hypothesis/theory/belief which aspires to represent it, and we have the gates of perception through which we look to see if the belief passes the test – three parts. I was proposing that we ascribe the uncertainty to the perception process, and keep perception and belief separate. This means the belief about the world will always be uncertain, because we only have access to the world via perception. But it is also possible to lump perception and belief together, which enables you to make definitive statements known to be true. Instead of making statements about the world, you are now making statements about your perception of the world. By incorporating the uncertainty of perception into the hypothesis, rather than having it sit separately between reality and the hypothesis, it is possible to make statements of which one can be absolutely certain. (Unless you are a solipsist, but I regard that as not to be taken seriously.)
You can have either

Reality -> perception -> hypothesis
(certain)  (uncertainty)  (uncertain)

or

Reality -> (perception -> hypothesis)
(certain)  (uncertainty     uncertain)
           (certain, taken as a unit)
The second diagram was what I thought you were getting at, in saying you wanted to include the uncertainty in the hypothesis in order to make certain statements. What I was saying with my third place is that you can put it external to the hypothesis, in the perception and resultant degree of belief in the hypothesis, without at any time doubting the certainty of reality: as in the first diagram above.
Yes. That is what I said. Now I have to take a few minutes and translate it back into the way I think to make sure I said what I meant.
Is there a symbol or word for “conditioned on”? If so, please substitute it for the “+” sign.
Conditioning on the facts
Reality -> perception -> hypotheses (not certain)
(fact)     (uncertainty)  (uncertain facts)
so
uncertain hypotheses (they’re not ‘proven’ yet)
+
uncertain facts
=
no certain statement possible
or
Conditioning on a single hypothesis
Reality -> (perception -> hypothesis)
(certain)  (uncertainty     uncertain)
           (certain, taken as a unit)
so
uncertain hypothesis (including perception)
+
certain facts
=
certain statement
Hhmmm?
Yes, I think so.
The terminology looks a little odd to me, and I might be misinterpreting, but if it means the same as what you said before, then yes.
I think the reason I am still a little uncomfortable with it is that we have two dividers to reconcile. (Or recategorize one)
“perception” as a lens between (objective) fact and (subjective) hypothesis
and
“conditioned on” as a bridge between (objective) facts and (subjective) hypothesis
It seems to me that to be valid, “perception” and “conditioned on” are the two directions through the same lens. At least they are definitely sitting in the same place.
I’m needing to think on this. But at least now, for the first time today, I can have an hour or so uninterrupted to focus on this.
The terminology may look a little odd to you; it looks very odd to me. But, it works …
Phew! Glad we cleared that one up.
Mine round then I think.
What you all having….
Hi RAB,
I’ll have a good strong stout. Blacker than yesterday’s coffee and twice as strong.
Pa A,
I came up with the following way of showing that you cannot possibly make a certain statement if you first condition the hypothesis on the data but that you can if you condition on the hypothesis alone.
To condition on the data, first you observe data, then you use those observations to make a hypothesis, use that hypothesis to make a prediction, then you test that prediction.
Therefore, using D for real data, O for a perception of data, H for a hypothesis, P for prediction, R for result, and L1, L2 etc. for the uncertainty of each pass through the ‘lens’ of perception, it would look like the following –
D + L1 = O
O -> H -> P
D + L2 = R
(D + L1 -> H) / (D + L2) = P / R
To be true, P must equal R, therefore, H must be true and L1 must equal L2. Likely, but not provable.
It only works when L1 and L2 etc are assumed to be the same. But since that cannot be known precisely or with certainty, it can only be estimated as a probability.
To condition on the hypothesis, make a theory, use that theory to make a prediction, and test that prediction.
H -> P
D + L1 = O
To be true, P must equal D + L1. Easily possible.
Notice that just by eliminating the step of conditioning the hypothesis on the data, the problem goes away. Because then you are conditioning on the hypothesis.
I don’t know if there’s enough here to convey what I’m trying to say. Did this help or just make matters worse? And this isn’t the only part of the practice that I have a problem with. The biggest concern I have is that P is always conditioned on O, therefore, except for a failed hypothesis, R is also conditioned on O. This could explain why our greatest discoveries seem to always result from mistakes. 🙂
Midwesterner: I only have one older sister, but I definitely understand.
Mid,
I have a feeling that I’m about to give an impromptu lecture on probability, statistics, and hypothesis testing. This is so far off the normal byways of this site, and is likely to be so big, I’m concerned that the site editors might look at your first post, see it mutate into a dissertation on advanced mathematics, and ask themselves if they really made the right decision. 🙂
This is liable to get even more complicated than it has been so far. Are you really sure you want to chase this thread to the end?
Pa,
In the original post, I did say –
That leaves the topic in this thread pretty wide open. If it is a topic or theme you want me to pursue, then it is on topic.
I think this pretty much leaves the decision in your camp. I think I can say with a high degree of probability 🙂 that the conversations I have with you are some of the most enlightening I’ve had. I enjoy them very much if for nothing else than the mental calisthenics.
However, calisthenics are more rewarding to the doer than the observer so I would like to keep these kinds of discussions out of future threads.
Future thread debates will be based on the premise that our actions can and do affect our future. That is a safe starting point that all of the regulars here will be able to accept without challenge.
Based on that, I will propose and predict actions and consequences. Those proposed actions and their consequences will be the topic for debate, not epistemology. If we appear to have serious divergence due to epistemological differences, then I will post a thread exclusively for these tortuous (but oh so fun) discussions. Ayn Rand turned me onto the topic of epistemology. I hadn’t even heard of the word before I read her thoughts on it. It seems likely to me that there may be a higher than normal concentration of other people here that are also attuned to the topic.
So for the rest of this thread, if it’s a topic or theme you want me to pursue, have at it! I’ll be out for a good part of the day starting in a little while but look forward to a thought provoking read when I get back.
OK, to start off with, we observe the world through the lens of perception, and as a natural part of the way our brains work try to come up with explanations that enable us to predict it. The first problem is that there are lots of ways to explain any given set of observations, ranging from the stupid and arbitrary through to the elegant and sophisticated. It is very easy to generate an endless stream of arbitrary hypotheses – the date-dependent ones I described earlier are just one way – but there are also usually a large number of elegant and sensible sounding hypotheses possible too. They can all explain the data you have seen so far, and you can never be sure that there aren’t some other even better ones that you just haven’t thought of.
The second problem is that the real world is not simple. There are thousands of inter-related effects and influences and this makes the truth exponentially complicated. We cannot look at the whole lot, posit a hypothesis to explain it all in one go, and just test it. If each factor could influence the result in ten different ways and you have a hundred factors, there are ten raised to the power of one hundred (a googol) possible hypotheses. That’s a considerably larger number than there are atoms in the observable universe. We have to break the problem down, simplify, just look at isolated bits of it and pretend that the influence of all the rest is just randomness.
So, we can generate a hypothesis and make predictions with it. If they turn out not to fit, the hypothesis is clearly wrong. We have to be careful here because of the second problem – our hypothesis on a particular part of the problem might be right, but the result thrown off by one of the other factors we are ignoring – so we have to pretend those other factors are random, assume they obey the mathematical laws of randomness, and gather more data. If it fails to fit often enough, we can reject the hypothesis, and try another one. But if it passes the test we still cannot relax. There will still be many alternative hypotheses that we haven’t eliminated yet.
This process of forming a hypothesis and moving on to the next one when it gets rejected works, but is dreadfully slow. The space of possible hypotheses is so vast that to try to explore it by setting up camp, waiting until something goes wrong, and then moving a little way to set up camp again is not going to find you the best camp site in a hurry. You have to be more systematic about your search.
So the first thing we can do is to actively seek out alternative hypotheses. We generate them, judge them on whether they fit what we already know, are they simpler, more elegant, more general, and then we set them up in competition against each other. We examine them to find out where their predictions differ, and then deliberately set up the situation to see what will happen. This eliminates bad hypotheses at a far more rapid rate, and hones the solution very quickly.
Another problem you get is that some hypotheses are explanatory but not predictive. The classic example of this is the “the Gods did it” explanation, which is capable of explaining anything, but predicts nothing. Maybe the Gods will decide to do something else tomorrow. Another problem is that some hypotheses are generated that make the same predictions as another theory except in extreme, difficult to implement circumstances. Such theories are very difficult to test, and it may not be immediately clear which is the more elegant, so they tend to stop the process. These theories are a problem because they are not falsifiable. We therefore try to avoid them because we know they don’t face the same competition as other ideas and so we can be less sure they are any good. Theories must predict, and they must predict something different. The other stuff might be true, but it’s no good to us because it defeats our only valid method of finding that out.
So, how do we go about testing our range of hypotheses once we’ve got them? I’ll do that in the next comment.
Now we have a number of hypotheses we want to set up in competition with each other (in accordance with problem 1) and because they only deal with a part of the problem, treating all other factors as random (as discussed in problem 2), they generally make probabilistic predictions. Some theories will predict probabilities of 1 or 0, certain or impossible, and if these are mutually exclusive, you can decide between them in one go. But we are rarely so lucky. I am afraid we are doomed to explore the murky waters of statistics.
The most important aspect of the hypotheses we examine is not that they are definite, predicting near certainty or near impossibility, but that they are different. The more different they are, the easier they are to separate. If one predicts a probability of zero, and the other a probability of one in a million, they are hard to separate.
We measure the difference between hypotheses using what is called the log-likelihood ratio. First, some notation…
The probability of making an observation O given that hypothesis H is true is written P(O|H). This is a prediction. We normally have a range of observations possible O1, O2, O3…Oi… and a range of hypotheses – we’ll take two for simplicity H1 and H2, but in general there could be more.
Now, if we make an observation Oi, the log-likelihood ratio is log[P(Oi|H1)/P(Oi|H2)], that is, the logarithm of the probability assuming H1 divided by the probability assuming H2.
This is a measure of the evidence provided by observation Oi in favour of hypothesis H1 over hypothesis H2. If you swap H1 and H2 round, you get the negative of this number which is the evidence in favour of H2 over H1.
The average value of the log-likelihood ratio, averaged over all possible observations and weighted by the probability P(Oi) of them occurring under each hypothesis, is called the (Kullback-Leibler) information in the distribution in favour of that hypothesis. The sum of the informations for all hypotheses is the divergence and measures how different the predictions are. If the two hypotheses make exactly the same predictions, then the divergence is zero. If they make very different predictions, the divergence will be a large value (up to and including infinity). You want new hypotheses with large divergences from what you have already.
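To make those definitions concrete, here is a minimal sketch with two invented hypotheses over three possible observations (Python; the distributions are made up, and base-10 logarithms are used throughout):

import math

# P(Oi|H) over three possible observations, under two invented hypotheses
H1 = [0.7, 0.2, 0.1]
H2 = [0.3, 0.3, 0.4]

def information(p, q):
    # Kullback-Leibler information in favour of the hypothesis giving p:
    # the log-likelihood ratio log10[p_i/q_i], averaged under p
    return sum(pi * math.log10(pi / qi) for pi, qi in zip(p, q))

print("information for H1 over H2:", round(information(H1, H2), 3))
print("information for H2 over H1:", round(information(H2, H1), 3))
# The divergence is the sum of the two informations; it is zero exactly
# when the two hypotheses make identical predictions.
print("divergence:", round(information(H1, H2) + information(H2, H1), 3))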
So, if our initial beliefs in the hypotheses before we do the experiment are P(H1) and P(H2), we can work out a log-likelihood ratio log[P(H1)/P(H2)] that measures how much stronger one is than the other. After the experiment, we have the belief in each hypothesis given the observation P(H1|Oi) and P(H2|Oi) and a new log-likelihood ratio log[P(H1|Oi)/P(H2|Oi)] = log[P(H1)/P(H2)] + log[P(Oi|H1)/P(Oi|H2)]. This says that the evidence after the experiment is the evidence we had before the experiment plus the evidence generated in the experiment. (This formula assumes the experiment is independent of any previous evidence – that would be a complication we really don’t need.)
Now, if H1 predicts O1 with certainty and H2 says it is impossible, we have log[P(O1|H1)/P(O1|H2)] = log[1/0] = log[infinity] = infinity. We have infinite evidence, and we are absolutely certain of our conclusion; at least, in choosing between H1 and H2. H2 has to be false; whether H1 is true depends on whether there are any other hypotheses to consider. We can structure them so that there aren’t – if we put H1 = “theory X is wrong” and H2 = “theory X is correct” – so it is certainly possible even in this approach to get absolute certainty, but there are many occasions when we cannot do this.
Requiring infinite evidence is limiting, and generally unnecessary. If each experiment offers one unit of evidence (which is not unreasonable), and you perform a hundred independent tests, then you have a hundred units of evidence, which corresponds to H1 being ten raised to the power of a hundred times more likely than H2. This is so incredibly close to certain that one would have to be either insane or ignorant of what it really means to doubt it. The difference between that and certainty is of purely philosophical note. In practice, scientists are rather more easily satisfied, going for evidence thresholds around 1.3 to 2 (95% and 99% confidence). It is good enough to guide the search, and much cheaper!
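To see what those evidence numbers mean as probabilities, a sketch assuming only the two hypotheses are in play:

```python
# Convert units of base-10 evidence into the probability of H1,
# assuming H1 and H2 between them exhaust the possibilities.
def evidence_to_probability(e):
    odds = 10.0 ** e          # odds of H1 against H2
    return odds / (1 + odds)

for e in (1.3, 2, 100):
    print(e, evidence_to_probability(e))
# 1.3 -> ~0.952 (the 95% threshold), 2 -> ~0.990 (99%),
# 100 -> indistinguishable from certainty in floating point
```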
So…, to go back to your latest version:
You have your initial observations D+L1=0 that have led you in your search to a hypothesis H which makes predictions P(Ri|H). You perform the experiment and get a particular outcome R which was predicted with probability P(R|H). If P(R|H) was zero, we can immediately reject H as false. But if it was non-zero, whether equal to 1 or anything else, we cannot claim H to be true; we can only say how it has changed relative to the alternatives. If our hypothesis H predicts P(R|H) = 1, but an alternative hypothesis H’ predicts P(R|H’) = 1/2, then we have gained confidence in H by an amount log[1/(1/2)] = log[2] = 0.3. Despite H being certain and H’ having no idea, the evidence this gives us in favour of H is still relatively weak. And this is the case even when the lens of perception (L1 and L2 in your version) remains the same throughout.
So far, I think we are not in disagreement, if you know what I mean.
But in your next bit you seem to be saying that you just skip the first stage of deriving the hypothesis from observation, and just propose a hypothesis. This is not an unusual thing to do (mathematicians commonly argue on pure symmetry grounds), but the rest of the process is exactly the same. You make your prediction, you make an observation, and your confidence in the hypothesis either goes up or down. But only in very special circumstances does it go up or down to plus or minus infinity. Just because an experiment gives the right answer doesn’t mean that your hypothesis is certainly correct; it won’t be unless there are no other possible hypotheses that could predict R with non-zero probability.
So in your second case, you are still not certain. There will almost certainly be many other hypotheses that would explain all the facts, and because of the complexity of the world you are not going to be able to eliminate all those other factors that we treat as randomness to get nice neat 1 and 0 prediction probabilities.
Your only way round this is to word the hypothesis in such a way as to get round the possibility of other explanations and other contributing factors. One method, as described above, is to include the performance of the experiment into the hypothesis, making a sort of meta-experiment. So I might set MH = “H1 can be experimentally demonstrated with at least 95% confidence over H2”. The meta-hypothesis is tested against an alternative that predicts H1 can’t be so demonstrated, and so predicts a probability 0 of the experiment succeeding. If the experiment does succeed, then the meta-experiment gives an absolutely certain knowledge of just how uncertain you are about H1. It’s technically true, but it’s a cheat.
This reflects the essential feature you are looking for – you have to generate a hypothesis so that all possible alternative hypotheses predict probability zero for some outcome you can actually observe. Doing this in a negative sense is easy, but making positive statements of knowledge about the world, and setting an infinite evidence threshold at the same time, is very hard if not impossible.
That’s just the way the universe works. To quote Douglas Adams on the subject: “We apologise for the inconvenience”. 🙂
I definitely agree the middle ground is immoral. That’s because the only way to take an intentional middle position is to reject all principled positions. It’s a perfectly mechanical algorithm of anti-mind that absolutely guarantees to disagree with everyone, especially if they’re rational and right. Therefore it’s probably the only approach which truly guarantees a result that isn’t moral. As such, it’s probably the worst infamy in existence.
Julian Morrison writes:
Another strawman redefinition that is then destroyed.
Opposition should not be to “principled positions”, but to “untrue positions”. If the truth lies between two false positions, no matter how principled, it is on the middle ground.
Best regards
While we are back on the “middle ground”, Midwesterner wrote, of me:
Looking at these words, I wrote “not my usual way”. This is not a round-about way of writing “usually my way”; it means exactly the opposite.
Next, Midwesterner wrote, responding to me:
I certainly acknowledge the existence of right and wrong, and don’t recollect saying otherwise.
I think Midwesterner has misunderstood the subtlety of the point I was making here, in this particular case. It was that showing that alternative compromise positions exist (even though one knows they are not the correct or best position) can be helpful in analysing the true underlying differences between two positions, both of which seem polarised, one of which is correct, and one of which is totally wrong. Having got the party in the wrong (and perhaps the other one too) to see that analysis, one then breaks the bad news that, in this particular case, all those middle-ground positions are wrong too, and the opposing position is right.
I said I was not keen on this (“not usually my way”), and pointed out one of the dangers that I see (of ending up with a wrong compromise solution) because the last and important bit of the explaining process was missed out, or not successful.
What I am happy to give Midwesterner, and have not previously stated in this thread, is that (in at least some cases) a compromise solution can be worse than either of the somewhat wrong polarised positions (not least sometimes because the compromise is not self-consistent).
[Note aside. Most people won’t want to know this, but sometimes something similar to the above argument (of using hypothetical but actually non-existent compromise positions for analysis and explanation) is done in mathematics. What is actually a problem of discrete decision logic is reformulated as being one of continuous functions, usually so differential or integral calculus can be used to manipulate the algebra. Then, the discrete logic is reimposed to give the final solution. Those better at maths than me might know a name for this sort of thing. Lagrange Multipliers might come into it somewhere.]
Best regards
Hi Nigel, glad you’re back.
Yes. I read that wrong. I noticed my mistake some time later and have been expecting to hear from you.
WRT your paragraph beginning “I think Midwesterner …”, the case I am and will be stating and demonstrating is that those two polar positions are ‘right’. That is to say, internally and philosophically consistent. However, they are also in polar opposition. Holders of the opposing positions of equal strength will neutralize each other. I will show that efforts to reconcile them are the equivalent of trying to reconcile clockwise and counterclockwise, eastbound and westbound, positive and … well, you get the picture.
In light of this, if a compromise is viewed as a tactical position in the process of incrementalism, then it is still seeking principled conduct. But if it is seeking compromise as a goal in itself, then as Julian Morrison said, it is anti-principled.
Pa Annoyed gave me plenty to work on. I’m through his first and hope to finish the second before I go to sleep. Hopefully tomorrow I’ll have a lot more observations, distinctions, some contrariwise opinions and, of course, more questions.
Nigel, Pa and lurkers (if anyone actually reads this far)
If you want these kinds of discussions of how we think, not just what we think, to happen from time to time, let me know. Just leave a short comment in a thread suggesting that the thought process being used in the thread may be worth discussing separately from the thread’s primary topic. If it seems likely to generate some new discussion (not just repeating previous statements) then I’ll probably post it as soon as I have time to follow it through. But I *really* want to keep detailed debates of epistemology out of other threads. If there is any call for them, then I will label them that way up front in order to avoid causing irretrievable harm to any innocents who stumble into one of those threads.
PA,
Some of this is random, some of this is sequential, and some of it is redundant. Rather than trying to clean it up, I decided to post as it is before the thread is gone. Hope it is intelligible. It is in two posts.
With regard to problems one and two, there are (at least) two different kinds of hypotheses: model hypotheses and knowledge hypotheses. Model hypotheses are the ones that can’t be tested with a possibility of a certain outcome. This situation can result from a predictive model in a time frame we cannot test (the asteroid will hit Earth on its 48th orbit, in the year 3285) or one that is too complicated (it will rain in Peoria on Tuesday at 3:15PM for three minutes). These model hypotheses are only good for modeling and can only yield probabilities, not knowledge.
I propose (for this discussion) to not call a model a hypothesis. A model predicts probabilities, a hypothesis predicts a fact. If there are already existing very specific terms for this distinction, please tell me.
By that do you mean specifically “proved false”? And I’m a little confused on the rest of this paragraph. When you say that the hypothesis has parts that work and parts that don’t, how do we know this? Isn’t it better to call this a model? If it contains testable hypotheses that can be separated out, then it can be divided into hypotheses and each one tested separately.
Definitely true. How to proceed depends entirely on what one is trying to do. Mapping a shoreline versus exploring the unknown is one way to look at it. Placing a known fact (shoreline) in approximately the right place is very useful. But one can never look at the known shoreline data and guess that there is a ‘new’ world on the other side of the Atlantic. Conditioning the hypothesis on known data would never predict the new world if one is using known shoreline features to make the prediction. Conditioning on a failed hypothesis is NOT conditioning on the data that it was formed on (if that’s how it was formed). But to meet this qualification it must not be conditioned on the data that proved the hypothesis wrong. It must be conditioned on theories about the failure of the hypothesis, or something else other than data.
Usefulness is a very useful feature. Probability and usefulness go together. But so do truth and certainty. Truth is often not probable or useful. Usefulness is often not true or certain. I’m beginning to think of it as “models seek probability” and “hypotheses seek certainty”.
My case boils down to means and goals. If you are engineering (a car or a plane), do the high-probability estimate because it’s useful and efficient for purpose. But if you are doing pure research on particles or gravity, use the true-and-certain falsification method. A wrong but very predictive theory can waste a spectacular amount of time. I am particularly suspicious of quantum theory here. The truly amazing leaps in knowledge seldom come from theories based on observation. Did Einstein go out and observe light, gravity and energy?
I have an attraction to ‘elegant’ theories. For one thing, they occur entirely in the realm of the mind and are not affected by the observational lens during their forming. They are not conditioned, and therefore limited, by known data. They can be intuited based on failed hypotheses. They can also be intuited on seemingly unrelated data that inspires an idea. But mostly, they are not limited by known data.
For an example, I have a burner on the stove that cannot be turned down and released. Turn it as low as the flame will burn, and it promptly turns itself back up again. Turn it lower and it goes out, and cannot be relit without turning it up high. It has been this way for years and I’d given up on it. After our conversation in which you first mentioned Bayesian v Frequentist, I did a mental exercise to ‘solve’ this problem. I had previously tried every method of adjusting it that I could think of. I had pushed and turned, pulled and turned, turned and held for a progressively longer number of seconds (with no apparent change), tried to turn past and return rapidly… In short, I had tried everything. So now I tried a non-Bayesian method. Conditioning on a hypothesis, not the known data. After a little thought, intuition kicked in and it seemed to me to resemble something totally unrelated. Its behaviour resembled the input/feedback pattern of elastic, low-viscosity fluids (like standing a spoon up in chilled honey). I’m dealing with a mechanical valve (I thought) but what the heck. With elastic viscous fluids, it can take a very long time to assume the new position when pressure is applied. The longest I had previously held the valve in position was about 5-10 seconds. On the suggestion of this comparison, I held it for 60 seconds. It worked. My new concept of the valve is not of an all-solid mechanical device, but rather one with fluid characteristics, because it is sealed with a grease that has begun to age and get thick. I don’t know this hypothesis is correct. But it hasn’t been disproved and it works.
Using failed hypotheses to generate new ones is okay. As long as you are not conditioning on the data, but only guessing at reasons for the failures, one has the potential to step far beyond known data. “Fitting them with what we already know” defeats that benefit. Like I said, I like intuition. We might think of subconscious imagination and intuition as an alternative to conscious observation and prediction. Except one is derived from the known and the other is unfettered by anything but the reach of the mind of the thinker.
When I refer to the “Gods in the sky” hypothesis, I am specifically referring to models that predict well but with utterly false explanations. There are some very predictive myths in the history of astronomy. These are the most dangerous of the threats to the expansion of knowledge. Especially if they have a “Gods have a bad day” or other random-uncertainty trap door to cover their inadequacies. The believer is satisfied and stops looking for better hypotheses.
WRT your example of a theory that predicts strange things out at the untestable extremes, I agree with your response but add, put that strangeness into your intuition library and it may be useful when forming hypotheses.
Pa, this is funny. I’ve read all of this thoroughly. We don’t disagree on anything I can detect except for the significance of it all.
How about I make a bunch of observations and try to narrow down some words from their generally wider use, and you tell me if it changes anything fundamental?
There is a difference between predicting a probability and predicting a ratio. That difference is whether we have all of the data. If we do, we count it, calculate the ratio and make a statement of fact. If we do not have access to all of the data, then we predict a probability. This is an estimate of what the ratio would be if we could look at the total data set. We can make a prediction (say something useful and probably true) with either kind of knowledge, but we can only make a statement of fact (discover the laws of nature) on the full count.
As I understand it, probabilities are often stated as ratios, but they are in fact a ratio on a limited data set and the act of extending it into a probability turns it from a statement of fact into a supposition. But a potentially useful one 🙂
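A sketch of that distinction, using a hypothetical population of a thousand items: counted in full, the ratio is a statement of fact; estimated from a sample, it is only a supposition:

```python
import random

population = [1] * 370 + [0] * 630   # hypothetical complete data set

exact_ratio = sum(population) / len(population)   # fact: exactly 0.37

sample = random.sample(population, 50)            # limited data
estimate = sum(sample) / len(sample)              # supposition: varies by sample

print(exact_ratio, estimate)
```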
Let’s segregate statements of probability from statements of fact by calling the statements of probability models.
Predicting a 1 or 0 probability (if I understand that) is falsifiable. So it is not a model. It is either true or false.
Can you explain the meaning or interpretation of the “|” between the “O” and the “H”? Does “Oi” mean all of the observations together?
I don’t understand why the evidence before the prediction (our expectations?) is useful information. I just don’t see where it fits in. The difference between our expectations and our results has something to say about the theory, (and looks to me like what the equation is doing) so maybe I’m misparsing your sentence and that is what you are saying. That would be a representation of the theory’s quality of error, rather than simply data. But it would still have the shortcoming in number 3, below.
What is a little disconcerting is that those things you are presenting make sense. But while I am somewhat able to read the symbolism, you don’t seem to be addressing the root philosophical difference. I think the total of our difference can be summed up in the statement “probabilities are truth”.
A brief look at “truth” on Wikipedia shows the word’s meanings to be divergent almost to ambiguity, so I am learning some of the benefits of saying things with equations.
Without making value judgements on its overall usefulness, could you phrase my statements on the 10th at 5:14AM in conventional symbols and terms, and then comment on whether it A – is even a coherent statement, B – says anything meaningful, C – has the logical capacity, in any circumstance, to ever have any use.
The components I intend are
1- conditioning on the data requires perfect lenses of perception (or identically flawed) to generate an absolute statement.
because
2- The data from multiple passes through the lens of perception must be compared with each other to generate the result
3- All knowledge gained from successful hypotheses is unbreakably tied to already existing knowledge.
and
4- Not conditioning on the data avoids all of these problems.
I think the most important one is 3, because it limits the expansion of knowledge in the absence of failed hypotheses.
Can this be rephrased? If so, what is this kind of statement called?
Mid,
I’m not sure; I got the impression your distinction between model and knowledge hypotheses was in whether the predictions were practically testable. Hypotheses generally make both sorts of predictions – some are testable and others are not. The asteroid will pass through this point this year and that point next year and through the Earth in 3285. We can easily check its predictions this year and next year, but is there another hypothesis that also predicts the first two but not the third? Does it matter that we had to assume the gravity of all the other planets had no overall effect, so we could do the maths? The only distinction between hypotheses on the grounds of the testability of their predictions comes later when I discuss falsifiability. That doesn’t affect problems 1 and 2.
You don’t give an example of a knowledge hypothesis, so I can’t be sure I’m reading you right. By a fact do you mean an event with probability 1? Because as far as I am concerned, facts are already included in the discussion on probabilities as just a particular case. I do not see any useful distinction in principle.
‘If they turn out not to fit’ means the predictions don’t fit reality. That might involve being proved false, or might involve being proved sufficiently unlikely to be safely rejected. As for the rest of the paragraph, I’ll try an example. Does smoking cause cancer? I pop round the cancer ward, find a cancer sufferer, and discover he doesn’t smoke, and never has. Hypothesis disproved? No, because people’s health is affected by thousands, millions of factors – everything they eat or drink or breathe, every place they go, every thing they do, in any or every interacting combination. Any or many of which could cause or cure cancer. So what you do is you take lots of people, smokers and non-smokers, cancer sufferers and non-sufferers, and you accumulate the evidence, and you hope all those millions of other factors somehow cancel out. Because if you insist on a definite prediction – if you smoke, will you get cancer? – the answer will always be “I don’t know”.
You have no choice but to separate the hypotheses, but can they be validly separated? Do causes of cancer simply add together, or maybe you only get cancer if you’re also short of vitamin C, or maybe it is actually caused by working in polluting industries, but more industrial workers smoke than non-industrial? Does it depend on how long you smoke, or how heavily, or at what age you started, or whether you smoke more in the morning or afternoon, or the type of tobacco, or how dry the air is, or your genetic susceptibility, maybe on whether your personal God considers it to be a sin? By separating the question from the smoker’s religion, are we in fact missing the vital link? We have no time to find out, so we simply assume it doesn’t matter much and call it random.
I think maybe you do Einstein an injustice. His theories were firmly based on observation – admittedly mainly other people’s, but the idea he just sat down and thought it up with no reference to reality is no more than popular stereotype. It was based on Michelson and Morley’s experiments with light, and Faraday’s experiments with electricity, and Galileo and Newton’s observations on mechanics. Einstein’s genius was his incredible intuitive grasp of how the real world behaved, from an education based on centuries of accumulated observation. His stroke of inspiration was to take the experimental results at their word and see where they led, rather than try to fit them to the preconceived notions of how the world was supposed to be. What he did was precisely the opposite of starting with the hypothesis and fitting the facts to them.
You have given a reasonable explanation for your gas tap, but there are of course many more. It’s usual for mechanisms to expand and change shape when hot, which can change their behaviour. Fire requires a current of air to be drawn in continuously – the hot gas from the flame rising generally does this, but it could be assisted by a hot burner outlet, so what would previously have gone out now does not. There could be a safety device on the burner that prevents the risk of the flame going out and filling your kitchen with gas, and any of the above reasons could stop it working. As you realised yourself, there are many hypotheses to explain the facts, and your successful experiment still hasn’t given you certain knowledge.
Nor is this a non-Bayesian method. You initially had a hypothesis that there was some trick to getting the mechanism to stay where it was put, such as pushing or pulling. You made a prediction (whether expressed with confidence or not) and when it didn’t work first time, you tried it again. That can only be because your prediction was probabilistic – you thought it might work if you tried it often enough. Eventually, your confidence in the hypothesis fell so low that previously ignored hypotheses, originally held to be unlikely, rose to prominence. You made predictions based on one of these, that the grease was viscoelastic, and found the prediction reliably successful. You have now increased your belief in the viscoelastic theory, although you understand it is not certain, and therefore only probably true. This is an absolutely classic Bayesian approach. The frequentist would have insisted on taking the valve apart beforehand to determine what its behavioural distribution ought to be before venturing an opinion. 🙂
What you have done is exactly what I described. You started with observations which led you to a hypothesis, which you tested experimentally. When the experiment failed the hypothesis was rejected in favour of other, initially less likely ones. Having found a more successful predictor, you now hold a probabilistic belief in the explanation that generated it. Congratulations! You are a scientist!
I’m not sure if you misunderstood my use of the term “ratio”. Probabilities are sometimes expressed as ratios, but this is just a way of writing numbers – there is generally no significance to the representation. The ratios I was talking about were the ratios of different probabilities predicted by the competing hypotheses. Only one in a hundred players are undetectable cheats but only one in half a million get a royal flush just when they need it. It doesn’t matter that the probability of cheating without being noticed is low, what matters is that it is a thousand times greater than the alternative.
The notation I was using is conditional probability. P(A|B) is the probability of A given that B is true. P(A|B) = P(A and B) / P(B). Let’s say I throw two dice, and event A is “the first dice comes up 6” and event B is “the total score is 10”. Now the probability of the first dice coming up 6 is one in six, of course, but suppose I am told the total score was 10. Given this fact, what is the probability that the first throw was a 6? Well, the probability P(A and B) is that the first throw came up 6 and the second came up 4. That’s one in thirty-six. The probability P(B) is the probability of 6-4, 5-5, or 4-6. P(B) is therefore three in thirty-six. So the probability P(A|B) is one in three. Being told that the total was 10 makes it twice as likely that the first throw was a 6.
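The dice example is small enough to check by brute force; a minimal sketch enumerating all thirty-six outcomes:

```python
from fractions import Fraction

outcomes = [(d1, d2) for d1 in range(1, 7) for d2 in range(1, 7)]

# B: total is 10 (6-4, 5-5, 4-6); A and B: first die is 6 and total is 10 (6-4 only)
p_b = Fraction(sum(1 for d1, d2 in outcomes if d1 + d2 == 10), 36)
p_a_and_b = Fraction(sum(1 for d1, d2 in outcomes if d1 == 6 and d1 + d2 == 10), 36)

print(p_a_and_b / p_b)   # 1/3 = P(A|B), twice the unconditional 1/6
```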
The Oi is a particular possible outcome. The outcome might be throwing a six, or not getting the disease, or the gas burner staying lit. It just means the i’th outcome.
The evidence before the experiment may come from previous experiments. You do not start with a clean slate each time you enter the lab. It might also be based on the innate reasonableness of the theory. If your idea is initially a one in a million, and your experiment gives you 99.9% confidence, that still only gives it a one in a thousand chance of being right. If someone tells you there is a dog walking down the street outside, and you hear barking, you’ll probably accept that. If someone tells you there’s a lion walking down the street outside and you hear a roaring sound, you would probably want to check. It is a question of how much evidence you need to overcome your initial scepticism – extraordinary claims require extraordinary evidence. I can give more detailed examples if that isn’t clear.
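A sketch of the one-in-a-million example, treating “99.9% confidence” as a Bayes factor of a thousand (an assumption made just to put numbers on it):

```python
# Prior: one in a million. Experiment: multiplies the odds by a thousand.
prior_odds = 1e-6 / (1 - 1e-6)
posterior_odds = prior_odds * 1000

posterior_probability = posterior_odds / (1 + posterior_odds)
print(posterior_probability)   # ~0.001: still only about one in a thousand
```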
Putting your 5:14 into conventional terms was what I tried to do in my third block.
1. Think of it in terms of your gas burner. You don’t know how the valve works precisely, your only window into its function is experiment, and whether the gas stays lit. Does it matter that you turn the knob in exactly the same way each time you try it, or do sufficiently small variations not matter?
2. You formed a theory that led to you introducing new conditions you hadn’t tried before to see what happens. Is it necessary that the gas do exactly what it did before to prove your theory?
3. Now that you know how to get it to stay lit, to what existing knowledge is this unbreakably tied?
4. Your new theory was based on the similarity of the observed behaviour to previously observed behaviour in another context. In what way is this not conditioning on data? Supposing that you did genuinely come up with the idea out of the blue – are you thereby any more convinced that your viscoelastic grease theory is absolutely certain and true?
Not “practically testable”. But rather testable with a possibility of certain knowledge. As in falsifiable. From my youngest recollection in our family, “you can’t prove a negative” was a common rebuttal to many claims. As in “the Cubs will never win the World Series”. Someone was sure to point out that the statement was not provable and should be restated as a probability. Which could have been my first exposure to scientific notation.
As for the smoking example, smoking causing cancer is a model too complicated to falsify. There are two ways to get useful information (assuming that to be the goal in this case). You could build probability data, modeling it to a very useful degree of prediction accuracy. This may be a very efficient use of research money. Or you could seek knowledge, or should I say truth, by breaking the model into as many falsifiable hypotheses as can be made. Some likely examples:
Does cigarette smoke contain chemical ‘A’?
If yes, then – Can chemical ‘A’ come into contact with cell type ‘B’?
If yes, then – Can chemical ‘A’ mutate chromosomes in cell type ‘B’ when smoking?
If yes, then – Can that mutation be cancerous?
If yes, then – …
By hypothesising each question, each can be falsified (a sketch of the chain follows). This generates “true” knowledge that can be used not just in the smoking model, but in any case where it applies.
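Here is that chain written out as short-circuit logic; the claims and their results are hard-coded placeholders, not real findings:

```python
# Each step is a separately falsifiable hypothesis; one "no" falsifies the
# whole causal chain and tells us exactly where it broke.
steps = [
    ("cigarette smoke contains chemical A", True),
    ("chemical A can contact cell type B", True),
    ("chemical A can mutate chromosomes in cell type B when smoking", True),
    ("that mutation can be cancerous", False),   # a single falsified step
]

def test_chain(steps):
    for claim, result in steps:
        if not result:
            return f"falsified at: {claim}"
    return "not falsified (which is still not proof of the whole model)"

print(test_chain(steps))
```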
I don’t remember if I said it earlier, but I would not rate the hypotheses on some probabilistic calculation of predictive accuracy, because this only predicts the what, not the why. That is why the mythical astronomical models worked. I would rate them by how big a claim they make. The narrower they are, the less claim they make. (Duh.) The greater their scope, the greater the opportunity they had to be falsified and weren’t, and the more useful they are. And they are hypothesising the why. Which can bring the what right along with it. The other way around can’t.
Actually, the gas tap experimentation was a lot more than that. I had tested for temperature, performing the tests when the burner and its surroundings were cool, warm, and at full heat. I had tested with the flame lit and with it extinguished (using a lighter to test its final position). No safety devices at all; the range is a very old restaurant model.
I performed all of these tests looking for trends or patterns that would suggest other solutions (hypotheses). The data just plain didn’t offer any trends to extrapolate from. Unless stasis is a trend. Every effort showed the same response on release regardless of how it began. And I didn’t switch to conditioning on other data from viscoelastic observations. I switched to conditioning on my beliefs about viscoelasticity. Which could be utterly and preposterously wrong, because I have never tested them. And as for the suggested frequentist response, I was modeling a situation where the results had to come from testing a hypothesis, not looking at the answers in the back of the book. 🙂
Incidentally, in the time since I posted that comment, I have a different, similar, and more likely theory. Propane (we’re out in the country) has something in it that with time makes a very sticky buildup that feels and looks a lot like the liquid gasket compound used in reassembling gasoline engines. It can be dissolved by an alcohol that is cheap and toxic (ethanol?) that we were required to handle carefully when cleaning up RV appliance parts many years ago. I now believe that may be the source of the viscous feeling.
My probabilistic belief means that I’ve created a model, not a falsifiable theory unless I adopt a different testing method.
To my 5:14A WRT your 5:28P, the place I ran into problems was right away with P(Ri|H). I don’t want Ri|H to be a probability. I want it to predict a discrete single outcome. In short, I want the entire statement to have no probability language anywhere in it. Not even “one or zero”. I’m guessing this will go against every intuition and habit you have from your professional career, but can it be done? It seems like it should be simple, but nothing ever is, is it?
For purposes of this discussion, I’m waging war on probabilities. (Not really, but just to keep the discussion on probability knowledge v falsifiable knowledge. We seem to keep getting off onto probability knowledge v er… probability knowledge.)
Thanks. To sleep. Perchance to dream. ZZzzz….
Do some brands of bleach contain chlorine?
If yes, then – Can this chlorine get into the atmosphere?
If yes, then – Can the atmosphere be breathed in by people?
If yes, then – Can breathing in chlorine kill people?
If yes, then – …
The point of which, I like to think, is that a simple analysis along the above lines is not useful enough on its own.
Causal mechanism is part of many issues, but so are the rates at which the various things happen. The rates at which things happen also vary (with a Probability Density Function or PDF) depending on the conditions, how they vary in detail, and whether one knows them well enough to understand and perhaps counteract the undesirable extent.
I have not read the whole of this discussion of the philosophy of probability here for several reasons. Lack of interest is not one of them; I use this stuff nearly every day (as a visit to the publications on my website will show).
However (as for Jim and Mandrill on one of my postings above), my brain is suffering.
I recommend more brevity.
Best regards
Nigel,
Thank you. That helps me make a point.
The question would be “can chlorine contact human cell type ‘A’?”
Can one chlorine molecule kill human cell type ‘A’?
If no then
Can two chlorine molecules…
and so on, until one establishes the simple binary: will using chlorine as labeled cause detectable harm to humans?
If no then question answered.
If yes, then go on to demonstrate the action consequence path to establish amounts and vectors necessary to cause defined levels of harm.
Yes, the steps will be generally uncompletable. But to the direct extent that parts of the model fall outside of falsifiability, the model becomes weaker. It degenerates into higher and higher likelihoods of, at the most, correlation.
By not breaking a model into as many discrete provable steps as can be found, it becomes easier and easier to come to wrong, or at the very least unprovable, answers. It eventually deteriorates into something no more mathematically provable than the answer to “will Brazil place higher than third in the 2008 World Cup?”
We must defend the distinction between models and proofs. You are defending the ‘Green’ method of proof. This is how correlations somehow become ‘causes’. The stock response we have here when addressing a Green claim is “where is the proof?” This is an entirely different question than “where is the correlation?” And yet the two are often used, even here, interchangeably.
Case in point –
Hemline theory.
Hey, it works.
Midwesterner, you’ve done it again:
You are putting onto my keyboard statements that are the exact opposite of what I believe. And also the opposite of what I think I have typed.
As to the “one molecule, two molecule”, I think you are right, but it is not useful, in practice, to phrase it that way. Try this way.
(i) Is substance X known to be harmful to humans in large concentrations?
(ii) If so, what concentrations and exposures are seriously harmful? Show this by plotting a PDF curve of the probability of harm against concentration/exposure, or several PDFs for varying levels of harm, or a 3-D plot (a rough sketch of such a curve follows this list).
(iii) Do such serious exposures occur sufficiently frequently for us to need to do something more about it than we do at the moment?
(iv) Do the circumstances of such exposures have other benefits; if so what are they and how beneficial are they?
(v) If they do, where should we draw the line (in the middle ground) as to what concentrations/exposures should be permitted?
[And a lot more detail too, I’m sure.]
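As a rough sketch of step (ii), here is what such a curve might look like under an assumed logistic dose-response shape; the shape, midpoint and slope are purely illustrative, not data:

```python
import numpy as np
import matplotlib.pyplot as plt

concentration = np.linspace(0, 10, 200)   # arbitrary units
c50, slope = 5.0, 1.5                     # assumed midpoint and steepness

# Probability of serious harm as an assumed logistic function of exposure.
p_harm = 1.0 / (1.0 + np.exp(-slope * (concentration - c50)))

plt.plot(concentration, p_harm)
plt.xlabel("concentration / exposure (arbitrary units)")
plt.ylabel("probability of serious harm")
plt.title("Illustrative dose-response curve for step (ii)")
plt.show()
```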
I am sure the totality of your philosophy ought to be able to cope with practical means of measurement of quantity and its implications, including the quantity of chance of certain things happening, as well as the existence, or not, of causality specified in great detail, down to that for each count of individual molecules.
Best regards
Nigel,
Your questions i through v all give probabilistic answers. My point is that to have certain knowledge, rather than varying plausibilities of likelihood, requires questions with binary output. False, not false.
I hope I haven’t argued against the usefulness of probability studies. What I am arguing against is the universal usefulness of ‘usefulness’. It is good for predicting, but for understanding it can be worse than useless; it can be downright destructive.
My reference to the hemline theory was not intended entirely for humor. It is a very strong correlation. But do we now regulate skirt length in an attempt to control the market? I seriously believe that someone could, or even has, made a plausible case for this working. But it would be a model. Just a theory. Not certain. I believe that it is a correlational, not a causal, relationship. It is based on probabilities, not certainties.
Knowledge to predict and knowledge to understand are two different things. Prediction is served by probabilities. Understanding is served by certain knowledge.
Midwesterner writes:
No.
Or rather yes: if one views a probabilistic theory as certain knowledge at the level at which it applies.
Consider gas pressure etc. For many purposes, Boyle’s Law, and other bits of similar physics (the Gas Laws), are quite adequate.
However, if you look down a microscope at pollen suspended in air, you see Brownian Motion. This is not explained by Boyle’s Law.
Both of these are averaged effects of probabilities of molecules in air, at two different levels of detail, according to what is generally known as thermodynamics.
Two levels of detail which are themselves probabilistic “averages” of a physical model at an even lower level of detail. The lower level of detail doesn’t matter in certain circumstances and does matter in other circumstances (eg superfluidity), the latter being adequately explained by the theory of Statistical Mechanics.
Best regards
Nigel,
The brevity point is appreciated. I had tried for brevity before (possibly unsuccessfully), and it led to confusion. Given the difficulty of the philosophy, I was hoping to at least frame the debate with some shared terms of reference, which unfortunately take some time to explain.
Mid,
Even your attempt to break the smoking problem down runs into problems. Does smoke contain chemical A? Maybe. Fire basically chops molecules up into fragments, and as it cools they stick together in random combinations. Many chemicals are very likely to occur, some will be very rare. And there are thousands of possibilities. Too many alternative hypotheses, too much complexity. Problems 1 and 2 again.
You say you’re waging war on probability, but I don’t know why. Neither scientific enquiry nor everyday life operates in such an artificial environment. Even if you constructed such a theory, what would it describe?
If the point is to answer relativists who claim truth is subjective, and therefore all points of view allowable, there are far easier ways. I won’t go into that just now (in the interests of brevity!) but rest assured that I would support your objections to subjectivism, and consider it can be done on technical grounds.
Models usually are adequate. That is the nature of a ‘useful’ model. But like ‘Gods in the sky’ astronomical models, prediction and understanding are not the same.
I think visually. That makes it very difficult for me to put words to what I see but a metaphor presents itself.
I was driving in fog this morning. Probably about 1/8 to 1/4 mile visibility. I could see shapes in the fog. By their location and actions I ‘knew’ them to be other vehicles. Because it was rather light, many did not have lights on. So based on dark patches making familiar movements in the fog, I identified them as cars. Most of them were. To a very high degree actually. But some of them were not. They were varying artifacts caused by the surroundings and my own motion.
Some cars had lights on. Artifacts don’t have headlights. This is the difference between probability and certainty.
I view falsifiable data differently than a model. While it cannot be stated ‘true’, it can be stated false. As the area covered by a falsifiable claim grows, it is true that it is more likely correct. But when this likelihood is combined with the probabilities of other questions, the possibility of certain proof (to the negative) disappears. A negative to a specific question is lost in the net probability of the entire model.
This does not mean that saving tons of time to get a high probability prediction is wrong. It means it is wrong for some purposes. One of those purposes for which it is wrong is the search for understanding. By removing the possibility of any negatives, we remove the only possible source of certain knowledge.
Models are the only possible way to deal with problems as complicated as the smoking question. But what you are saying is “since we can’t be certain of the conclusion, we might as well not worry about the individual steps. After all, the net result is the same.” And I am saying it is not: any of those discrete steps that can be tested by falsification contribute to understanding, even though they may not have a net effect on the predictiveness of the entire model.
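The point about net model probability is easy to see numerically: chain together steps that are each individually well supported, and the model as a whole decays fast. A minimal sketch, with an assumed per-step confidence:

```python
# Ten steps at 95% confidence each leave the whole chain barely better than
# a coin toss; twenty leave it probably wrong.
step_confidence = 0.95
for n in (1, 5, 10, 20):
    print(n, round(step_confidence ** n, 3))
# 1 -> 0.95, 5 -> 0.774, 10 -> 0.599, 20 -> 0.358
```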