by Philip Pilkington
I thought it might be worthwhile to follow up my previous posts on probabilities with one in which I first clarify one or two points and then show why certain people are attracted to Bayesian statistics. I draw on a paper by the philosopher Clark Glymour that Lars Syll (who else?) drew to my attention. The following post will be an extended comment on Glymour’s excellent paper.
Bayesian statistics rests on a subjective interpretation of probability. What that means is that it assumes we hold degrees of belief about certain things and that the probabilities are contained within these degrees of belief – they are not, as it were, “out there” and objective. The key to the method, however, is that we can articulate these degrees of belief in numerical terms. Glymour writes (note that he refers to “Bayesian subjectivists” as “personalists” in what follows):
We certainly have grades of belief. Some claims I more or less believe, some I find plausible and tend to believe, others I am agnostic about, some I find implausible and far-fetched, still others I regard as positively absurd. I think everyone admits some such gradations, although descriptions of them might be finer or cruder. The personalist school of probability theorists claim that we also have degrees of belief, degrees that can have any value between 0 and 1 and that ought, if we are rational, to be representable by a probability function. Presumably, the degrees of belief are to covary with everyday gradations of belief, so that one regards a proposition as preposterous and absurd just if his degree of belief in it is somewhere near zero, and he is agnostic just if his degree of belief is somewhere near a half, and so on. According to personalists, then, an ideally rational agent always has his degrees of belief distributed so as to satisfy the axioms of probability, and when he comes to accept a new belief he also forms new degrees of belief by conditionalizing on the newly accepted belief. There are any number of refinements, of course, but that is the basic view. (p69)
Okay, so how do we come up with the numerical estimate that transforms what Glymour calls a non-numerical “grade of belief” into a properly numerical “degree of belief” between 0 and 1? Simple. We imagine that we are given the opportunity to bet on the outcome. Being given such an opportunity will, so the argument goes, force us to “show our cards”, as it were, and assign a properly numerical degree of belief to some potential event.
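To make the elicitation mechanics concrete, here is a minimal sketch of how a betting quotient is conventionally read as a degree of belief. The functions and the numbers below are my own illustration, not anything from Glymour’s paper: the idea is simply that the stake-to-prize ratio at which a bettor is indifferent is taken to be his degree of belief, because at exactly that ratio the expected gain of the bet is zero.

```python
# A minimal sketch (hypothetical numbers): reading a "degree of belief"
# off the betting quotient a person is willing to accept.

def expected_gain(degree_of_belief, stake, prize):
    """Expected gain of paying `stake` for a ticket that pays `prize` if the event occurs."""
    return degree_of_belief * (prize - stake) + (1 - degree_of_belief) * (-stake)

def implied_degree_of_belief(stake, prize):
    """The betting quotient stake/prize, which the personalist reads as a degree of belief.

    At exactly this value the expected gain of the bet is zero, i.e. the
    bettor is indifferent between taking the bet and refusing it.
    """
    return stake / prize

# Suppose the most I will pay for a ticket that returns 10 euros if the
# phone call comes is 2 euros. The personalist reads this as a degree of
# belief of 0.2 in the event.
q = implied_degree_of_belief(stake=2, prize=10)
print(q)                                    # 0.2
print(expected_gain(q, stake=2, prize=10))  # 0.0 -- the point of indifference
```

On this view the 2 euro ticket price “reveals” a degree of belief of 0.2 – and it is precisely this supposed revelation that I dispute below.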
Let’s take a real example that I used in the comments of my last post: what are the chances that a woman will call me tomorrow morning between 9am and 11am, and can I assign a numerical probability to this? I would say that I cannot. “Ah,” the Bayesian will say,
“but you can. We will just offer you a series of bets and eventually you will take one and from there we will be able to figure out your numerical degree of belief in probabilistic terms!”
This is a similar process to a game teenage boys play. They come up with a disgusting or dangerous act and then ask a friend how much money he would demand to do it. Through a sort of bargaining process they arrive at the amount for which the person in question would undertake the act. They then discuss amongst themselves the relative price each has put on said act.
I think this is a silly load of old nonsense. The assumption here is that locked inside my head somewhere – in my unconscious mind, presumably – is a numerical degree of belief attached to the probability of any given event happening. The claim is that I am not consciously aware of it, but that the process of considering possible wagers brings it out into the open.
Why do I think that this is nonsense? Because I do not believe there is such a fixed degree of belief with a numerical value sealed into my skull. Rather, I think that the wager I eventually accept will be largely arbitrary and subject to any number of different variables: from my mood, to the manner in which the wagers are posed, to the way the person proposing the wager looks (casinos don’t hire attractive women for nothing…).
Back to my example: what are the chances that a woman will call me tomorrow morning between 9am and 11am? Well, not insignificant, because I am supposed to be meeting a woman tomorrow morning at 11.30am. Can I give this a numerical estimate? Well, it certainly would not be 0.95. Nor would it be 0.0095. But to ask me to be any more precise would be, in my opinion, an absurd undertaking. And if you convinced me to gamble on it, the wager I would be willing to accept would be extraordinarily arbitrary.
Having got this far in the argument, I already suspect that there is some emotional trickery at play. What follows only confirms this. Let us play along with our Bayesian for a moment, even though what they are saying is, psychologically speaking, obvious nonsense. Glymour writes:
Let us suppose, then, that we do have degrees of belief in at least some propositions, and that in some cases they can be at least approximately measured on an interval from 0 to 1. There are two questions: why should we think that, for rationality, one’s degrees of belief must satisfy the axioms of probability, and why should we think that, again for rationality, changes in degrees of belief ought to proceed by conditionalization? One question at a time. In using betting quotients to measure degrees of belief it was assumed that the subject would act so as to maximize expected gain. The betting quotient determined the degree of belief by determining the coefficient by which the gain is multiplied in case that P is true in the expression for the expected gain. So the betting quotient determines a degree of belief, as it were, in the role of a probability. But why should the things, degrees of belief, that play this role, be probabilities? Supposing that we do choose those actions that maximize the sum of the product of our degrees of belief in each possible outcome of the action and the gain (or loss) to us of that outcome. Why must the degrees of belief that enter into this sum be probabilities? Again there is an ingenious argument: if one acts so as to maximize his expected gain using a degree-of-belief function that is not a probability function, and if for every proposition there were a possible wager (which, if it is offered, one believes will be paid off if it is accepted and won), then there is a circumstance, a combination of wagers, that one would enter into if they were offered, and in which one would suffer a net loss whatever the outcome. That is what the Dutch Book argument shows; what it counsels is prudence. (p71 – My Emphasis)
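For concreteness, here is a minimal sketch of the sure-loss construction the Dutch Book argument relies on. The numbers are invented for illustration and are not from Glymour’s paper: a bettor whose degrees of belief in an event and its negation sum to more than one regards each of two bets as fair taken individually, yet taking both guarantees a loss however the event turns out.

```python
# A minimal Dutch Book sketch (hypothetical numbers). The bettor's degrees of
# belief violate the probability axioms: belief(A) + belief(not A) = 1.2 > 1.
# He therefore regards each bet below as fair, yet buying both tickets
# guarantees a loss whichever way the event turns out.

def bet_payoff(price, prize, event_occurs):
    """Net payoff of paying `price` for a ticket that pays `prize` if the event occurs."""
    return (prize if event_occurs else 0) - price

belief_A = 0.6      # degree of belief that A occurs
belief_not_A = 0.6  # degree of belief that A does not occur (a probability would be 0.4)

prize = 10
price_A = belief_A * prize          # 6 euros: his "fair" price for a ticket on A
price_not_A = belief_not_A * prize  # 6 euros: his "fair" price for a ticket on not-A

for a_occurs in (True, False):
    net = (bet_payoff(price_A, prize, a_occurs)
           + bet_payoff(price_not_A, prize, not a_occurs))
    print(f"A occurs: {a_occurs}, net result: {net}")
# Prints a net result of -2.0 in both cases -- the guaranteed loss.
```

Whichever way the event falls, the bettor has paid 12 for tickets that can only ever return 10 – a guaranteed loss of 2. That guaranteed loss is the “absurdity” the personalist says rational degrees of belief must be structured to avoid.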
Yep, that’s right. The Bayesian assumes that if one tries to maximise one’s gains in the betting game while deploying degrees of belief that are not probabilities (my mea culpa above suggests I would likely be guilty of this), then one will have one’s money taken over and over again, like a sucker at a three-card Monte stall. The Bayesian edifice rests on a threat dressed up as a thought experiment. Glymour continues:
The Dutch Book argument does not succeed in showing that in order to avoid absurd commitments, or even the possibility of such commitments, one must have degrees of belief that are probabilities. But it does provide a kind of justification for the personalist viewpoint, for it shows that if one’s degrees of belief are probabilities, then a certain kind of absurdity is avoided. (p72)
Yes, that’s right. What the Bayesians do is set up a thought experiment that probably bears little relation to the real world, and then they say
“Well, within this experiment if you don’t align your degrees of belief with probabilities you will end up in absurd situations in which you are constantly robbed and you’ll look like a total clown”.
This is a rhetorical trick, very similar to the one played by marginalists who claim, for example, that one should be selfish to maximise one’s gain, or that firms that do not align with market forces always capsize under the weight of competition.
More than that, though, it is a particularly neurotic fantasy. Where the tired old marginalist fables play to a person’s selfishness, Bayesianism plays to a person’s insecurities. It assumes that the world is effectively out to rob you if you do not fall into line with the Bayesian mode of thinking – as manifestly unrealistic as this mode of thinking is and as unproductive as it may prove. The world becomes a “bad place”, and only by thinking in line with the Bayesian doctrine can one avoid its evils. I have in the past heard some compare Bayesianism to a religion. Now I understand why. Although it is less a religion and more a cult, since these are precisely the sort of tricks that cults use to brainwash their followers.