
Humanist Discussion Group, Vol. 13, No. 391.

Centre for Computing in the Humanities, King's College London

<http://www.princeton.edu/~mccarty/humanist/>

<http://www.kcl.ac.uk/humanities/cch/humanist/>

Date: Thu, 10 Feb 2000 07:51:55 +0000

From: "Osher Doctorow" <osher@ix.netcom.com>

Subject: two alternatives in decision-making

Dear Willard McCarty:

I'd like to entitle this contribution Computer Testing of Two Alternative Theories in Decision-Making. I'm going to translate everything into ordinary Humanist English here, under the plausible conjecture that if it can't be expressed that way, it's probably not real.

Microsoft decision-making algorithms and programs are based on Bayesian conditional probability algorithms. For the non-specialist, a simple illustration of conditional probability is the probability that it will rain given that it is snowing, which is calculated as the probability of both rain and snow divided by the probability of snow. This is symbolically written P(rain given snow) or more simply P(rain/snow), where / means given. Notice that this is undefined when the probability of snow, symbolically P(snow), is 0, since you can't divide by 0 in mathematics.
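A minimal sketch in Python may make the arithmetic concrete; the function name and the probability values are my own inventions for illustration, not anyone's actual code:

    def bayesian_conditional(p_a_and_b, p_a):
        """Bayesian conditional probability: P(b given a) = P(a and b) / P(a).

        Undefined when P(a) = 0, because division by zero is impossible.
        """
        if p_a == 0:
            raise ZeroDivisionError("P(b given a) is undefined when P(a) = 0")
        return p_a_and_b / p_a

    # Invented values: P(rain and snow) = 0.2, P(snow) = 0.25
    print(bayesian_conditional(0.2, 0.25))  # 0.8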

Logic-based probability (LBP), which the author has developed since 1980, assigns a probability to the logical conditional (which is unrelated to conditional probability) if a then b, which can be written a --> b, or, with the above example, if snow then rain, i.e., snow --> rain. So we calculate the probability that if it snows then it rains. It turns out purely mathematically that this probability is the probability of rain and snow minus the probability of snow plus 1, and it is denoted P(snow --> rain) = P(snow and rain) - P(snow) + 1, where P( ) means probability of.
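Again as a sketch under the same invented values, the LBP formula is a one-liner with no division at all:

    def lbp_conditional(p_a_and_b, p_a):
        """Logic-based probability of the conditional a --> b:
        P(a --> b) = P(a and b) - P(a) + 1.
        Defined for every value of P(a), including 0.
        """
        return p_a_and_b - p_a + 1

    # Invented values: P(snow and rain) = 0.2, P(snow) = 0.25
    print(lbp_conditional(0.2, 0.25))  # 0.95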

If you compare the two, Bayesian P(rain/snow) and LBP P(snow --> rain), they differ algebraically only in replacing division in the former by subtraction in the latter, noting that the occurrence of 1 ensures that the result is a single probability in P(snow --> rain). The Bayesian P(rain/snow) does not contain a 1 in its formula, and therefore turns out not to be a single probability but a ratio of probabilities, which has the disadvantage of being undefined when the denominator P(snow) is 0. Of course, the actual computed results for the two alternative types will be very different in most situations, because the results of division and subtraction differ enormously.
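To see how far apart division and subtraction can drift, here is a small numerical experiment; the joint probability is arbitrarily fixed at half of P(snow), purely for illustration:

    # The Bayesian ratio stays fixed at 0.5, while the LBP value
    # climbs toward 1 as P(snow) shrinks.
    for p_snow in (0.9, 0.5, 0.1, 0.01):
        p_joint = p_snow / 2            # invented: P(snow and rain)
        bayesian = p_joint / p_snow     # always 0.5 here
        lbp = p_joint - p_snow + 1      # 0.55, 0.75, 0.95, 0.995
        print(p_snow, bayesian, lbp)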

Now comes the computer. Microsoft wants very much to market its (statistical) decision algorithms, based on Bayesian conditional probability, also called Bayesian probability or conditional probability. So it does just that. The author, on the other hand, develops a computer algorithm using LBP. Which do you think works much better? The answer is LBP.

Why does LBP work much better than Microsoft's Bayesian version? After all, they differ algebraically only in changing division to subtraction. But that is precisely the point. Since you cannot divide by 0 in Microsoft's division-based algorithm, you miss very rare events (which, believe it or not, are assigned probability 0 under certain general assumptions in probability and statistics - technically, continuous probability distributions, if you want to know). So Microsoft cannot deal with very rare events (genius, great inventions, catastrophes, strokes of enormous luck, etc.). This does not happen with LBP, because it has no division. It also turns out that there are some very common events which have probability 0, believe it or not, namely, what are called lower-dimensional objects in ordinary 3-dimensional space under the above technical continuous probability assumption. For example, a line or a piece (segment) of a line in 3-dimensional space has dimension 1 because it only has length; a plane or planar object like a rectangle has dimension 2 because it only has length and width; and a single point in 3-dimensional space has dimension 0 because it has neither length nor width nor depth. All of these objects have probability 0. Since it can be shown that the surface of any 3-dimensional object (for example, a person's skin, or the surface of the earth) is 2-dimensional, it also has probability 0. Therefore, Microsoft's program cannot deal with the surface of the earth, the surface of a person's body (the skin), the surface of a person's internal organs, the surface of a cell, etc.
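The divide-by-zero point can be checked directly. Note that when P(snow) = 0, P(snow and rain) must also be 0, and the LBP value comes out as 1 (a conditional whose antecedent never occurs is vacuously certain):

    p_snow = 0.0                 # a probability-0 event
    p_joint = 0.0                # P(snow and rain) can be no larger
    # p_joint / p_snow           -> ZeroDivisionError: the Bayesian ratio fails
    print(p_joint - p_snow + 1)  # the LBP value is defined: 1.0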

The most remarkable result of all this is that Microsoft's programs only give comparatively trivial decisions. After all, if you cannot handle rare or unexpected events, and you cannot deal with the surfaces of physical objects, and you cannot deal with centers of physical objects (which are points of dimension 0), etc., what else is there?

Well, there is still enough left to make somewhat obscure predictions. After all, there really is such a thing as conditional probability, the probability of rain given snow for example. So Microsoft can compute (in theory, anyway) the probability of rain given snow, and if you translate that into business or economic or political or other decisions, it can compute various probabilities that tend to be on the mediocre level of conceptual interest and importance.

The really conceptually interesting and important decisions come from LBP probability algorithms. For example, LBP can tell you what decisions to make if a rare meteor hits a spacecraft, if a politician is assassinated, if a star is born, and so on ad infinitum. One important qualification should, however, be noted. Microsoft's Bayesian probability ratios tell you the probability of rain if the event of snow is fixed. In other words, no pun intended, if you freeze the event that it is snowing at a moment in time, and then ask what the probability of rain occurring is for that level of fixed snowing, Microsoft's Bayesian probability ratio will give you an accurate result. LBP's probability P(snow --> rain) gives you something entirely different. Instead of telling you the probability that it will rain for a fixed or frozen level of snow, it tells you the probable influence which snow will have on rain. On a scale from 0 to 1, which is the standard probability scale, an LBP probability of snow --> rain of value 1 means that snow has maximum possible influence on rain, and so on. In analyzing decisions, it is obviously much more important to know how much one event influences another than to know what happens to one event when a second event is fixed or frozen, but there are conceivable scenarios (like simple gambling with cards) where both are useful to know.
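As a final sketch of the influence reading (values again invented): if rain accompanies snow whenever snow occurs, then P(snow and rain) equals P(snow), and the LBP value reaches the top of the scale:

    p_snow = 0.25
    p_joint = p_snow             # rain always accompanies snow
    print(p_joint - p_snow + 1)  # 1.0: maximum influence on the 0-to-1 scale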

In the above situations, computer algorithms and the results of computer programs, even in thought experiments or Gedanken experiments, helped to clarify the nature of, and the preferences between, two competing probability types. Hopefully, readers will be interested in following up or adding to these examples.

