. . . And Still We Evolve
A Handbook on the History of Modern Science

[This handbook, which has been prepared by Ian Johnston of Malaspina University-College, Nanaimo, BC (now Vancouver Island University), for Liberal Studies students, is in the public domain and may be used without charge and without permission, provided the source is acknowledged, released May 2000. For comments and questions, please contact Ian Johnston]


Section Four: The Beginnings of Modern Probability Theory

Introduction

Probability, as the concept is most commonly understood in everyday language, is a mathematical expression of the relationship between a particular outcome of an event and the total number of possible outcomes. In its most frequent form, say in a simple gambling game like the flip of a coin, probability becomes the ratio of the chances of winning to the total number of possible outcomes (expressed in a number of ways: 50-50, a chance of 1 out of 2, a 50 percent chance, a .5 probability, and so forth).

As a systematic mathematical study, probability emerged in the 17th century, and even a cursory look at its history in the 18th and 19th centuries provides a fascinating study of the shifting attitudes and hopes of the Enlightenment for clarity and certainty, not just in science, but in all areas of life. In some respects, a study of this topic is more revealing than a study of most other areas of scientific inquiry, because the mathematical probabilists tended to be particularly ambitious about extending their analytical methodology into many different subjects. Most were strict determinists, keen to find the precise mathematical formulas which would enable them to clarify moral, legal, and scientific disputes with certainty. The difficulties they encountered and their responses to them provide an illuminating glimpse into some of the main trends of Enlightenment thinking about the uses and abuses of the new scientific rationality.

Although the attempt of the mathematicians eventually failed to realize all that the most optimistic of them hoped, out of this development arose something central to our present experience, the application of mathematical probabilities to society as a whole, a trend which has fundamentally changed the way we think about ourselves, in some ways just as significant as the changes brought about by historical science.

 

Chance and Risk

The concepts of chance, fortune, and luck are as old as the first dice games, and for centuries human beings had speculated about probabilities in connection with legal questions of evidence and contracts and, at times, insurance schemes (especially for trading voyages). However, not until the 17th century did these concerns lead to attempts to understand the principles mathematically.

Risk taking, for example, had a long tradition before the advent of studies in the mathematics of probability. The Babylonians had had forms of maritime insurance; the Romans had annuities (exchanges of a lump sum in return for regular payments over a long time--the risk being that the person taking out the annuity will not live to collect the lump sum); and gambling had existed since time immemorial.

The precise origins of the interest in the mathematics of probability in the 17th century are a contentious issue involving disputes about the various contributions of astronomy, fine arts, gambling, and insurance. The issue is difficult because the development of mathematical probability did not require any startlingly new discoveries in mathematics itself (something which might conveniently provide an obvious starting point).

Nor did the beginnings of mathematical probability have to await the development of some new theory of chance. The prolonged religious wars of the 17th century obviously contributed a sense that certainty (at least in religious issues) was not available. But throughout this time and the 18th century, the central doctrine of causation held that all things which happened were strictly determined (mechanically or in some other fashion), and thus chance was merely apparent, a manifestation of human ignorance. Since causeless events were unimaginable, there was no concept of genuine randomness (1).

One obvious source of the increasing need for attention to probability theory arose from matters concerning contract equity in dealings involving elements of chance (e.g., games, annuities, joint speculative ventures), those legal agreements which rested upon the value of an uncertain future prospect.  With the rising commercial spirit of the 17th century, especially those endeavours involving overseas trade, such issues were crucial.  Hence, much of the early development of this subject concerned expectations, the numerical worth of a future outcome, rather than the probability itself.  In other words, the analysis focused on the value of the successful outcome, rather than on the mathematical chances that the outcome would be successful (more about this later).

The problem of risk-taking ventures was an important business issue also in challenging the church's attitude to interest rates and profit, especially in aleatory contracts (literally "contracts which depend on a roll of the dice," i.e., which involve a great deal of risk). Those who took such great risks, it was argued, should be able legitimately to claim a certain profit, what was known in the trade as "the price of the peril."

There were no firm mathematical rules for translating risk into compensation, other than the ancient notion that the financial return should be somehow proportional to the amount invested (the Rule of Fellowship) and to the length of time involved. Lacking certain agreed-upon or mathematical rules and reliable or complete statistics, jurists, businessmen, and theologians frequently debated the issues in terms like the following: "What is the sum proportioned to the indefinite and uncertain gain which pledges as backing for a Society of Merchants?" In most cases, the appeal was to experience, rather than to mathematics.

In the debates on these and other issues (lotteries, games of chance), the emphasis fell on keeping the expectations fair. The idea was that if all the partners to an enterprise or the players in a game had an equal expectation of the outcome, then the enterprise was fair.

The state in which all of them share, with the same uncertainty of the event and with the same right, having rendered their condition equal, also makes their agreement just. (Jean Domat, 1689)

Equality of expectations rather than calculations of risk by a computation of probable outcomes was the operative principle, just as it still is in the minds of most gamblers with lottery tickets: the reason for purchasing the ticket is the equal expectation; few purchasers, if any, have any idea what the probabilities for a successful draw are.

 

Early Calculations of Probability

The traditional beginning of modern probability theory is the exchange of letters in July and October 1654 between Blaise Pascal (1623-1662) and Pierre Fermat (1601-1665), two French mathematicians. The letters were written in response to the following problem: Two players, A and B, each stake 32 pistoles on a game which goes to the first player to win three points. When A has 2 points and B has 1 point, the game is interrupted and cannot continue. How should the total stakes of 64 pistoles be fairly distributed?

Pascal divided the solution into two parts. Whatever the outcome of the game, A should have at least one-half of the total (32 pistoles). The uncertain expectation therefore concerned only the other half, and A had a 50 percent chance of winning that. Hence, the fair distribution would be that A received 48 pistoles (the certain 32 plus one half of the uncertain 32), and B received 16 pistoles.
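Pascal reasoned by expectations, but the same division falls out of a modern probability calculation like the one in note (2). The following Python sketch is an illustration of that modern reading, not Pascal's own method: it computes A's chance of winning the interrupted game by recursing over the remaining fair plays.

```python
from fractions import Fraction

def win_probability(a_needs: int, b_needs: int) -> Fraction:
    """Chance that A wins a fair game in which A still needs a_needs
    points and B still needs b_needs points."""
    if a_needs == 0:
        return Fraction(1)   # A has already won
    if b_needs == 0:
        return Fraction(0)   # B has already won
    # On each play, A or B takes the next point with probability 1/2.
    return Fraction(1, 2) * (win_probability(a_needs - 1, b_needs) +
                             win_probability(a_needs, b_needs - 1))

stakes = 64
p_a = win_probability(1, 2)     # A needs 1 more point, B needs 2
print(p_a, p_a * stakes)        # 3/4, so A's fair share is 48 pistoles
```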

In keeping with the legal traditions of his time, Pascal's emphasis here is on expectation and equality between the two players, the central point of his analysis, rather than on the calculation of the probability outcomes and their associated values (2).  Hence, he eliminated probability from as much of the problem as possible, using certain gain and equity in its place (3).

The Pascal-Fermat exchange remained unpublished (until 1679).  The credit for the first published work in mathematical probability, therefore, went to two Dutch mathematicians, Christiaan Huygens (1629-1695), who had heard of the Pascal-Fermat problem and solutions, and Johan de Witt (1625-1672). Like Pascal's letters, the work of Huygens and De Witt continued to stress the mathematics of expectations. Huygens's book On Ratiocination in Dice Games (1657) contained the first published study of mathematical expectations, a series of analyses of the different expectations in various games of dice (4).

For Huygens a fair game was one in which each of the players had an equal expectation, so that the game did not work to the disadvantage of anyone who had purchased entry into the risk.  Hence, if the game was fair, the players should be able willingly to exchange expectations. 

Once again the modern order of reasoning regarding expectations was inverted: instead of the game being fair because the probabilities (and therefore the expectations in a symmetric game) are equal for all players, the probabilities are (implicitly) equal because the game is assumed fair--and the game is fair because the conditions of the players are indistinguishable, as shown by their willingness to exchange expectations. (Daston 26)

De Witt, in a series of letters written in 1671, extended the mathematics of probability to an area outside games of chance. He attempted to analyze annuities on the basis of mortality (to correlate the probability of death and age). Lacking statistics, De Witt simply assumed equal probability for dying in any six-month period from the ages of three to fifty-three.

This early emphasis on expectations rather than on probabilities, deduced in the absence of adequate statistics, was unable to deal with more complex situations involving different degrees of risk (i.e., situations where the players were not equal). In legal cases where such complications arose, the decision was left to the practical experience of the judge.

 

Degrees of Certainty

In addition to concerns about games of chance and contracts, early probability mathematicians turned their attention to something apparently more subjective, the degrees of certainty in legal evidence (i.e., the mathematics of moral and legal proofs). The evaluation of evidence along a graduated numerical scale had for a long time been an element in Roman and Church law. It was, therefore, natural enough that this area should provide problems inviting mathematical analysis, especially by those seeking legal reforms, so that one could have a surer and more persuasive way of marking the stages between ignorance, uncertainty, and certainty.

The first published statement of principles in this area was Ars conjectandi (The Art of Conjecture) by Jakob Bernoulli (1654-1705), published posthumously in 1713 and the most important work on the subject in the 18th century. Bernoulli interpreted probability as a state of mind rather than as a state of the world (for, in line with standard beliefs of the time, there was no randomness in the world of nature: "All things which exist or are acted upon under the sun--past, present, or future things--always have the greatest certainty") (5). As a mental state, probability, in his definition, "is a degree of certainty and differs from it as a part from the whole."  The application of this definition marked a shift away from expectations to probabilities and from equiprobable outcomes to measures of certainty.

The most famous legacy of Bernoulli's book was Bernoulli's Theorem, which states that the probability that observed frequencies come close to a priori mathematical calculations of probabilities increases with the number of trials (e.g., the probability that the observed frequency of any one face of a six-sided die comes close to one-sixth, the mathematical calculation of the probable outcome, increases as the number of rolls of the die increases). This principle, according to Bernoulli, promised to be a reliable new way of ascertaining more about the natural world:

If thus all events through all eternity could be repeated, by which we would go from probability to certainty, one would find that everything in the world happens from definite causes and according to definite rules, and that we would be forced to assume amongst the most apparently fortuitous things a certain necessity.

This theorem linked a priori theories with observations and implicitly endorsed the Baconian view that if nature operated by strict patterns (laws), then repeated observations and experiments would ultimately reveal such laws with certainty (6).
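A brief simulation makes Bernoulli's point concrete. The following Python sketch (a modern illustration, obviously not Bernoulli's own procedure) rolls a fair die ever more often and watches the observed frequency of one face settle toward the a priori value of one-sixth:

```python
import random

random.seed(1)
for trials in (10, 100, 10_000, 1_000_000):
    # Count how often a six comes up in `trials` rolls of a fair die.
    sixes = sum(1 for _ in range(trials) if random.randint(1, 6) == 6)
    print(f"{trials:>9} rolls: observed frequency {sixes / trials:.4f}")
```

The more trials, the more tightly the observed frequency clusters around 1/6 (about 0.1667), which is exactly the convergence Bernoulli's Theorem describes.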

Bernoulli's main concern was an old one: How credible is a particular legal testimony?  When should we withhold or grant assent?  Bernoulli attempted a mathematical analysis of different legal proofs, taking into account a wide variety of factors (responses under questioning, murder weapons, witness reports, confessions, and so on). His method was based on the assumption that in rational decision making certainty was possible, so there must be a reliable mathematical way to ascertain the degree of certainty in any particular circumstance.

However, for obvious reasons having to do with the numerical values ascribed to particular factors, Bernoulli's method remained more an art than a science (relying on the practical judgment of the person carrying out the analysis). Nevertheless, his major purpose is a clear indication of just how optimistic some thinkers were about bringing complex human judgments under the clear authority of mathematics.

The application of this sort of mathematical analysis to human behaviour, for a "good sense reduced to a calculus," was associated with the growing faith in human nature as potentially reasonable and regular. In fact, to conduct one's affairs in accordance with probabilistic expectation became, in the 18th century, the mark of a reasonable person. Hence, it was natural for the philosophes, for example, to extend the "art of conjecture" into many facets of civilized society as part of their reform program in social and moral life.

 

Theology and Probability

In the late 17th century the development of natural theology and empirical science fostered the idea of the uncertainty of all human knowledge (the imperfections of sense experience were clearly an important limitation on the reliability of observations and experiments, as Bacon had announced). To save human belief from total skepticism without having to revert to the dogmatic deductive certainty of the medieval scholastics, certain thinkers (including Robert Boyle) developed the concept of inferior degrees of physical and moral certainty.

In this new definition of rationality, the proof of a hypothesis did not have to have the full rigour of a mathematical proof (like a theorem of Euclid). What now mattered was a particular level of certainty, a qualified belief, such that a reasonable person ("who hath but an ordinary capacity, and an honest mind; which are no other qualification than what are required to the institution of men, in all kinds of Arts and Sciences whatsoever") would accept it and act upon it in his or her daily life (7). The problem of faith thus became a matter not of firm proof but of sufficient evidence, in Boyle's words, "what degree of evidence may reasonably be thought sufficient, to make the Christian religion thought fit to be embraced."

The best known example of this newly qualified reasonable belief is Pascal's Wager (1669), in which he argued that since God either existed or He did not, since the penalties for being wrong were much greater for non-belief, and since the rewards for being right were very much greater for belief, it was much more reasonable to believe in Christianity than to reject it.

Pascal's analysis here translated the decision to accept or reject Christianity into the language of decisions about business or games of chance (i.e., into probabilistic language). Since all human knowledge was in principle imperfect and since one had to act on the basis of inadequate knowledge, the rational thing to do was to maximize one's expectations in such a potentially hazardous circumstance. Given his assumptions about the nature of God, therefore, a faith based upon such a procedure, Pascal argued, was not against reason (8).

Versions of Pascal's argument had appeared earlier, notably in John Tillotson's sermon, "On the Wisdom of Being Religious" (1664).  Tillotson took atheists to task for their imprudence in risking their "eternal interest," as compared to the believer, who "ventures only the loss of his lusts." 

So that, if the arguments for and against a God were equal, and it were an even question, Whether there were one or not? Yet the hazard and danger is so infinitely unequal, that in point of prudence and interest every man were obliged to incline to the affirmative; and whatever doubts he might have about it, to chuse the safest side of the question, and to make that the principle to live by. For he that acts wisely, and is a thoroughly prudent man, will be provided against all events, and will take care to secure the main chance whatever happens.

Reasonableness and prudence here were matched to concepts of expectations, rendered in the mathematical terms of gambling (and hence universally clear to everyone). The aim, among other things, was to produce a ready consensus on all questions, even the most contentious, in a way that traditional rhetoric could not (9).

 

The St. Petersburg Paradox

The assumption here that reasonable behaviour (the conduct of the prudent intelligent person) was a universal standard which matched mathematical calculations of expectations was a widespread belief in the 18th century and persisted well into the 19th. The great French mathematician, the Marquis de Laplace, in his Philosophical Essay on Probabilities (1814), summed up the principle as follows:

It is seen in this essay that the theory of probabilities is at bottom only common sense reduced to calculus; it makes us appreciate with exactitude that which exact minds feel by a sort of instinct without being able ofttimes to give a reason for it.

The identification of mathematical probability with reasonable behaviour, however, led to problems when mathematical results did not fit good sense.

The famous St. Petersburg paradox, for example, imagined a game with infinite expectations. Two players, A and B, wager over coin tosses. If heads does not turn up on the first toss, B gives A two ducats; if heads does not turn up on the first or second toss, B gives A four ducats; if heads does not turn up on the first, second, or third toss, B gives A eight ducats, and so on, until heads finally appears. Thus, A's winnings are 2^(n-1) ducats, where n is the number of tosses it takes for the coin to land heads up for the first time. Now, what should A pay in order to take part in the game in a manner that is fair to B?

The problem stems from the expectations. Player A has large and potentially infinite expectations, since there is a possibility of an unbroken series of tosses which turn up tails. And yet no reasonable person in A's position would pay very much to take part in such a game. Hence, the standard mathematical analysis in terms of expectations does not match common sense reasonable behaviour (thus, the paradox).
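The divergence is easy to verify: heads first appears on toss n with probability 1/2^n, and that outcome pays 2^(n-1) ducats, so every possible toss contributes exactly half a ducat to the expectation, and the sum grows without bound. The Python sketch below checks the truncated sum and then simulates actual play, showing the modest winnings a real player should anticipate (a modern illustration of the paradox, not any 18th-century calculation):

```python
import random

# Truncated expectation: each term (1/2**n) * 2**(n - 1) equals 1/2,
# so the expectation grows by half a ducat per allowed toss.
for max_tosses in (10, 20, 40):
    e = sum((1 / 2**n) * 2**(n - 1) for n in range(1, max_tosses + 1))
    print(f"expectation capped at {max_tosses} tosses: {e} ducats")

def play() -> int:
    """Toss until heads; pay 2^(n-1) ducats if heads first shows on toss n."""
    n = 1
    while random.random() >= 0.5:   # tails keeps the game going
        n += 1
    return 2 ** (n - 1)

random.seed(2)
games = 100_000
print("average winnings per game:", sum(play() for _ in range(games)) / games)
```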

This slight (although very well known) example illustrated a tension between equitable games and prudent behaviour. Equity would seem to demand a large stake from A (since the potential risk for B is high indeed), and yet, if A is reasonable, prudence would seem to insist upon a relatively small stake.

The significance of this paradox was that it posed an important problem for those trying to quantify the behaviour of the rational person, whose conduct might set a moral standard. Anything which threatened the close union of mathematical expectation and prudent behaviour obviously challenged the entire concept. Hence, a number of famous mathematicians addressed the question.

In his published resolution of this paradox (in 1738) Daniel Bernoulli (1700-1782) redefined the notion of expectation from a legal notion of equity to, in part, an economic one (10). In deciding to play or not (and how much to pay), Player A needed to balance his mathematical expectations against his specific financial circumstances, for the decision had to take into account the fact that the situations of a rich person with money to spare and of a poor person with very little money were not the same.  Thus, the calculus of common sense had to take into account, not just legal matters (like equity), but fiscal prudence, the moral utility arising from the player's particular situation.

Bernoulli argued that a poor person with a 50 percent chance of winning 20,000 ducats in the lottery would be foolish not to sell the ticket for 9,000 ducats. A rich person, by contrast, would be wise to purchase the ticket for the same amount. In other words, reasonable behaviour needed to include a "moral" expectation in order to get a full sense of the value of the outcomes.

The determination of the value of an item must not be based on its price, but rather on the utility it yields.
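Daniel Bernoulli made this notion precise by measuring the utility of a gain against the wealth of the person receiving it; in his paper, utility grows with the logarithm of wealth. The sketch below applies that logarithmic rule to the lottery-ticket example; the starting wealths are illustrative assumptions, not Bernoulli's figures:

```python
import math

def certainty_equivalent(wealth: float, prize: float, p: float) -> float:
    """Sure sum whose log-utility equals the gamble's expected log-utility
    (Bernoulli's 'moral expectation')."""
    expected_log = p * math.log(wealth + prize) + (1 - p) * math.log(wealth)
    return math.exp(expected_log) - wealth

# A ticket giving a 50 percent chance at 20,000 ducats:
for wealth in (100, 1_000, 100_000):
    value = certainty_equivalent(wealth, 20_000, 0.5)
    print(f"wealth {wealth:>7}: ticket worth about {value:,.0f} ducats")
```

On these assumptions a near-pauper values the ticket at little more than a thousand ducats, so selling it for 9,000 is wise, while someone already holding 100,000 ducats values it at roughly 9,500 and can sensibly buy at 9,000, even though the mathematical expectation is 10,000 in both cases.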

The key point here is that the paradox revealed that human behaviour, even the most prudent, contained hidden complexities, factors irrelevant perhaps in calculating simple outcomes (like dice games) but soon apparent in more complex situations demanding a moral judgment (11). A close look at the factors which one had to take into account in order to assess A's behaviour complicated the issue considerably.

Jean D'Alembert (1717-1783), the coeditor of the philosophes' major publication, the Encyclopédie, and the most important mathematician in the group, used the St. Petersburg paradox repeatedly to emphasize the importance of taking into account normal individual experience in dealing with the application of probability to human decision making. Too much reliance on the mathematics of expectations, he urged, led to problems like the famous paradox.

Thus, for D'Alembert, although analyzing probable expectations worked for comparing stakes with uncertain gains in some circumstances, there were equally good methods for that; moreover, the traditional methods of analysis failed to establish good precepts for risk taking in all ventures.

D'Alembert took issue with any system which gave equal weight to the value of the outcome and the probability of the expectation. In his view, the value of the outcome should be subordinated to the probability, for if the probability for a successful outcome was extremely small, the expectation, no matter how large, would not justify a prudent person's taking the risk (an important argument, then and now, in making a case against participation in lotteries).

 

Smallpox

One urgent social issue which prompted much argument about probability, especially in D'Alembert's circle in France from 1750 to 1770, was inoculation against smallpox. The procedure was not risk free. It involved introducing matter from live smallpox pustules into the skin and could lead to death. At the same time, however, over 10 percent of the populations of London and Paris were being killed by the disease.

For the Parisian the choice was approximately as follows: a 1 in 7 longer-term chance of dying of smallpox versus a 1 in 200 short-term chance of dying from the inoculation. Complicating the issue was the generally low average life expectancy in Paris. In addition, conservative opinion in France, including the Faculty of Medicine and the Faculty of Theology at the University of Paris, had come out strongly against inoculation.

Daniel Bernoulli applied his probability formulae for lotteries to the problem of smallpox. He calculated the number of persons likely to be killed by smallpox in a given time. Then, he calculated the gain in life expectancy from inoculation for any given age. His formulas were necessarily rather crude and the statistics of mortality inadequate, but his results definitely favoured inoculation (12).
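Bernoulli's actual analysis worked with mortality curves, but the shape of the argument can be suggested by a deliberately crude expectation comparison built on the figures above. The loss-of-years numbers here are illustrative assumptions, not Bernoulli's:

```python
p_smallpox = 1 / 7        # lifetime chance a Parisian dies of smallpox
p_inoculation = 1 / 200   # immediate chance of dying from the inoculation

# Assumed average remaining years lost per death (illustrative values;
# inoculation deaths strike the young and healthy, hence the larger figure).
years_lost_smallpox = 20
years_lost_inoculation = 30

print("expected years lost without inoculation:",
      round(p_smallpox * years_lost_smallpox, 2))
print("expected years lost with inoculation:   ",
      round(p_inoculation * years_lost_inoculation, 2))
```

Even with the loss per death weighted against inoculation, the expected loss with it (0.15 years) falls far below the expected loss without it (about 2.86 years), which is the general direction of Bernoulli's result.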

D'Alembert supported inoculation, but took strong issue with Bernoulli's mathematical analysis.  He pointed out that it was based on an uncertain method for estimating life expectancies and on the assumption that the risk of dying from smallpox did not vary with age.  In addition, D'Alembert criticized conventional expectation methods basic to Bernoulli's analysis for not taking into account moral and psychological questions concerning quality of life issues.  For example, is a risk at the prime of life worth extra years in old age when one is less capable of enjoying what life has to offer?

His analysis of Bernoulli's methods led D'Alembert seriously to question whether individual social behaviour could ever match mathematical analysis of it in the way the probabilists firmly believed. The psychological factors and the individual circumstances were so varied in different people, who perceived risk in so many different ways, that the chances of reaching some mathematical solution to risk taking appeared to him increasingly remote. Hence, for D'Alembert, the calculus of probabilities was primarily descriptive of the common psychology of risk and not prescriptive. The advantages of making the correct decision about inoculation were important, but they "were not of the right nature to be appreciated mathematically."

 

Risk Taking

In spite of D'Alembert's pessimism about measuring and comparing the individual's perceptions of risk, other 18th century mathematicians had faith in the possibility of statistical approaches to the concept, systems which would include subjective elements. They assumed that some minimum unit of risk-taking in reasonable people could be isolated and quantified and that there would be a similarity between the psychological factors involved in statistically similar risks.

For example, Georges Buffon (1707-1783), the great French naturalist, attempted to describe a basic unit of perceived risk taking in the notion of death: "all fear or hope, whose probability equals that which produces the fear of death, in the moral realm may be taken as unity against which all other fears are to be measured." With such a unit of moral feeling, Buffon sought, among other things, to produce a persuasive mathematical argument against gambling, "a misconceived pact, a contract disadvantageous to both parties."

In his resolution of the St. Petersburg Paradox, Buffon followed Bernoulli in distinguishing between the quantity and the utility of the expectation (money) and introduced his own notion that the satisfaction money can purchase depends on what one is used to in pursuing satisfaction (what we need depends upon our individual patterns of spending). Thus moral problems about decisions involving risk must be assessed by the quantifiable units of fear and hope (which arise out of our habits) which the problems create in reasonable people (13).

Other 18th century thinkers (notably Condorcet and Laplace) continued to defend with variations the traditional view that the calculus of probabilities could not only describe but also systematize reasonableness.  Once this had been successfully accomplished, they believed, the results could be communicated beyond the elite of the mathematically educated to the people at large, thus achieving an educationally valuable social consensus on moral, economic, and legal decision making, all under the authoritative banner of mathematics.

The attempt to base the concept of the reasonable individual on mathematical probabilities of expectation, however, ultimately failed, since the reasonable individual turned out upon close inspection of all the factors involved to be too complex and elusive a concept to fit mathematical measurement. The more one considered the individual, no matter how prudent, the more variables one had to add to the calculations.

The early history of probabilistic theory was clearly part of the total Enlightenment attempt (in retrospect very optimistic, although one we have clearly not yet totally abandoned) to bring all spheres of human interaction under rational (mathematical) principles, in general adjusting the understanding of mathematics to fit the shifting definitions of good sense and using the criterion of the rational person as a standard for decision making.

The continuing difficulties of this approach, however, gradually gave rise to a shift away from the concept of reasonable people, "known for their experience and wisdom in the conduct of their affairs," to something quite different, the concept of people en masse considered quantitatively. With this shift, mathematical probability moved away from jurisprudence and economic activity of particular individuals towards the sociology of people in general (i.e., in large groups).

 

Statistics and Mortality

The keeping of official numerical records of various social matters was a practice as old as the first tax rolls, calculations of harvest yields, or numerical analysis of the population. Government authorities had always been vitally interested in record keeping of this sort as, among other things, a means of calculating and recording revenues. Traditionally, however, such record keeping had generally been erratic, limited in scope, and often very inaccurate.

One important influence fostering attention to mathematical studies of society in the 17th and 18th centuries was undoubtedly the growing centralized power of the nation states. For as the responsibilities and the power of the central government increased and as the population started to grow and to move, it became more and more important to have descriptive records, something much more reliable and regular than the traditionally incomplete and often poorly preserved records of births, deaths, land holdings, and so on.

Thus, throughout the 18th century, the study of statistics gained in importance. In fact, the term statistics seems to have appeared in the mid-18th century to designate collections of data directly relevant to government. Demographic data existed (often erratically) from the early 16th century, but the early recorders appear not to have shown much interest in seeking out regularities.

The first systematic study of the relationship between death and age was the work of John Graunt (1620-1674), an English haberdasher, whose curiosity led him to analyze death records in English cities. His Natural and Political Observations on the Bills of Mortality (1662) was undertaken in the spirit of Bacon's empirical method and designed to provide information on public issues: How many able-bodied men in London could do military service? Was polygamy a reasonable way to increase the birth rate?

Graunt observed a surprising regularity in figures which he assumed would display more chance distribution. On the basis of these regularities, Graunt addressed some important public issues. For example, the commonly discovered higher frequency of male births compared to female births led Graunt to conclude that, given that men were more likely to die from occupational hazards and war, the number of marriageable women roughly equalled the number of marriageable men (hence, polygamy was not reasonable).

Graunt's method, which provided figures for the number of survivors per decade, drew no significant probability conclusions from the study, but mathematicians were quick to use Graunt's data in applying theories of probability to life expectancy. Graunt's immediate successor in England was his contemporary William Petty (1623-1685), professor of anatomy at Oxford and a professor of music at Gresham College. His Political Arithmetic (1676) stressed the importance of a new method of quantitative measurement (statistics), a method to which he gave the name Political Arithmetic, "the art of reasoning by figures upon things relating to the government."

Graunt's results from a study of birth statistics which revealed that more males were born than females were often used in defenses of the Argument from Design. For instance, the famous physician John Arbuthnot (1667-1735) claimed in his "An Argument for Divine Providence" (1710) that "Art, not Chance" affects the determination of the sex ratio. How else could one account for the consistent numerical discrepancy between male and female births (14)?    
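Arbuthnot's argument was, in effect, an early significance test: the London christening records he examined showed male births exceeding female births in every one of 82 consecutive years, and if sex were settled by a fair coin toss, the chance of such a run would be (1/2)^82. A two-line check of that figure:

```python
from fractions import Fraction

# Chance of male births exceeding female births 82 years running,
# if each year were a fair coin toss (Arbuthnot's implicit calculation).
p = Fraction(1, 2) ** 82
print(float(p))   # about 2.1e-25: "Art, not Chance", Arbuthnot concluded
```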

On the Continent, Lodewijk Huygens (Christiaan's brother), with a knowledge of Graunt's work, established what became a standard 18th century method for computing life expectancies. He multiplied the number of people dying in each of Graunt's decades by the average number of years they had survived and then divided by the total number of people. On the basis of these calculations, mathematicians could offer advice on the pricing of annuities and life insurance contracts. However, the practical effects of the new mathematical interest were slow to manifest themselves, so that jurists and business people tended to trust more to experience and tradition than to the new insights into mathematically computed probabilities.
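A sketch of that computation, using the survivorship figures usually cited from Graunt's table (round numbers that are Graunt's estimates, not modern data):

```python
# Survivors out of 100 births at the start of each interval of life,
# as usually cited from Graunt's table.
ages      = [0, 6, 16, 26, 36, 46, 56, 66, 76, 86]
survivors = [100, 64, 40, 25, 16, 10, 6, 3, 1, 0]

total_years = 0.0
for i in range(len(ages) - 1):
    deaths = survivors[i] - survivors[i + 1]   # deaths in this interval
    midpoint = (ages[i] + ages[i + 1]) / 2     # average years lived by them
    total_years += deaths * midpoint

print("expectation of life at birth:", total_years / survivors[0], "years")
```

The result, a little over 18 years, shows how heavily child mortality weighed on such averages.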

On the face of it, this reluctance to use the probability theory was odd, for the competition in capitalist circles for customers was fierce, as entrepreneurs (projectors) of every description thought up all sorts of schemes to solicit money from would-be risk takers. In the 18th century one could take out insurance against a loss of chastity, venereal disease, losing a lottery, and many other rather dubious matters. In many cases, one could purchase an insurance agreement at a London coffee house.

 

Lotteries

At the end of the 17th century, Europe was infected with a craze for lotteries, and throughout the 18th century governments regularly (in Britain annually from 1699 on) used lotteries to raise money for special projects (e.g., to build a bridge over the Thames, to house the collections in the British Museum). And there was constant competition from private ventures.

There was apparently little attempt to construct or to analyze these schemes according to probability. The most popular mathematical studies of lotteries were those by numerologists. What mattered was the large prize, which could, in an instant, resolve problems of debt and of an inferior social standing. And for that success one didn't need to know about probabilities; one needed the assistance of the traditional guarantors of winning: luck or fortune.

Attacking the notions of luck or fortune as a means of controlling the mania for gambling brought the philosophes and the churches (Protestant and Catholic) into alliance and led to the production of probability studies to illustrate the unreasonableness of gambling. These efforts had little success. Gamblers, it appeared, then and now, had little concern for mathematical calculations of their enlightened self-interest. They were motivated by the richness of the expectation rather than the probabilities of success. Hence, gambling became something of a paradox for those who wished to encourage people to guide their conduct by mathematical reasoning. Why, when mathematics so clearly demonstrated the irrationality of gambling, did so many people eagerly participate in an activity which did them harm?

Eventually it took legislation to abate the craze for lotteries. State lotteries were abolished in Britain in 1826 and in France in 1836. They remained illegal until modern times.

 

Life Insurance

The same speculative frenzy revealed by lotteries appeared also in life insurance, which, because of its haphazard and often fraudulent nature, was illegal in almost every European country except England. Once again, this activity at first proceeded without detailed interest in probability (in spite of the growing evidence about regular correlations between death and age).

It is important to remember that the general resistance to the mathematical laws of probability, at least so far as life expectancy was concerned, was a result of the total unfamiliarity of people with statistical thinking, together with the lack of a reliable source of statistical information. Throughout the 18th century, most probabilists may have believed that moral and legal questions (like court-room judgments) could be regularized by mathematics, but few, if any, believed that accidents (like fires or shipwrecks or death) could be subsumed under mathematical laws. We, of course, now believe the reverse, exempting (for the most part) individual moral and legal judgments, but including general data on accidents.

Thus, for most people death was much closer to the result of a game of chance than to anything possessing mathematical regularity. In most people's immediate experience, death came in many different forms, struck all ages (usually unexpectedly), and was attended by very particular, often apparently fortuitous, circumstances. How could anyone claim that this phenomenon was governed by some mathematical certainty?

Given this understanding of death, one can easily see how many people for a long time associated life insurance with gambling. The two appeared equally risky, equally subject to the unpredictable forces of chance or fortune. Hence, there was little to no attempt to fix premiums by any mathematical formula. Here again, experience provided the answers to any questions about premiums and lump sum payouts.

The first attempt to apply mathematical probability to life insurance premiums was the Society for Equitable Assurance on Lives and Survivorship in 1762, a company organized largely under the influence of James Dodson (1710-1757), a Fellow of the Royal Society, Master of the Royal Mathematical School, and author of a number of mathematical books.

The Equitable's attempt to get a royal charter to set up business was twice rejected, in part because the Privy Council thought that the premiums were much too low (in fact, they were excessively high). So the company set up business without a charter, announcing to the public its new program, which stressed the regular principles at work in establishing the premiums, which were "grounded upon the expectancy of the continuance of life; which, although the lives of men separately taken, are uncertain, yet in an aggregate of lives is reducible to a certainty."

Life insurance offered an excellent example for mathematicians wanting to defend the utility of probability theory. Those who did not believe in the existence of statistical regularities in events apparently and traditionally considered accidental had only to reflect upon the enormous and increasing profits of the Equitable and of the other life insurance companies which had adopted precise mathematical procedures in the calculation of premiums.

 

The Death of the Concept of the Rational Person

With evidence of this sort, by the early 19th century, when gambling had been legally distinguished from insurance, the mathematical understanding of risk had won the day. The regularities among large samples over longer periods of time confirmed a strict determinism, and general rules based on probabilities began to replace judgments of particulars in individual cases. The life insurance companies even stopped interviewing candidates for policies.

The success of this new form of statistical thinking had effects far beyond life insurance. For it appeared to offer an alternative to the increasingly difficult problem of understanding human conduct mathematically (that is, pursuing the Enlightenment dream of discovering natural laws upon which we could base the conduct of the human individual).

For so long as the understanding of subjective probability (e.g., in moral behaviour) was based upon the expectations of the rational individual, one had to keep adding variables (as problems like the St. Petersburg Paradox illustrated) in order to account for the discrepancies between the mathematics and human behaviour, until, as D'Alembert concluded, the complexities made the link between mathematics and moral behaviour of the individual impossible to ascertain.

What successful mathematically based life insurance demonstrated, however, was that one did not need to take into account individual behaviour. What mattered was the information revealed in the study of very large groups of people over time. "All nature," declared the French mathematician Siméon Poisson (1781-1840), "is subject to the universal law which we call the law of large numbers . . . [which] is a general and incontrovertible fact."

Individuals, in other words, might be as complex, apparently irrational, and unpredictable as they pleased. In large groups there was a statistical regularity to the events of their lives considered collectively. Even the frequency of madness was predictable. Inevitably, then, the growth of mathematical probability began seriously to affect how people thought about themselves and the lives of their neighbours.

For example, an appreciation for the new probability required a different (and for many an unwelcome) form of thinking about death or, for that matter, any catastrophe. An individual death, no matter how special or unusual or unexpected, counted equally with every other death, no more, no less, as a statistic and could be accommodated in an overall scheme in which any particular death counted for very little. The characteristics of the deaths among a large group remained remarkably stable. To follow this form of thinking, one had to abandon the ideas about the uniqueness of particular deaths and learn to think more comprehensively about deaths in general in large populations over long periods of time.

A proper understanding of human life, therefore, had to move away from the concept of the small, unique community, united by personal relationships on a traditional model, and towards the idea of a huge anonymous population governed by general rules which transcended the particularity of any individual experience. Well before there was anything like a routinely detailed census, therefore, or any regular reliable statistical analysis of populations (other than mortality rates in the big cities), the pressure to identify oneself with a society at large rather than with a local community intensified (15).

The growing explanatory power of probability applied to large populations had a significant effect on the understanding of social order, too.  For mathematics appeared to demonstrate a regularity in the larger order of society, even though individual cases might appear quite irrational.  Hence, it seemed that individual regularity was no longer a precondition for collective order.  Criminal activity, in itself, was no necessary sign of an unusual problem.  The issue now was the observed frequency of criminal behaviour in society in general.  Not surprisingly, this trend led to the decline of the concept of the reasonable person as the criterion for explanations of moral behaviour.  Attention now shifted to the statistically quantifiable behaviour of the entire society, the trend from which the modern subject of sociology was created.

Nor was this development without significance for the Argument from Design. Some thinkers, as we have seen, took the statistical regularities in the general population as evidence for the operation of a divine intelligence. Others, however, argued that mathematical order was itself quite natural and that the only place to expect Divine intelligence would be in some aberration, which would not occur if the sample were large enough and if one understood the mathematical laws of error. Thus, the machine no longer required a divine mechanic to keep it going or a divine creator. For all the individual irregularities in particular parts, the entire mechanism obeyed firm mathematical laws.

 

The End of the Concept of the Mathematically Credible Judgment

Associated with these objective studies of probability applied to population statistics, as we have seen, throughout the 18th and 19th centuries was a continuing concern for subjective probabilities, the attempts to link the moral behaviour of individuals to mathematics.

As mentioned above, the central arenas of this debate were the law courts and interpretations of history and scripture. What could mathematics reveal about the credibility of witnesses, of expert and ignorant testimony, and of the reliability of judges? Such a concern would appear to deny the freedom of choice of the individual. But mainstream 18th century psychology linked objective and subjective probabilities together so that, to some extent, it was believed, one could reliably analyze truthfulness with the mathematics of probability, and a number of eminent mathematicians and philosophers discussed various ways to assess reliability (e.g., Leibniz, Locke, Hume, Poisson, Condorcet, Laplace, and others) (16).

Leading this cause in France was the Marquis de Condorcet (1743-1794), who in his Essay on the Application of Probability Analysis to Decisions Arrived at by Plurality of Votes (1785) calculated the fractions of individual liberty which a citizen traded in for the privileges of living in a community. In a just society, Condorcet argued, citizens should run no greater risk in the law courts from unfair conviction than they would run if they were to "voluntarily expose themselves without any preformed habit, for an interest so slight that it could not be compared to one's life, and without requiring any courage." The application of law in society, in other words, should be like a fair and safe game, with clearly calculated mathematical expectations (17).

Condorcet also analyzed the probabilities of a tribunal's reaching a correct verdict, and used his results to discuss the qualifications of judges, the best form of a jury for a fair trial, and the proper nature of ruling assemblies.  His analysis of the probability of any judge's reaching a correct decision in any given case led him to conclude that large assemblies were only appropriate where all people were equally ignorant.  Strictly on the basis of mathematical probability, Condorcet argued, "a pure democracy would not be suited even to a far more enlightened people, far more exempt from prejudice than any known to us in history."

Moreover, Condorcet's mathematical studies suggested to him that "The form of assemblies which decide the lot of men is far less important for their happiness than the enlightenment of those who compose it; and the progress of reason will contribute more to the good of the People than the form of political constitutions." In other words, the rule of an enlightened elite class was mathematically (and therefore morally) better than large democratic assemblies.

Condorcet's methods encouraged moral determinists to propose relatively simple equations to cover complex human interactions. For example, Laplace derived a formula for the likelihood that all judges would concur in a correct decision. His formula read as follows: V^n / [V^n + (1 - V)^n], where n represented the number of judges and V the independent probability of a correct decision for each judge (the latter calculated by a separate method) (18).
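A direct transcription of that formula (the per-judge probability values chosen here are merely illustrative):

```python
def unanimous_correct(v: float, n: int) -> float:
    """Laplace's expression: chance that a unanimous verdict of n judges,
    each independently correct with probability v, is itself correct."""
    return v**n / (v**n + (1 - v)**n)

for v in (0.6, 0.75, 0.9):      # illustrative per-judge reliabilities
    print(v, [round(unanimous_correct(v, n), 4) for n in (1, 3, 5, 9)])
```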

This method, however, for all the interest to mathematicians and the keen desire to use the probability theories in the cause of social reform, aroused considerable opposition and ridicule:

Does one need anything more than good sense in order to find supremely ridiculous the application to which Condorcet, a modern scientist, put mathematical calculations to moral probabilities, calculations which he substituted, with a gravity as incomprehensible as it was indefatigable, and in a quarto volume thick with algebra, for judicial proofs, written and testimonial, the only ones admitted by all the tribunals of the World, by the good sense of all nations.

By the mid-19th century the attempt to moralize mathematics had largely fizzled out, the interest of probabilists having shifted from questions of individual judgment to problems of computed averages in large populations (19).

Thus, in the 19th century, the reasonable judge of such questions was no longer the prudent or the average or the mathematically skilled person, but rather, first and foremost, the individual with an intuitive, sensitive, and synthetic imagination (i.e., a strong intelligent character).  And with this development came the call to banish mathematics from the moral realm: the attempts of later probabilists were only "repeating the fancy, in heavy algebraic language, without adding anything new, abusing the credit which justly belongs to the true mathematical spirit." (20).

The disappearance of the moral model of the rational person whose decision making could be reduced to a method of mathematical calculation which would compel agreement marked the end of the two-hundred-year quest for moral certainty through probability theory applied to the individual.  Hereafter, whatever reasonable behaviour might mean and however a prudent person made decisions, human moral activity was going to have to be something other than mere mathematical computation.  And any application of mathematics to human behaviour was going to have to consider, not the individual, but the large group.

 

The Concept of the Average Person

The use of the new statistics in astronomy by France's greatest mathematician in the late 18th century, the Marquis de Laplace (1749-1827), stressed the importance of statistical regularities, the reliable average results of constantly repeated observations, because these were the indications of constant causes in nature. In making astronomical measurements, one could reduce particular phenomena, through statistical theory, to constant causes. With this method, Laplace was able to "perfect" Newton's astronomy by locating the origin of particular causes of disturbance (for example, in the mutual interference caused by the gravitational forces of the planets). Any anomalies in the model were not the product of chance, but invitations to investigate the full operation of constant causes (21).

As a product of the Enlightenment, Laplace saw no reason why this form of reasoning should not be extended to all human endeavour.  Any large number of similar observations would reveal the universally operative constant causes in social and individual and physical events.  The existence of some apparent error, in society or in astronomy, according to Laplace, meant that one investigated how the constant causes could account for the discrepancy or what corrigible error had taken place in calculations (22).

The growing interest in statistical studies of average in the early 19th century culminated in the work of Adolphe Quetelet, director of the Brussels Observatory, who produced a theory of what he called "social physics." Quetelet eagerly took up Laplace's challenge for a moral science based upon Newtonian-Laplacian physics.

Like Laplace, Quetelet, as an astronomer, was mostly concerned with averages from a large number of observations. In his book On Man and the Development of His Faculties (1835) he showed that the rates of crimes, suicides, marriages, and other moral concerns stayed remarkably constant from one year to the next. This led Quetelet to a melancholy reflection:

Sad condition of the human species! The toll of the prisons, of the chains and of the scaffold seems fixed for it with as much probability as the revenue of the state. We are able to enumerate in advance, how many individuals will stain their hands with the blood of their fellows, how many will be forgers, how many prisoners, nearly as one is able to enumerate beforehand the births and deaths which must take place.

From his study of population statistics, Quetelet derived the concept of the "average man." Instead of focusing upon the rational individual or on particular individuals, Quetelet appealed, with the help of Bernoulli's Theorem, to the concept of the general facts revealed by averages:

The greater number of individuals observed, the more do individual peculiarities, whether physical or moral, become effaced, and leave in a prominent point of view the general facts, by virtue of which society exists and its importance is preserved.

While conceding that his central concern, the "average man," was a fiction, Quetelet nevertheless made the average person the criterion by which society should be analyzed. What affected the average was an important social issue; what did not had a lesser importance.

Departures from the average Quetelet analyzed by his law of error (distinguishing constant from accidental causes), but he was not led into any study of the systematic distribution of error, one key concept which separates modern statistics from this classical period (23). Like Laplace, Quetelet's concern was always with the concrete reality of the average.  Errors existed to be explained away.

Given the breadth of his statistical interests, which ranged over all classes in society, Quetelet's work marked a distinctive shift away from the 18th century concern for using the prudent individual as a criterion for developing mathematical rules for reasonable decision making. Quetelet's work interpreted society as a large aggregate of all individuals, rational and irrational, rich and poor, educated and ignorant. All classes and types were included in his statistical analysis of social behaviour.

What mattered in this new view was the big picture, the macroscopic regularities in the social processes. As Daston observes, with this shift moral science became social science. One could now afford to neglect the individual who had "little or no force on the mass" (the Newtonian echo in the language is a nice reminder of the intellectual debt Quetelet acknowledged to classical astronomy).

 

Post-script: The Non-Average Person

One of the most significant developments in the history of statistics and traditionally the event that marks the transition from classical to modern theories occurred in December 1888, when Francis Galton (1822-1911), an English investigator into heredity and a cousin of Charles Darwin, published his paper "Co-relations and Their Measurements, Chiefly from Anthropometric Data." What made this moment significant was that it signaled a turn away from Quetelet's averages towards the systematic study of differences or departures away from the average.

This work was fuelled by Galton's long-standing interest in the hereditability of intelligence.

I have no patience with the hypothesis occasionally expressed, and often implied, especially in tales written to teach children to be good, that babies are born pretty much alike, and that the sole agencies in creating differences between boy and boy, and man and man, are steady application and moral effort. It is in the most unqualified manner that I object to pretensions of natural equality.

In pursuit of this view, in the 1860's Galton undertook a study of the heritability of genius, at first with very crude empirical methods (like consulting biographical dictionaries). His turn to biology was, in part, a search for a more secure basis for the study of genius.

Once Galton had familiarized himself with the laws of error, he could in Hereditary Genius (1869) produce a scale of ability expressed in units. Measurement of ability was possible in terms of what previous mathematicians had called the "probable error" but which Galton generally preferred to call "probable deviation" or "the law of deviation" (24):

The law of deviation from an average . . . shows that the number per million whose heights range . . . between any . . . limits we please to name can be predicted from the previous datum of the average.

This measurement enabled Galton to propose as a unit for measuring ability the "standard deviation" or the "statistical unit."
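Galton's quoted claim, that the number per million within any limits can be predicted from the average and the probable deviation alone, is easy to reproduce with the normal law. The mean and probable deviation below are illustrative figures, not Galton's data:

```python
import math

def per_million_between(lo, hi, mean, probable_deviation):
    """People per million whose measure falls between lo and hi, assuming
    the normal law of deviation. The probable deviation (the median
    absolute deviation) is about 0.6745 standard deviations."""
    sigma = probable_deviation / 0.6745

    def cdf(x):
        return 0.5 * (1 + math.erf((x - mean) / (sigma * math.sqrt(2))))

    return round(1_000_000 * (cdf(hi) - cdf(lo)))

# Illustrative: mean height 68 inches, probable deviation 1.7 inches.
print(per_million_between(70, 72, mean=68.0, probable_deviation=1.7))
```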

From here Galton went on to propose that one could set up similar scales for any quality which one assumed to be distributed according to the error laws (e.g., every inheritable quality), so that the statistical unit (or standard deviation) became a measurement of biological variation throughout an entire population. This brought Galton to what he considered his most fundamental insight: that the laws of heredity must be concerned with departures from the average measured in standard deviations.

Beyond this, Galton's explorations of heredity led him to consider what fraction of the parental deviations from the average might be passed on to the children (e.g., How likely was it that parents of high intellectual ability would have children of above average ability?). These considerations led Galton, in his comparative studies of the heights of parents and children, to discover the mathematical properties of correlation, the precise method of establishing the statistical links between parents and children concerning inherited characteristics (the subject of the 1888 paper).
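The correlation coefficient Galton's paper introduced can be illustrated on synthetic data. In the sketch below the heights, the two-thirds inheritance fraction, and the noise level are all invented for illustration; only the method of computing the correlation reflects Galton's legacy:

```python
import math
import random

def correlation(xs, ys):
    """Product-moment correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / math.sqrt(vx * vy)

random.seed(3)
parents = [random.gauss(68.0, 2.5) for _ in range(10_000)]
# Children keep a fraction of the parental deviation plus their own noise,
# which produces the 'regression toward the average' Galton observed.
children = [68.0 + (2 / 3) * (p - 68.0) + random.gauss(0, 1.8) for p in parents]

print(f"parent-child correlation: {correlation(parents, children):.2f}")
```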

Galton's achievement, in at least one view, could be attributed to the fact that he was not a physical scientist and could thus liberate himself from the approach which interpreted deviations from the average solely as the result of error in a system governed by constant cause or the result of unknown causes. As Galton himself observed:

The primary objects of the Gaussian Law of Error were exactly opposed, in one sense, to those to which I applied them. They were to get rid of, or to provide a just allowance for errors. But these errors or deviations were the very things I wanted to preserve and to know about.

With his primary interest in biology (rather than in astronomy or physics), Galton was more concerned with variety and individual differences. Quetelet and Laplace, coming from the astronomical tradition, sought uniformity. In the end, biology rather than physics inspired a use of statistics and probabilities more practical for the study of human beings.

 


 

Notes to Section Four

(1) For the material on the history of probability in this section I am relying heavily on Daston.  Much of the content here is merely an inadequately short summary of parts of her excellent account. [Back to text]

(2) The latter method provides the same answer, but the reasoning is different.  The calculation is as follows: A has a 50 percent chance of winning all the money on the next play and a 25 percent chance of winning on the next two plays.  A's probable winnings, therefore, are (.5 x 64) + (.25 x 64) or 32 + 16 or 48 pistoles.  Alternatively, B has a 0 percent chance of winning on the next play and a 25 percent chance of winning on the next two plays.  Therefore, B's probable winnings are (0 x 64) + (.25 x 64) or 0 + 16 or 16 pistoles.  Fermat's solution (which has not survived in the correspondence) seems to have involved listing all the possible outcomes, an approach which Pascal rejected as unreliable and impractical.  [Back to text]
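The division described in this note can be checked with a short recursive sketch in Python (a modern restatement of the expectation argument, not Pascal's own notation): the probability that A wins the interrupted match is computed from the number of rounds each player still needs.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def p_a_wins(a_needs, b_needs):
        """Probability that A wins the match when each round is a fair 50-50."""
        if a_needs == 0:
            return 1.0
        if b_needs == 0:
            return 0.0
        return (0.5 * p_a_wins(a_needs - 1, b_needs)
                + 0.5 * p_a_wins(a_needs, b_needs - 1))

    stakes = 64  # pistoles
    p = p_a_wins(1, 2)  # the situation in the note: A needs 1 win, B needs 2
    print(p * stakes, (1 - p) * stakes)  # -> 48.0 16.0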

(3)  One of Pascal's most famous contributions to probability theory (not, however, something original to him) was Pascal's triangle, an arrangement of numbers, the top of which looks like this:

 

                      1
                    1   1
                  1   2   1
                1   3   3   1
              1   4   6   4   1
            1   5  10  10   5   1
          1   6  15  20  15   6   1

The triangle can be constructed with many more rows. Each number in the triangle (except for the 1's on the edges) is the sum of the two numbers diagonally above it. This triangle lends itself to a number of probability applications. For instance, each horizontal row gives the number of ways of choosing elements from a set whose size equals the row number (counting the single 1 at the top as row 0). The fourth row, for instance, contains the numbers 1, 4, 6, 4, 1. These indicate the number of ways of choosing 0, 1, 2, 3, and 4 elements from a set of four. The first number (1) indicates the number of ways of choosing 0 of the elements (i.e., don't choose any). The second number (4) indicates the number of ways of choosing 1 of the four elements (there are four possible choices). The third number, 6, indicates the number of ways of choosing 2 of the 4 items. And so on. The same procedure works for every row. There are a number of other probability uses of this triangle (e.g., calculating the probability of correctly guessing the answers on a multiple-choice quiz). For further details see Paulos 171, Boyer 364, or David, Chapter 9. David also has an interesting section on the Pascal-Fermat exchange (85 ff). [Back to text]
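As a quick modern check of the counting described above, the rows can be generated directly from binomial coefficients (a minimal Python sketch):

    from math import comb

    # Row n of Pascal's triangle lists comb(n, k) for k = 0..n:
    # the number of ways of choosing k elements out of n.
    for n in range(7):
        print([comb(n, k) for k in range(n + 1)])
    # Row 4 prints [1, 4, 6, 4, 1], as discussed in the note.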

(4)  For example, Huygens studied the expectations in the following game: two players (A and B) roll two dice under the following rule: if a 7 is thrown, Player A wins the amount x; if a 10 is thrown, Player B wins the amount x; if any other number is thrown, the players divide the amount x equally. What is the expectation of each player? By Huygens's calculations, Player A's expectation is 13x/24, and Player B's expectation is 11x/24. Thus the game is not fair, for the expectations are not equal. [Back to text]
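Huygens's figures are easy to verify by enumerating the 36 equally likely outcomes of the two dice; a minimal sketch (the stake x is taken as 1, so the expectations appear as fractions of x):

    from fractions import Fraction
    from itertools import product

    exp_a = exp_b = Fraction(0)
    for d1, d2 in product(range(1, 7), repeat=2):  # 36 equally likely rolls
        p = Fraction(1, 36)
        total = d1 + d2
        if total == 7:      # A takes the whole stake
            exp_a += p
        elif total == 10:   # B takes the whole stake
            exp_b += p
        else:               # the stake is divided equally
            exp_a += p / 2
            exp_b += p / 2

    print(exp_a, exp_b)  # -> 13/24 11/24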

(5)  According to Bernoulli, both an eclipse and a throw of the dice were equally necessary.  The only difference was the relative completeness of our knowledge about dice and about the solar system.  There were, thus, no truly random events.  Those which appeared random were simply those about which we did not yet understand enough.  An ignorant person might well gamble on an eclipse. [Back to text]

(6) Bernoulli's Theorem was refined considerably by Abraham de Moivre, a French-born mathematician working in England, in 1733.  De Moivre's calculations significantly reduced the number of trials necessary to show that the probability fell within a given interval and made an even stronger claim in defence of induction against skepticism.  Bernoulli's Theorem indicated that chance irregularities need not hinder the scientist from observing the natural order of hidden causes.  De Moivre adapted this to make the stronger claim that, as Daston observes, "Not only could regular causes be expected to ultimately produce regular effects in the long run; observation of effects would disclose those causes with any desired degree of probability to the persevering observer" (253).  Thus, in De Moivre's view, "in all cases it will be found, that although chance produces irregularities, still the odds will be infinitely great, that in process of time, those irregularities will bear no proportion to the recurrency of that order which naturally results from original design."  De Moivre is also credited with the mathematical principle that the probability of a compound event is the product of the probabilities of its components.  For example, given the six letters A, B, C, D, E, and F to be arranged at random, the probability of D coming first is 1/6, and in the same arrangement the chance of any particular one of the remaining letters coming second is 1/5.  Hence the probability of a random arrangement beginning with D and B, in that order, is (1/6) x (1/5) or 1/30.  For a look at the efforts to make probability the guarantee of the truth of induction, see Daston, Chapter 5.  [Back to text]
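The letter example at the end of this note can be confirmed by brute-force enumeration (a minimal Python sketch):

    from fractions import Fraction
    from itertools import permutations

    # All 720 equally likely arrangements of the six letters.
    perms = list(permutations("ABCDEF"))
    hits = sum(1 for p in perms if p[0] == "D" and p[1] == "B")
    print(Fraction(hits, len(perms)))  # -> 1/30, i.e. (1/6) x (1/5)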

(7) John Wilkins, Of the Principles and Duties of Natural Religion, Fourth Edition (1699). [Back to text]

(8) A similar principle in modern decision making is the Maximin strategy, often identified as a central plank in conservative thinking: given a choice among a number of possible decisions with different outcomes, the reasonable person selects the one whose worst possible outcome is preferable to the worst possible outcome of any of the others.  [Back to text]

(9) Leibniz foresaw an era when all argument would be settled by mathematical analysis.  "Let us calculate, Sir!" would be the challenge issued to all disputants.  Modern risk analysis is a continuation of this tradition.  [Back to text]

(10) The Bernoulli family, one of the most famous in the history of mathematics, was (for our purposes) made up of two brothers, Jakob (1654-1705) and Johann (1667-1748) and the two sons of Johann, Nicolaus and Daniel. Among their other accomplishments in mathematics, Jakob was the author of The Art of Conjecture, Nicolaus proposed the St. Petersburg Paradox, Daniel offered a solution to the paradox. About a dozen members of the family were distinguished mathematicians. Four of them gained election to the Académie des Sciences. For a family tree, see Boyer 416. [Back to text]

(11) Buffon tested the St. Petersburg Paradox empirically and found that in 2084 games, B would have paid A 10,057 ducats, so that A's expectations in any one game, rather than being infinite, were less than 5 ducats. [Back to text]
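Buffon's test is easy to repeat by simulation. A minimal Python sketch, under the common convention that the pot starts at 1 ducat and doubles on each tail, paying out at the first head; the note does not record Buffon's exact stakes, so this payoff rule is an assumption:

    import random

    def play_once(rng):
        """One St. Petersburg game: the pot doubles on each tail, pays on the first head."""
        pot = 1
        while rng.random() < 0.5:  # a tail: keep flipping
            pot *= 2
        return pot

    rng = random.Random(0)
    games = 2084  # the number of games Buffon reports
    average = sum(play_once(rng) for _ in range(games)) / games
    print(f"average payoff over {games} games: {average:.2f} ducats")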

(12) Inoculation was by mid-century a leading issue in the philosophes' call for social reform. Being inoculated was even a sign of one's emancipated and progressive views. [Back to text]

(13) Buffon's best known contribution to the history of probability is his famous needle problem, in his words: "I assume that in a room the floor of which is merely divided by parallel lines, a stick is thrown upwards and one of the players bets the stick will not intersect any of the parallels on the floor, whereas on the contrary the other bets the stick will intersect some one of these lines; it is required to find the chances of the two players. It is possible to play this game with a sewing needle or a headless pin." If one takes the distance between the parallel lines on the floor as 1 unit, and if the length of the stick, L, expressed in that unit, is less than 1, then the probability that the stick will intersect a line is given by the equation P = 2L/pi. Given a stick and a floor ruled with parallel lines, then, a very large number of trials will provide a reasonably accurate empirical measurement of pi. To try this experiment electronically on line, go to the following site: Buffon's Needle Experiment.  From this page one can access a full explanation of the problem. For another mathematical analysis of the Buffon's Needle Problem, see Tuckwell, 16-19. [Back to text]
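A minimal Monte Carlo sketch of the experiment in Python (lines one unit apart; a needle of length L < 1 crosses a line exactly when the distance from its centre to the nearest line is at most (L/2) sin theta, where theta is the needle's angle to the lines):

    import math
    import random

    def estimate_pi(needle_len=0.8, trials=1_000_000, seed=0):
        """Estimate pi by inverting Buffon's relation P(cross) = 2L/pi (L <= 1)."""
        rng = random.Random(seed)
        crossings = 0
        for _ in range(trials):
            d = rng.uniform(0.0, 0.5)              # centre's distance to nearest line
            theta = rng.uniform(0.0, math.pi / 2)  # angle between needle and lines
            if d <= (needle_len / 2) * math.sin(theta):
                crossings += 1
        return 2 * needle_len * trials / crossings

    print(estimate_pi())  # close to 3.14159 for a large number of trials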

(14) The use of statistics to keep alive the Argument from Design persisted well into the 19th century, at least to judge from the following comments by Florence Nightingale: "The true foundation of theology is to ascertain the character of God. It is by the aid of Statistics that law in the social sphere can be ascertained and codified, and certain aspects of the character of God thereby revealed. The study of statistics is thus a religious service." [Back to text]

(15) This is not to suggest that the new statistics created this form of thinking, which, in a sense, was becoming an inevitable part of daily life as the traditional communities disappeared in the transformations of villages into huge municipalities and as the major shifts of population occurred. The growth of interest in statistical understanding was itself, in large part, one more response to this demographic revolution. [Back to text]

(16) Assessing the reliability of miracles, for example, drew discussions from Locke and Hume; Voltaire, like many others, was concerned to demonstrate mathematically the falsity of folk tales and a good deal of ancient history (e.g., Herodotus). Even today, we still find attractive the notion that complex moral judgments might be delivered with mathematical precision, to judge from a hoax after the O. J. Simpson trial, when a very popular news story claimed that a computer-based study, the Solomon Project, had numerically analyzed all the data and come up with a guilty verdict. And, of course, the mathematical analysis of risk in decision making is still an important element in business training and science. [Back to text]

(17) Condorcet's analysis led him to conclude that the citizen sacrificed 1/144,768 of his liberty as the cost of belonging to a just society. [Back to text]

(18) Often a matter of public policy was at stake, as this comment of Poisson indicates: "In this important question of humanity and public order, nothing can replace the analytic formulas which express these various probabilities. Without their help, whether it is a question of changing the number of jurors, or comparing two countries with different numbers [of jurors], how will we know that a jury composed of twelve people, and judging by at least a majority of eight votes to four, offers a greater guarantee to both defendants and society, than another jury composed of nine people, for example, taken from the same list as before, and judging at such and such a majority" (1835). [Back to text]

(19) A central focus for many of the arguments about these mathematical procedures was the most famous trial in late 18th-century France, the case of the Calas family (1761), in which a Huguenot (Protestant) merchant, Jean Calas, was accused and convicted of killing his son in a quarrel over religious doctrine. The elder Calas was tortured (broken on the wheel), without confessing, and finally strangled. Voltaire, then at the height of his fame, used the case in his unremitting attack on the church. For his purposes the case was perfect: it enabled him to ridicule the superstition of the Protestant Calas family and, at the same time, to criticize scathingly the Roman Catholic Church's brutal judicial methods in the trial. Part of his stinging critique was aimed at the traditional mathematical methods (inherited from the Middle Ages) for assessing the credibility of witnesses. Voltaire's campaign, almost single-handedly, led to a reversal of the conviction. This came too late to save Jean Calas, but it did rehabilitate the family. Inevitably, too, it helped to discredit the use of any mathematical methods (old or new) in such matters. In England one of the leading spokespersons against using probabilities in this way was John Stuart Mill, who, not surprisingly, found the principles offensive to his liberal beliefs. In his System of Logic (1843) Mill took strong issue with Laplace and Poisson for trying to apply probability theory "to things of which we are completely ignorant." [Back to text]

(20) The quote comes from Auguste Comte, French philosopher and founder of Positivism, who, as Daston observes, "contended that the complexity of biological and a fortiori social phenomena precluded the application of mathematical methods to such subjects." [Back to text]

(21) In retrospect, it is interesting that, by and large, attention to the probability of causes did not preoccupy natural scientists nearly so much as it did those pursuing moral questions. Daston (294) suggests that the probabilists' interest in understanding cause and effect in terms of replication ("How many times must this effect happen for it to be reasonable to expect it the next time?") was of less interest to 19th-century natural science because the most eagerly pursued studies of the time, in chemistry and electricity, dealt with effects that always occurred in the same way. Hence the physical scientists of the 19th century largely ignored the probability of causes, with the important exception of the astronomers, for whom extracting the utmost accuracy from imperfect measurements was a central concern. [Back to text]

(22) A famous example of this method was Laplace's discovery that between 1745 and 1784, the ratio of births of boys to girls was 22 to 21 in France as a whole and 25 to 24 in Paris. This, Laplace reckoned, was very unlikely to be the work of chance (he calculated the odds at 238 to 1). Hence, he investigated the problem and discovered that if one took into account the children who were sent to the Hospital for Foundlings (i.e., abandoned by their parents and thus not recorded in the statistics), then the ratios for Paris were the same as for the rest of the country. See Hilts 210-211. [Back to text]

(23) The concept of the laws of error had been developed notably by the great German mathematician Carl Friedrich Gauss (1777-1855) to deal with the difficulty that all astronomical observations and measurements were subject to some error. Laplace, too, had devoted much effort to a theory of errors, arguing that "the superposition of elementary errors can, under reasonable conditions, lead to a universal law governing the addition of a large number of chance events" (Talon 75). [Back to text]

(24) Hilts explains Galton's achievement here as follows: "Galton was able to pass from statements like 'Bach had the musical ability of one in many millions' to statements like 'persons possessing ability in excess of four times the probable error from the average are as uncommon as one in several millions'" (224). [Back to text]

 

 

