Leonard J Savage: Foundations of Statistics

The International Congress of Mathematicians was held in Edinburgh from 14 August to 21 August 1958. The President was W V D Hodge, who was also Chairman of the Executive Committee. The Vice-Chairman of the Executive Committee was A C Aitken, and the Scientific Programme Committee consisted of M H A Newman (Chairman), A C Aitken, M S Bartlett, M L Cartwright, J L B Cooper, H Davenport, P Hall, N Kemmer, M J Lighthill, E A Maxwell, D G Northcott, W W Rogosinski, A G Walker, J H C Whitehead, M V Wilkes and J A Green.

L J Savage gave a 30 minute invited address to Section VI of the Congress. We give the text of his lecture below:-

Recent Tendencies in the Foundations of Statistics

By Leonard J Savage

1. Introduction

This is an expository talk directed mainly at any non-statisticians who may have wandered in. It is important to address the non-specialists at a congress like this, to help maintain the bonds between the diversifying branches of mathematics. In this particular talk, a restraint on technicalities will have the added advantage of helping experts and the speaker keep their feet on the ground.

The foundations of statistics are controversial, changing, and subtle. Therefore, try though I shall to be fair and clear, you must keep yourselves especially aware that you are hearing mainly the present views of a single person, imperfectly expressed.

The foundations of statistics are a part of the foundations of science in its widest sense. Their study is not mathematics in principle, and by no means all the important contributions to them have been made by mathematicians. But the use of some mathematical techniques is inevitable in the study of a quantitative subject. Still more, mathematical training and outlook have led, and will surely continue to lead, to important advances in the foundations of statistics. The relation is reciprocal in that mathematics is sometimes stimulated by the foundations, as it is by the other theoretical aspects of statistics.

The reference to recent tendencies in my title has a continuum of possible meanings, and in fact various parts of the talk will refer to tendencies of the present century, of the period since World War II, and of the last few years or even months.

2. Meanings of 'statistics'

I begin by outlining some meanings that have been given to the word 'statistics', not to enter into an argument that would be out of place here (and perhaps anywhere) as to what the word ought to mean, but to indicate the subject of this talk and to set the stage for it.

Etymologically, 'statistics' refers to numerical data about the state. Even today there are many professional statisticians to whom 'statistics' means effectively demography in a more or less extensive sense - the compilation and interpretation of census data, economic statistics, or vital statistics (records of births, deaths, and illness).

For many of us, however, the word has drifted far from its original meaning and come to refer to quantitative thinking about uncertainty as it affects scientific and other investigations. It is this meaning, suggested by 'inductive' or 'statistical inference', that 'statistics' has for us here. This subject goes back historically to at least the early eighteenth century, when Jacob Bernoulli, and a little later Thomas Bayes, made great contributions to it. It was pushed forward in the nineteenth century by Laplace, Gauss, and others, and it has been subject to a fervour of activity since the early twenties of this century, when it received great impetus from the work of R A Fisher.

In physics, 'statistics' usually pertains to probability without special reference to the problem of inference but with emphasis on large aggregates.

3. Inductive inference and inductive behaviour

One of the most important trends of the past few decades is a shift in formulation of the central problem of statistics from the problem of what to say or believe in the face of uncertainty to the problem of what to do. It would be hard and unrewarding to seek out the very beginning of this economic movement, as I shall call it because of its emphasis on the costs of wrong decision. It goes back at least as far as Gauss [7], but Neyman brought it forward with particular explicitness in 1938 [8], coining the expression 'inductive behaviour' in contrast to 'inductive inference'. Wald took up the theme with energy and enthusiasm, exploring it in great detail and stimulating many others to do so during his own life and after his untimely death.

That many important and interesting problems concerned with uncertainty are economic in nature is clear and undisputed. Going much further, some of us believe that economic models are of great value throughout the whole of statistics. This is controversial, and it is maintained, especially by Barnard [1], [3] and Fisher [5], [6], that the methods and ideas appropriate to frankly economic problems are not appropriate to the problems of science, the problems of finding out the truth for the truth's sake. Fisher says in a particularly pungent way that science ought not to be confused with the sordid business of either the market place or the five-year planners' bureau [5].

Admittedly a close relation between frankly economic problems and more academic ones is not obvious, or even thoroughly demonstrable, but some case can be made for it. To illustrate, in practical problems of point estimation, there are certain systematic reasons why the penalty for mis-estimation is often nearly proportional to the square of the error. These same reasons are, to say the least, suggestive even for problems of pure science - a precedent for this idea can be seen in Gauss [7]. More generally, it should be kept in mind that science does have goals and that mistakes made in approaching them do entail costs, however subtle and abstract.
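The quadratic penalty alluded to here can be made explicit. The following sketch is in modern notation and is not part of the lecture: if the loss of estimating an unknown quantity theta by d is the squared error, the expected loss is minimized by taking d to be the mean of one's distribution for theta.

```latex
% Quadratic loss: L(\theta, d) = (d - \theta)^2.
% Under any distribution for \theta, the expected loss decomposes as
\mathbb{E}\bigl[(d - \theta)^2\bigr]
  = \bigl(d - \mathbb{E}[\theta]\bigr)^2 + \operatorname{Var}(\theta),
% which is minimized by the choice d = \mathbb{E}[\theta]; the residual
% loss \operatorname{Var}(\theta) measures the uncertainty that remains.
```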

There seems to me nothing at the present time to substitute for the hope that an economic theory of decision in the face of uncertainty will be a valuable guide for the whole problem of inference. If there is an important kind of inference problem that cannot properly be discussed in economic terms, no one yet seems able to state such problems with enough precision that they can be analyzed and solved. In brief, the economic outlook seems to me of great promise for the whole of statistics, though it is not necessarily the last word. We should continue to explore and use it with hope and discretion and with an eye open for new ideas.

One thing that has been said about the putative distinction between scientific and economic problems is that the scientific inference to be drawn from given data is unique and universal, whereas the economic conclusions change with circumstances, such as values and opportunities [3]. I myself believe that the idea of a universal summary of data - that is, the likelihood-ratio function or some effective substitute - is valid and important, but the idea of such a summary does not for me rest on any distinction between science and business.

4. Objectivism and subjectivism

It was for a long time generally believed that all uncertainties could be measured by probabilities, and a few of us today believe that this view, which has recently been very unpopular, must soon again come into its own. It was part of the creed of the great renaissance of statistics in the second quarter of the century that only special uncertainties associated with gambling apparatus and the like were measurable by probabilities and that other uncertainties would have to be analyzed and dealt with in some other ways. This renunciation swept away the classical framework for inference, built on Bayes's theorem, and thereby created many new problems. There was especially the problem of finding new meanings to important-sounding questions that had been rendered nonsensical by the renunciation. The situation was a fertile and stimulating one. Many new ideas directed at filling the gaps were introduced. Some of these ideas are apparently of lasting value, but some of them (such as confidence limits in their current formulation or tests of narrow hypotheses) may not be. In any event, the over-all program has not yet been even nearly successful, nor do I think it ever can be.
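The classical framework whose renunciation is described here rests on Bayes's theorem; for reference, in standard notation that is not from the lecture, the probability of a hypothesis H after observing data x is

```latex
P(H \mid x) = \frac{P(x \mid H)\, P(H)}{\sum_{H'} P(x \mid H')\, P(H')},
% so that posterior belief is proportional to likelihood times prior:
%   P(H \mid x) \propto P(x \mid H)\, P(H).
% Renouncing the prior probabilities P(H) for general scientific
% hypotheses is precisely what swept this framework away.
```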

Statisticians have always recognized that subjective judgments of fact (as well as of value) necessarily play a role in statistical practice. First, much personal, that is subjective, judgment is obviously required to decide what kind of an experiment is the promising one to perform, and on what scale. There are, therefore, subjective aspects to the essential statistical activity of designing experiments and other investigations. Again, it has long been recognized that the user of statistics, in analyzing data, must make a subjective choice among available operating characteristic curves and the like. To be sure, the minimax theory can be seen as an attempt almost to eliminate all judgments but those of value from both design and analysis, but few if any would contend that there has been more than the formal appearance of success here.

A certain subjective theory of probability formulated by Ramsey [9] and later and more extensively by de Finetti [4] promises great advantages for statistics. Contrary to what the word 'subjective' seems to connote to many, the theory is not mysterious or particularly unoperational. It gives, a few of us believe, a consistent, workable, and unifying analysis for all problems about the interpretation of the theory of probability, a much contested subject. It unifies the treatment of uncertainties, measuring them all by probabilities and emphasizing that they depend not only on patterns of information but on the opinions of individual people. Experience seems to me to show that this theory provides a better framework for understanding both the objective and the subjective aspects of statistics than we have heretofore had.
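The operational content of the Ramsey-de Finetti theory can be indicated with a small example of my own, not from the lecture: degrees of belief are identified with the rates at which one would bet, and incoherent rates expose one to certain loss.

```latex
% An agent who would pay p(A) for a ticket worth 1 if event A occurs, and
% p(\neg A) for a ticket worth 1 if A fails, with p(A) = p(\neg A) = 0.6,
% pays 1.2 in total but receives exactly 1 whatever happens:
\text{sure loss} = p(A) + p(\neg A) - 1 = 0.6 + 0.6 - 1 = 0.2 > 0.
% Coherence -- immunity to all such 'Dutch book' combinations of bets --
% forces degrees of belief to satisfy the ordinary axioms of probability.
```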

5. Does it matter?

As is often said, and with much truth, the explicit study of the foundations of a subject is usually of relatively little practical importance, for common sense and experience over the course of time develop a science more securely than it could possibly be built up by direct application of abstract principles. None the less, I believe that present-day discussions about inference and behaviour, about subjectivism and objectivism, are stimulating practical advances in statistics. Evidence of this is widely scattered, but I shall mention only two examples.

First, it is becoming increasingly accepted that, once an experiment has been done, any analysis or other reaction to the experiment ought to depend on the likelihood-ratio function and on it alone, without any further regard to how the experiment was actually planned or performed. I believe that this doctrine, which contradicts much that was recently most firmly established in statistical theory and practice, is basically correct and that it will soon greatly simplify and strengthen statistics. Let me not falsify history by intimating that appreciation of the likelihood-ratio function as much more than is ordinarily understood by a 'sufficient statistic' originated in the economic outlook and subjectivism. Actually, it was, so far as I know, begun by Barnard [2] and Fisher [6], and quite apart from these ideas. None the less, the economic outlook and the subjectivistic theory of probability lend strong support to the likelihood-ratio doctrine and promise to hasten its acceptance and exploitation.
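The doctrine can be illustrated with a now-standard example that is not in the lecture itself. Suppose a binomial experiment (n tosses fixed in advance, s successes observed) and a negative binomial experiment (tossing until the s-th success, which happens to take n tosses) yield the same counts; their likelihood functions for the success probability theta are proportional.

```latex
% Binomial: n fixed in advance, s successes observed.
L_1(\theta) = \binom{n}{s}\,\theta^{s}(1-\theta)^{n-s},
\qquad
% Negative binomial: sample until the s-th success; n tosses required.
L_2(\theta) = \binom{n-1}{s-1}\,\theta^{s}(1-\theta)^{n-s}.
% As functions of \theta these differ only by a constant factor, so the
% doctrine requires both experiments to support identical inferences
% about \theta -- although their stopping rules, and hence their
% classical significance levels, differ.
```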

Secondly, David Wallace has recently obtained a valuable new insight into the much vexed Behrens-Fisher problem by reconsidering it from the point of view of subjective probability.


1. G A Barnard, Sequential tests in industrial statistics, J. R. Statist. Soc. (Suppl.) 8 (1946), 1-26.

2. G A Barnard, A review of 'Sequential Analysis' by Abraham Wald, J. Amer. Statist. Ass. 42 (1947), 658-664.

3. G A Barnard, Simplified decision functions, Biometrika 41 (1954), 241-251.

4. Bruno de Finetti, La prévision: ses lois logiques, ses sources subjectives, Ann. Inst. H. Poincaré 7 (1937), 1-68.

5. Sir Ronald A Fisher, Statistical methods and scientific induction, J. R. Statist. Soc. (B) 17 (1955), 69-78.

6. Sir Ronald A Fisher, Statistical Methods and Scientific Inference (Oliver and Boyd, Edinburgh, 1956).

7. Carl Friedrich Gauss, Abhandlungen zur Methode der kleinsten Quadrate von Carl Friedrich Gauss (Berlin, 1887). (Translation from Latin by A Börsch and P Simon.)

8. Jerzy Neyman, L'estimation statistique, traitée comme un problème classique de probabilité, in Actualités scientifiques et industrielles no. 739 (Hermann et Cie., Paris, 1938), 25-57.

9. Frank P Ramsey, The Foundations of Mathematics and Other Logical Essays (Kegan Paul, London, 1931).

10. Leonard J Savage, The Foundations of Statistics (John Wiley and Sons, New York, 1954).

JOC/EFR March 2006
