A subject trying to guess the outcome of a sequence of draws without replacement from a known finite population may be given complete feedback (i.e. the identity of the object just guessed at) or only partial feedback (i.e. whether or not the guess was correct). Numerous results are established that shed light on the value of such feedback and on the structure of optimal guessing strategies, which can be extremely subtle in the partial-feedback case. ... The authors develop a method for fairly scoring such experiments: the number of correct guesses is compared with the conditional expected number given the information available at each stage. The authors take a new (and hard) look at a number of variations of an old problem and obtain many exciting results.
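The fair-scoring idea can be illustrated in the simplest setting: a deck of n distinct cards with complete feedback, where any remaining card is an optimal guess and the fair baseline (the conditional expected number of correct guesses) is the harmonic number 1/n + 1/(n-1) + ... + 1. This is a minimal sketch of that classical special case, not the paper's general method; the function names are mine.

```python
import random
from fractions import Fraction

def guess_with_complete_feedback(n, rng):
    """Guess through a shuffled deck of n distinct cards with complete
    feedback: at each stage, guess any card still in the deck (all such
    guesses are equally good here).  Returns the number of correct guesses."""
    deck = list(range(n))
    rng.shuffle(deck)
    remaining = set(deck)
    correct = 0
    for card in deck:
        guess = next(iter(remaining))   # any remaining card is optimal
        if guess == card:
            correct += 1
        remaining.remove(card)
    return correct

# Fair baseline: with k cards left, the guess is correct with probability
# 1/k, so the expected score is the n-th harmonic number.
n = 52
baseline = float(sum(Fraction(1, k) for k in range(1, n + 1)))

rng = random.Random(0)
trials = 20000
mean = sum(guess_with_complete_feedback(n, rng) for _ in range(trials)) / trials
```

For a 52-card deck the baseline is about 4.54 correct guesses, and the simulated mean should agree with it.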
Imagine n cards, labeled 1 through n. Card 1 is at the left, Card 2 is next, and so on. Draw two cards at random (with replacement) and exchange their places; thus no change happens with probability 1/n. This operation is called a random transposition. If many transpositions are made, the row of cards will tend to appear in a random arrangement. The following problem is studied: how many transpositions are needed until the permutation is close to random?
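A single step of this walk is easy to sketch; the known answer to the question (due to Diaconis and Shahshahani) is that about (1/2) n log n transpositions suffice. A minimal simulation, with names of my own choosing:

```python
import random

def random_transposition(cards, rng):
    """One step of the random-transposition walk: pick two positions
    uniformly at random (with replacement) and swap their contents.
    With probability 1/n both picks coincide and nothing changes."""
    n = len(cards)
    i, j = rng.randrange(n), rng.randrange(n)
    cards[i], cards[j] = cards[j], cards[i]

rng = random.Random(1)
cards = list(range(1, 11))   # cards labeled 1..10, in order
for _ in range(100):         # well past (1/2) n log n for n = 10
    random_transposition(cards, rng)
```

After any number of steps the row is still a permutation of the original labels; only its order is randomized.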
There are two ways to perfectly shuffle a deck of 2n cards. Both methods cut the deck in half and interlace perfectly. The out shuffle leaves the original top card on top. The in shuffle leaves the original top card second from the top. Applications to the design of computer networks and card tricks are reviewed. The main result is the determination of the group generated by the two shuffles, for all n.
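The two shuffles can be written down directly; the sketch below also checks the classical fact that eight out shuffles restore a 52-card deck to its original order. (This illustrates the shuffles themselves, not the paper's group-theoretic result.)

```python
def out_shuffle(deck):
    """Perfect out shuffle: cut the deck in half and interlace,
    with the original top card staying on top."""
    half = len(deck) // 2
    top, bottom = deck[:half], deck[half:]
    return [c for pair in zip(top, bottom) for c in pair]

def in_shuffle(deck):
    """Perfect in shuffle: cut and interlace so the original top card
    ends up second from the top."""
    half = len(deck) // 2
    top, bottom = deck[:half], deck[half:]
    return [c for pair in zip(bottom, top) for c in pair]

# Eight out shuffles return a standard 52-card deck to its starting order.
deck = list(range(52))
d = deck
for _ in range(8):
    d = out_shuffle(d)
```

The out shuffle fixes position 0, so on the remaining positions it acts as multiplication by 2 mod 51; its order is 8 because 2^8 = 256 ≡ 1 (mod 51).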
A review of recent work on exchangeability is offered in the spirit of tolerance and respect for competing views that characterized the work of Jack Kiefer and Jerzy Neyman. The theme is this: If you do a piece of honest work, people will find uses for it no matter what paradigm you work in. This theme is illustrated by tracing the fruits of de Finetti's efforts to understand Hume's problem of induction. His resolution involves exchangeability and his well-known theorem that every exchangeable sequence is a mixture of coin tossing. Subsequent generalizations and an application to the study of human vision are sketched.
The authors consider the possibility of generating computer pictures of some natural objects by plotting the fixed points of products of affine maps. They employ a mathematical framework for the study of fractals proposed by Hutchinson and make interesting connections with stochastic processes and classical problems concerning singular measures.
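The random-iteration idea can be sketched with the standard "chaos game": iterating randomly chosen affine contractions drives the orbit onto the attractor, the fixed point of the Hutchinson map. The three maps below (contractions toward the vertices of a triangle, yielding the Sierpinski triangle) are a common illustrative choice, not the paper's specific pictures.

```python
import random

# Three affine contractions z -> (z + v)/2 toward fixed vertices; their
# attractor (the Hutchinson fixed point) is the Sierpinski triangle.
VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def chaos_game(steps, rng):
    """Random iteration: repeatedly apply a randomly chosen contraction;
    the orbit rapidly approaches, then moves around on, the attractor."""
    x, y = 0.0, 0.0
    points = []
    for _ in range(steps):
        vx, vy = rng.choice(VERTICES)       # pick one of the maps ...
        x, y = (x + vx) / 2, (y + vy) / 2   # ... and apply it
        points.append((x, y))
    return points

pts = chaos_game(10000, random.Random(2))
```

Plotting `pts` produces the familiar fractal; every point of the orbit stays inside the convex hull of the vertices.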
A convex polyhedron is said to be a fair die if it has the same chance of landing on each of its facets after a random throw. A first type of such polytopes is the well-known class of isohedra, polytopes whose symmetry group acts transitively on their facets (fair by symmetry). The authors characterize a second type of fair dice (fair by continuity). The question of whether further fair dice exist is still open.
This article illustrates basic statistical techniques for studying coincidences. These include data-gathering methods (informal anecdotes, case studies, observational studies, and experiments) and methods of analysis (exploratory and confirmatory data analysis, special analytic techniques, and probabilistic modeling, both general and special purpose). We develop a version of the birthday problem general enough to include dependence, inhomogeneity, and almost and multiple matches. We review Fisher's techniques for giving partial credit for close matches. We develop a model for studying coincidences involving newly learned words. Once we set aside coincidences having apparent causes, four principles account for large numbers of remaining coincidences: hidden cause; psychology, including memory and perception; multiplicity of endpoints, including the counting of "close'' or nearly alike events as if they were identical; and the law of truly large numbers, which says that when enormous numbers of events and people and their interactions cumulate over time, almost any outrageous event is bound to occur. These sources account for much of the force of synchronicity.
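The classical, homogeneous version of the birthday problem (independent uniform birthdays, exact matches only) is the base case of the generalization described above, and it can be computed exactly:

```python
from fractions import Fraction

def birthday_collision_prob(people, days=365):
    """Exact probability that at least two of `people` individuals share
    a birthday, assuming independent uniform birthdays.  This is the
    classical homogeneous case; the article's version also allows
    dependence, inhomogeneity, and almost and multiple matches."""
    p_no_match = Fraction(1)
    for i in range(people):
        p_no_match *= Fraction(days - i, days)
    return 1 - p_no_match
```

Twenty-three people is the smallest group for which a shared birthday is more likely than not, one reason coincidences are more common than intuition suggests.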
Rarely does a new mathematical result make both the New York Times and the front page of my local paper, and even more rarely is your reviewer asked to speak on commercial radio about a result, but such activity was caused by the preprint of this paper. In layman's terms, it says you should shuffle a deck of cards seven times before playing.
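The "seven shuffles" figure comes from the Bayer-Diaconis formula: after k riffle shuffles of n cards, a permutation with r rising sequences has probability C(2^k + n - r, n) / 2^{kn}, so the total variation distance to uniform can be computed exactly with Eulerian numbers. A sketch of that computation (my own code, following the published formula):

```python
from fractions import Fraction
from math import comb, factorial

def eulerian(n):
    """Eulerian numbers A(n, j): permutations of n letters with j descents,
    equivalently j + 1 rising sequences.  Recurrence:
    A(m, j) = (j+1) A(m-1, j) + (m-j) A(m-1, j-1)."""
    A = [1]
    for m in range(2, n + 1):
        A = [(j + 1) * (A[j] if j < len(A) else 0)
             + (m - j) * (A[j - 1] if j >= 1 else 0)
             for j in range(m)]
    return A

def riffle_tv(k, n=52):
    """Exact total variation distance to uniform after k riffle shuffles
    of n cards, via the Bayer-Diaconis rising-sequence formula."""
    A = eulerian(n)
    uniform = Fraction(1, factorial(n))
    tv = Fraction(0)
    for r in range(1, n + 1):   # A[r-1] permutations have r rising sequences
        p = Fraction(comb(2**k + n - r, n), 2**(k * n))
        tv += A[r - 1] * abs(p - uniform)
    return tv / 2

tv7 = riffle_tv(7)   # the published value is about 0.334
```

The distance stays near 1 for the first few shuffles and then drops sharply, with seven shuffles the conventional cutoff for a 52-card deck.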
Markov chain Monte Carlo (MCMC) methods are iterative stochastic algorithms for sampling from highly multivariate probability distributions. In this paper the authors show, by means of three significant examples, that statisticians and theoretical computer scientists are developing similar ideas useful for the design of MCMC algorithms and for the study of their convergence properties. This is not a survey paper on MCMC, although some pointers to sources of relevant literature are given. All examples involve reversible Markov chains with discrete sample spaces.
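For readers new to the area, here is a minimal reversible-chain sketch in the paper's setting (a discrete sample space): a Metropolis chain with a symmetric nearest-neighbor proposal. This is a generic textbook example, not one of the paper's three, and the names are mine.

```python
import random

def metropolis_step(x, weights, rng):
    """One Metropolis step targeting pi(x) proportional to weights[x] on
    {0, ..., len(weights)-1}.  The proposal (a uniform neighbor on the
    cycle) is symmetric, so the chain is reversible with respect to pi."""
    n = len(weights)
    y = (x + rng.choice([-1, 1])) % n            # propose a neighbor
    if rng.random() < min(1.0, weights[y] / weights[x]):
        return y                                 # accept the proposal
    return x                                     # reject: stay put

weights = [1, 2, 3, 4, 5]                        # target pi = weights / 15
rng = random.Random(3)
x, counts = 0, [0] * 5
steps = 200000
for _ in range(steps):
    x = metropolis_step(x, weights, rng)
    counts[x] += 1
```

After many steps the empirical occupation frequencies approach the target distribution; how fast they do so is exactly the convergence question the paper studies.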
This paper surveys recent progress in the classical subject of Markov chains. Sharp rates of convergence are available for many chains. Examples include shuffling cards, a variety of simulation procedures used in physics and statistical work, and random walk on the chambers of a building. The techniques used are a combination of tools from geometry, PDE, group theory and probability.
Typical large unitary matrices show remarkable patterns in their eigenvalue distribution. These same patterns appear in telephone encryption, the zeros of Riemann's zeta function, a variety of physics problems, and the study of Toeplitz operators. This paper surveys these applications and what is currently known about the patterns.
This paper gives a well-written and interesting overview of one general approach to the study of random walks on groups, and it can also serve as an introduction to the subject. Only finite groups are discussed. The methods involve character theory and the geometry of the group in various generating sets. The paper contains no major new results. The central example is random transpositions on the symmetric group, a subject to which the author has made many important contributions. The paper points out many connections to other branches of mathematics, as well as applications to other sciences, and has a correspondingly long bibliography. It finishes with a list of several open problems. Altogether it is a valuable source of overview information.
In a classical paper, Joe Keller (1986) studied the mechanics of a coin, starting heads up, flipped about a horizontal axis with a distribution of initial conditions, and showed that in many cases the probability of heads is close to 1/2. In the paper under review, the authors allow the more realistic possibility of precession and derive an explicit and very interesting formula for the probability of heads in terms of the angle between the normal to the coin and the angular momentum vector. Both the analysis and comparison with experiment reveal a fascinating, but rather small, bias that can occur in coin flipping.
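Keller's no-precession case is easy to simulate: the coin spins about a horizontal axis at rate omega for time t, and shows heads when the total rotation angle is within a quarter turn of the starting orientation. For fast spins and long flights the angle mod 2π is nearly uniform, so heads comes up about half the time. A simplified sketch (the parameter ranges below are my own illustrative choices):

```python
import math
import random

def keller_flip(rng, omega_range=(100.0, 200.0), t_range=(0.5, 1.0)):
    """One flip in a simplified version of Keller's model: the coin starts
    heads up and rotates about a horizontal axis at angular velocity omega
    for time t.  It shows heads when the rotation angle, mod 2*pi, lies
    within a quarter turn of the starting (heads-up) orientation."""
    omega = rng.uniform(*omega_range)   # spin rate in rad/s (assumed range)
    t = rng.uniform(*t_range)           # flight time in s (assumed range)
    theta = (omega * t) % (2 * math.pi)
    return theta < math.pi / 2 or theta > 3 * math.pi / 2

rng = random.Random(4)
flips = 200000
heads = sum(keller_flip(rng) for _ in range(flips))
```

The heads fraction comes out very close to 1/2; the reviewed paper shows how precession perturbs this slightly.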
The authors study the distribution of the number of fixed points of a random permutation in various families of finite permutation groups.
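For the symmetric group itself (the uniform random permutation, the starting point for the families studied), the distribution of fixed points can be computed exactly from derangement counts, and it is approximately Poisson with mean 1. A sketch of that base case:

```python
from fractions import Fraction
from math import comb, factorial

def fixed_point_distribution(n):
    """Exact distribution of the number of fixed points of a uniform
    random permutation of n letters, via derangement numbers:
    P(k fixed points) = C(n, k) * D(n - k) / n!."""
    D = [1, 0]                          # derangements: D(0) = 1, D(1) = 0
    for m in range(2, n + 1):
        D.append((m - 1) * (D[m - 1] + D[m - 2]))
    total = factorial(n)
    return [Fraction(comb(n, k) * D[n - k], total) for k in range(n + 1)]

dist = fixed_point_distribution(10)
mean = sum(k * p for k, p in enumerate(dist))
```

Already at n = 10 the mean is exactly 1 and the probability of no fixed point is within 10^-7 of 1/e, the Poisson(1) limit.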
This article describes the surprising connection between two Markov chains that, at first glance, have no relationship to each other. The first is the "carries process'' discovered and analyzed by J. M. Holte (1997). The second is the Markov chain that models riffle shuffles of a deck of cards.
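The carries process is easy to write down for the smallest case: when two long random base-b numbers are added column by column, the carry forms a two-state Markov chain, and its stationary distribution is the Eulerian distribution A(2, ·)/2! = (1/2, 1/2) — the same Eulerian numbers that govern riffle shuffles, which is the connection the article explains. A sketch of that case (function name mine):

```python
from fractions import Fraction

def carries_transition(base=10):
    """Transition probabilities of the carry when two independent uniform
    base-`base` digits are added column by column (Holte's carries process
    with n = 2 summands).  Returns P with P[c][c2] = P(next carry = c2 |
    current carry = c) for c, c2 in {0, 1}."""
    P = [[Fraction(0), Fraction(0)] for _ in range(2)]
    for c in (0, 1):
        for d1 in range(base):
            for d2 in range(base):
                nxt = 1 if d1 + d2 + c >= base else 0
                P[c][nxt] += Fraction(1, base * base)
    return P

P = carries_transition()
# Stationary distribution of a two-state chain: pi1 = P01 / (P01 + P10).
pi1 = P[0][1] / (P[0][1] + P[1][0])
```

The exact stationary carry probability is 1/2, matching Holte's Eulerian stationary law for two summands.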