Several Words About Distributions

On every page of the program, the binomial distribution is used to calculate probabilities. On the "Miscellaneous" page, the normal distribution is used to evaluate your possible poker results. Below is basic information about the distributions used (source: Wikipedia).

In probability theory and statistics, the binomial distribution is the discrete probability distribution of the number of successes in a sequence of N independent yes/no experiments, each of which yields success with probability p. Such a success/failure experiment is also called a Bernoulli experiment or Bernoulli trial; when N = 1, the binomial distribution reduces to the Bernoulli distribution. The binomial distribution is the basis for the popular binomial test of statistical significance.
A typical example is the following: assume 5% of the population is green-eyed. You pick 500 people randomly. How likely is it that you get 30 or more green-eyed people? The number of green-eyed people you pick is a random variable X which follows a binomial distribution with N = 500 and p = 0.05 (when picking the people with replacement). We are interested in the probability Pr[X >= 30].
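The green-eyed example can be checked numerically. Below is a minimal sketch using only Python's standard library; the values N = 500, p = 0.05, and the threshold 30 are taken from the example above:

```python
from math import comb

N, p = 500, 0.05   # 500 people picked, each green-eyed with probability 5%
threshold = 30     # we want Pr[X >= 30]

# Pr[X >= 30] = 1 - Pr[X <= 29]: sum the binomial probabilities for k = 0..29
pr_less = sum(comb(N, k) * p**k * (1 - p)**(N - k) for k in range(threshold))
pr_at_least = 1 - pr_less

print(f"Pr[X >= {threshold}] = {pr_at_least:.4f}")
```

The mean of B(500, 0.05) is 25, so 30 or more green-eyed people is somewhat less likely than not, but far from rare.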
In general, if the random variable X follows the binomial distribution with parameters N and p, we write X ~ B(N, p). The probability of getting exactly k successes is given by the probability mass function:

    Pr(X = k) = C(N, k) * p^k * (1 - p)^(N - k),   for k = 0, 1, ..., N

The binomial coefficient "N choose k" (also denoted C(N, k)) gives the distribution its name. The formula can be understood as follows: we want k successes (probability p^k) and N - k failures (probability (1-p)^(N-k)). However, the k successes can occur anywhere among the N trials, and there are C(N, k) different ways of distributing k successes in a sequence of N trials.
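The probability mass function can be written directly in code. A minimal sketch in Python (the function name binomial_pmf is our own, not from the program):

```python
from math import comb

def binomial_pmf(k: int, N: int, p: float) -> float:
    """Pr(X = k) for X ~ B(N, p): C(N, k) ways to place the k successes,
    each arrangement having probability p**k * (1 - p)**(N - k)."""
    return comb(N, k) * p**k * (1 - p)**(N - k)

# Sanity check: the probabilities over all k = 0..N must sum to 1
total = sum(binomial_pmf(k, 10, 0.3) for k in range(11))
print(total)
```

With N = 1 the function reduces to the Bernoulli distribution: binomial_pmf(1, 1, p) is simply p.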


The normal distribution, also called Gaussian distribution (named after Carl Friedrich Gauss, a German mathematician, although Gauss was not the first to work with it), is an extremely important probability distribution in many fields. It is a family of distributions of the same general form, differing in their location and scale parameters: the mean ("average") and standard deviation ("variability"), respectively. The standard normal distribution is the normal distribution with a mean of zero and a standard deviation of one. It is often called the bell curve because the graph of its probability density resembles a bell.
The normal distribution also arises in many areas of statistics: for example, the sampling distribution of the mean is approximately normal, even if the distribution of the population the sample is taken from is not normal. The normal distribution is the most widely used family of distributions in statistics and many statistical tests are based on the assumption of normality. In probability theory, normal distributions arise as the limiting distributions of several continuous and discrete families of distributions.
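The claim that the sampling distribution of the mean is approximately normal even for a non-normal population can be illustrated with a quick simulation. This is a sketch only; the sample size, number of trials, and underlying uniform distribution are arbitrary choices for illustration:

```python
import random
from statistics import mean, stdev

random.seed(0)  # fixed seed so the run is reproducible

# Draw many sample means from a (decidedly non-normal) uniform distribution
sample_means = [mean(random.random() for _ in range(30)) for _ in range(5000)]

# If the sample means are roughly normal, about 68% should fall
# within one standard deviation of their overall mean
m, s = mean(sample_means), stdev(sample_means)
within_1sd = sum(1 for x in sample_means if abs(x - m) <= s) / len(sample_means)
print(f"fraction within 1 sd: {within_1sd:.3f}")
```

For the uniform distribution itself that fraction would be about 58%, so the shift toward 68% shows the averaging pulling the shape toward a normal curve.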
The cumulative distribution function is defined as the probability that a variable X has a value less than or equal to x, and it is expressed in terms of the density function f as:

    F(x) = Pr(X <= x) = integral of f(t) dt from -infinity to x,
    where f(t) = exp(-(t - mu)^2 / (2 * sigma^2)) / (sigma * sqrt(2 * pi))

In practice, one often assumes that data come from an approximately normally distributed population. If that assumption is justified, then about 68% of the values lie within 1 standard deviation of the mean, about 95% within 2 standard deviations, and about 99.7% within 3 standard deviations. This is known as the "68-95-99.7 rule" or the "empirical rule".
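The standard normal cumulative distribution function can be expressed with the error function from Python's math module, which makes the 68-95-99.7 rule easy to verify. A minimal sketch:

```python
from math import erf, sqrt

def std_normal_cdf(x: float) -> float:
    """Pr(X <= x) for the standard normal distribution (mean 0, sd 1)."""
    return 0.5 * (1 + erf(x / sqrt(2)))

# Probability of landing within k standard deviations of the mean
for k in (1, 2, 3):
    prob = std_normal_cdf(k) - std_normal_cdf(-k)
    print(f"within {k} sd: {prob:.4f}")
# within 1 sd: 0.6827
# within 2 sd: 0.9545
# within 3 sd: 0.9973
```

For a general normal distribution with mean mu and standard deviation sigma, evaluate std_normal_cdf((x - mu) / sigma).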

Back to index...