Derivation of the Normal distribution

The normal distribution, or normal density, is well known as the bell-shaped curve and is widely used in probability and statistics. In standard textbooks it is defined as a two-parameter family of curves: \[ f(z)=\frac{1}{\sigma\sqrt{2\pi}}e^{-\tfrac{1}{2}\left(\frac{z-\mu}{\sigma}\right)^{2}} \] The parameter $\mu$ determines the location of its peak and the parameter $\sigma$ the spread of the distribution. The standard normal distribution has $\mu=0$ and $\sigma=1$.
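The density above can be sketched numerically. The following Python snippet (the function name `normal_pdf` is our own, not from the source) evaluates the formula and checks with a crude Riemann sum that the total probability is close to 1:

```python
import math

def normal_pdf(z, mu=0.0, sigma=1.0):
    """Density of the normal distribution N(mu, sigma^2)."""
    coef = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coef * math.exp(-0.5 * ((z - mu) / sigma) ** 2)

# The standard normal (mu=0, sigma=1) peaks at z = mu,
# with height 1/sqrt(2*pi) ~ 0.3989
peak = normal_pdf(0.0)

# Crude Riemann sum over [-10, 10]: total probability should be ~1
dz = 0.001
total = sum(normal_pdf(-10.0 + i * dz) * dz for i in range(20000))
```

Note that larger $\sigma$ flattens and widens the curve while keeping the total area equal to 1, which is what "spread" means here.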

The first occurrence of the normal distribution was in a 1733 article [5] by Abraham de Moivre, as an approximation to the binomial distribution. We show de Moivre's derivation in modern terminology. Suppose that the discrete random variable $X$ has a binomial distribution, $X \sim B(n,p)$, with parameters $n=1,2,3,\ldots$ and $0<p<1$. Then $X$ has the following probability mass function (pmf): \[ f(x)=\binom{n}{x}p^x(1-p)^{n-x} \] A discrete random variable has a binomial distribution if it counts the number of successes in $n$ independent trials, each with the same probability of success $p$.
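As a quick sanity check of the pmf, the following sketch (our own helper `binomial_pmf`, written with Python's standard-library `math.comb`) verifies that the probabilities sum to 1 and that the mean equals $np$:

```python
import math

def binomial_pmf(x, n, p):
    """P(X = x) for X ~ B(n, p)."""
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 10, 0.3

# The pmf sums to 1 over x = 0..n
total = sum(binomial_pmf(x, n, p) for x in range(n + 1))

# The expected value of B(n, p) is n*p
mean = sum(x * binomial_pmf(x, n, p) for x in range(n + 1))
```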

When we draw the histogram of a binomial distribution we see that it increases from the left to a maximum value and then decreases. For increasing values of $n$ the maximum value becomes lower and shifts to the right. The value of $x$ at the peak is approximately the expected value $np$.
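De Moivre's result is that, near this peak, the binomial pmf is well approximated by a normal density with mean $np$ and standard deviation $\sqrt{np(1-p)}$. A minimal numerical illustration (parameter values $n=1000$, $p=0.5$ are our own choice for the demo):

```python
import math

def binomial_pmf(x, n, p):
    """P(X = x) for X ~ B(n, p)."""
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

def normal_pdf(z, mu, sigma):
    """Density of N(mu, sigma^2)."""
    return math.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# de Moivre's approximation: for large n, B(n, p) near its peak
# is close to a normal curve with mu = n*p, sigma = sqrt(n*p*(1-p)).
n, p = 1000, 0.5
mu = n * p
sigma = math.sqrt(n * p * (1 - p))

x = int(mu)  # a point near the peak
exact = binomial_pmf(x, n, p)
approx = normal_pdf(x, mu, sigma)
rel_error = abs(exact - approx) / exact
```

For these values the relative error at the peak is well under one percent, and it shrinks as $n$ grows.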

Carl Friedrich Gauss later derived the normal distribution as the probability distribution of random measurement errors.


Copyright 2013 Jacq Krol. All rights reserved.