# Probability Theory Mastering
## Unbiased estimation

Since any estimate depends on the realization of the sample, each new sample yields a different estimate. Consequently, the estimate is itself a random variable with its own distribution and characteristics. That is why we need to introduce certain **quality criteria** in order to understand which estimates are better, which are worse, and how well an estimate matches our expectations.

Usually in practice, the obtained estimate is checked for compliance with three properties: **unbiasedness**, **consistency**, and **efficiency**.

### Unbiased estimates

In statistics, **unbiased estimation (also called non-biased)** refers to a method of estimating a population parameter such that the expected value of the estimator is equal to the true value of the parameter. In other words, the estimate is not systematically too high or too low on average.
You can see the strict mathematical formulation below:
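In symbols: an estimator $\hat{\theta}$ of a parameter $\theta$ is called unbiased if

$$\mathbb{E}\big[\hat{\theta}\big] = \theta,$$

where the expectation is taken over the distribution of the sample.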

To better understand this property, let's take a look at estimating the mean and variance of a Gaussian population using the method of moments. First, let's estimate the mean:
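With $x_1, \dots, x_m$ denoting a sample from $\mathcal{N}(\mu, \sigma^2)$, the method-of-moments estimate of the mean is the sample mean, and its expectation follows by linearity:

$$\bar{x} = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad \mathbb{E}[\bar{x}] = \frac{1}{m}\sum_{i=1}^{m}\mathbb{E}[x_i] = \frac{1}{m}\cdot m\mu = \mu.$$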

Thus, it can be concluded that estimating the mean of a Gaussian population by the sample mean is unbiased, since the expected value of the estimate equals the true mean. Now let's estimate the population's variance by computing the sample variance:
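With $\bar{x}$ denoting the sample mean of $x_1, \dots, x_m$, the sample variance and its expectation (a standard computation) are:

$$S^2 = \frac{1}{m}\sum_{i=1}^{m}(x_i - \bar{x})^2, \qquad \mathbb{E}\big[S^2\big] = \frac{m-1}{m}\,\sigma^2 = \sigma^2 - \frac{\sigma^2}{m}.$$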

Thus, we got a biased estimate: the mathematical expectation of the sample variance is less than the true variance. Note, however, that the more samples we have, the closer the expectation gets to the true variance, because the bias is proportional to 1/m and therefore converges to zero.

An estimate that is biased, but whose bias converges to zero as the number of samples tends to infinity, is called **asymptotically unbiased**. To build a truly unbiased estimate of the population's variance, we can use the following formula:
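Dividing by $m - 1$ instead of $m$ removes the bias: with $\bar{x}$ the sample mean of $x_1, \dots, x_m$,

$$\tilde{S}^2 = \frac{1}{m-1}\sum_{i=1}^{m}(x_i - \bar{x})^2, \qquad \mathbb{E}\big[\tilde{S}^2\big] = \sigma^2.$$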

The estimation obtained using the above formula is called the **adjusted sample variance**.
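As a quick sanity check, the bias and its correction can be simulated. The sketch below (using NumPy, with illustrative values for the mean, variance, and sample size) averages both estimators over many repeated samples:

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma2 = 5.0, 4.0      # true mean and variance (illustrative values)
m, trials = 10, 200_000    # sample size and number of repeated samples

# Draw `trials` independent Gaussian samples of size m each
samples = rng.normal(mu, np.sqrt(sigma2), size=(trials, m))

# Sample variance with 1/m normalization (biased estimator)
biased = samples.var(axis=1, ddof=0).mean()
# Adjusted sample variance with 1/(m - 1) normalization (unbiased)
adjusted = samples.var(axis=1, ddof=1).mean()

print(f"biased:   {biased:.2f} (theory: {(m - 1) / m * sigma2:.2f})")
print(f"adjusted: {adjusted:.2f} (theory: {sigma2:.2f})")
```

The averaged biased estimator lands near $(m-1)/m \cdot \sigma^2$ rather than $\sigma^2$, while the adjusted one lands near $\sigma^2$, matching the derivation above.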

Note

The sample variance is biased only because we use the sample mean in the calculation. If we know the mathematical expectation exactly and want to estimate the variance, we can use this expectation instead of the sample mean, and the resulting estimate will be unbiased.
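In that case, writing $\mu$ for the known expectation, the estimate

$$\hat{\sigma}^2 = \frac{1}{m}\sum_{i=1}^{m}(x_i - \mu)^2$$

satisfies $\mathbb{E}\big[\hat{\sigma}^2\big] = \frac{1}{m}\sum_{i=1}^{m}\mathbb{E}\big[(x_i - \mu)^2\big] = \sigma^2$, so no adjustment is needed.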

It is also important to mention that the above formulas for estimating the mean and variance can be used not only for the Gaussian distribution but for any distribution with a finite mathematical expectation and variance. Such estimates will also be unbiased.

Suppose that the method-of-moments estimate of a parameter of a certain distribution equals the sample mean. Can we say that this estimate is unbiased?
