Efficient Estimation | Estimation of Population Parameters
Efficient Estimation

Efficient estimators are estimators that achieve the smallest possible variance among all unbiased estimators. In other words, an efficient estimator is unbiased and has the smallest possible standard error among all unbiased estimators. Formally, this can be described as follows:
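In standard notation, with theta the unknown parameter and theta-hat an estimator built from a sample of size n, the definition can be written as:

$$
\mathbb{E}\big[\hat{\theta}\big] = \theta
\qquad\text{and}\qquad
\mathrm{Var}\big(\hat{\theta}\big) \le \mathrm{Var}\big(\tilde{\theta}\big)
\quad\text{for every unbiased estimator } \tilde{\theta} \text{ of } \theta.
$$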

Note

An efficient estimator is always unique, i.e. two distinct estimators cannot both be efficient for the same parameter at the same time.

Why do we need efficient estimators, and how do they differ from consistent estimators?

  1. For a consistent estimator, the variance tends to zero as the sample size grows. In real problems, however, the sample size is limited, and we need to compare estimators' variances at a specific sample size. For this, we need to determine whether an estimator is efficient;
  2. Even with a large sample, there is the rate of convergence to consider. In simple words, the rate of convergence determines how many samples are needed before the estimate is already very close to the true parameter. When comparing two consistent estimators, preference should always be given to the one with the smaller variance, as the simulation sketch after this list illustrates.
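As an illustrative sketch (not part of the original text), the simulation below compares two unbiased, consistent estimators of the mean of a Gaussian sample at a fixed sample size: the sample mean and the sample median. Both are centered on the true mean, but the sample mean has the smaller variance, so it is the preferable estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 50           # fixed, finite sample size
trials = 20_000  # number of simulated samples
mu, sigma = 2.0, 1.0

# Draw many Gaussian samples and compute two estimators of mu on each:
# the sample mean and the sample median (both unbiased here).
samples = rng.normal(mu, sigma, size=(trials, n))
means = samples.mean(axis=1)
medians = np.median(samples, axis=1)

print("variance of the sample mean:  ", means.var())    # close to sigma^2 / n = 0.02
print("variance of the sample median:", medians.var())  # close to (pi/2) * sigma^2 / n, about 0.031
```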

Criterion of efficiency

As in the case of consistency, it is sometimes difficult to verify efficiency directly from the definition. That is why we will use a criterion for the efficiency of an estimator:
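A commonly used form of this criterion comes from the Cramér–Rao inequality: an unbiased estimator theta-hat is efficient if its variance attains the Cramér–Rao lower bound, which happens exactly when the score function factors through the estimator:

$$
\frac{\partial}{\partial\theta}\ln L(X_1,\dots,X_n;\theta)
= a(\theta)\,\big(\hat{\theta}(X_1,\dots,X_n) - \theta\big),
$$

where a(theta) > 0 may depend on the parameter and on n, but not on the sample itself. Equivalently, the variance of theta-hat equals 1/(n I(theta)), where I(theta) is the Fisher information of a single observation.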

Sample mean estimation

Let's prove that the sample mean and the sample variance with known expectation are efficient estimators of the Gaussian distribution parameters. First, let's construct the log-likelihood function of the Gaussian distribution:
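For a sample x_1, ..., x_n drawn from N(mu, sigma^2), the log-likelihood is:

$$
\ln L(x_1,\dots,x_n;\mu,\sigma^2)
= -\frac{n}{2}\ln\big(2\pi\sigma^2\big)
  - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2 .
$$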

Let's now take the partial derivative of the log-likelihood with respect to the parameter mu:
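$$
\frac{\partial}{\partial\mu}\ln L
= \frac{1}{\sigma^2}\sum_{i=1}^{n}(x_i-\mu)
= \frac{n}{\sigma^2}\big(\bar{x}-\mu\big),
\qquad \bar{x}=\frac{1}{n}\sum_{i=1}^{n}x_i .
$$

This has exactly the form a(mu) * (estimator - mu) with a(mu) = n / sigma^2 and the estimator equal to the sample mean.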

According to the efficiency criterion, we see that the sample mean is indeed an efficient estimator of the parameter mu.

Sample variance estimation

Let us now derive the efficient estimator of the variance of the Gaussian distribution in the same way:
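Treating sigma^2 as the parameter (with mu assumed known) and differentiating the same log-likelihood with respect to sigma^2:

$$
\frac{\partial}{\partial\sigma^2}\ln L
= -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^{n}(x_i-\mu)^2
= \frac{n}{2\sigma^4}\left(\frac{1}{n}\sum_{i=1}^{n}(x_i-\mu)^2 - \sigma^2\right).
$$

The criterion is satisfied with a(sigma^2) = n / (2 sigma^4) and the estimator (1/n) * sum of (x_i - mu)^2, i.e. the sample variance with known expectation.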

From this result we see that the sample variance with a known mathematical expectation satisfies the criterion. It is worth recalling that we have already shown that this estimator is unbiased, so we can consider it efficient.
However, in practice we rarely know the true value of the expectation. Therefore, it is usually best to use the adjusted sample variance as an estimator, since it is unbiased and consistent, although it is not efficient.
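As a quick illustrative sketch (again, not from the original text), the snippet below compares the variance estimator with a known mean to the adjusted (Bessel-corrected) sample variance. On average both are close to the true sigma^2, i.e. both are unbiased, but only the known-mean version is efficient.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma2, n, trials = 0.0, 4.0, 10, 50_000

x = rng.normal(mu, np.sqrt(sigma2), size=(trials, n))

# Variance with known expectation: (1/n) * sum((x_i - mu)^2)
known_mean_var = ((x - mu) ** 2).mean(axis=1)

# Adjusted (Bessel-corrected) sample variance: divides by n - 1
adjusted_var = x.var(axis=1, ddof=1)

# Both averages should be close to the true sigma^2 = 4.
print("known-mean estimator, average:", known_mean_var.mean())
print("adjusted estimator, average:  ", adjusted_var.mean())
```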
