Maximum likelihood estimation is built on the likelihood function: a function that gives the probability (or probability density) of a set of recorded observations in terms of one or more unknown parameters. The parameter value at which this function attains its maximum is of particular importance, as it is the "most likely" value of that parameter given the data.
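For independent observations x_1, ..., x_n drawn from a density f(x; theta), the likelihood function is

\[ L(\theta \mid x_1, \ldots, x_n) \;=\; \prod_{i=1}^{n} f(x_i; \theta), \]

and the maximum likelihood estimate is the value of theta at which L (equivalently, its logarithm) is largest.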
The following example demonstrates a method for finding the maximum likelihood estimate of the scale parameter c of a Pareto distribution.
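The sketches below assume a common parameterization of the Pareto density, with known lower bound a and unknown parameter c:

\[ f(x; c) \;=\; \frac{c\, a^{c}}{x^{c+1}}, \qquad x \ge a. \]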
Generate a sample from the Pareto distribution that will act as the provided data.
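A minimal sketch of this step in Python, using inverse-CDF sampling under the parameterization above; the values c = 3, a = 1, and the sample size of 4 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

a_min = 1.0   # known lower bound of the Pareto distribution (assumed)
c_true = 3.0  # parameter value used only to generate the illustrative sample (assumed)
n = 4         # number of observations

# Inverse-CDF sampling: if U ~ Uniform(0, 1), then a * U**(-1/c) follows the Pareto density above.
u = rng.uniform(size=n)
sample = a_min * u ** (-1.0 / c_true)
print(sample)
```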
Compute the likelihood function of a Pareto distribution with unknown scale parameter, assuming only one sample has been provided.
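Under the assumed density, the likelihood for a single observation x is the density itself, viewed as a function of c:

\[ L(c) \;=\; \frac{c\, a^{c}}{x^{c+1}}. \]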
Compute the log likelihood function, assuming four samples have been provided.
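For four observations x_1, ..., x_4, the log likelihood under the same assumption is

\[ \ell(c) \;=\; \sum_{i=1}^{4} \ln\!\frac{c\, a^{c}}{x_i^{c+1}} \;=\; 4\ln c + 4c\ln a - (c+1)\sum_{i=1}^{4}\ln x_i. \]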
Compute the score (the derivative of the log likelihood function with respect to c), under the same assumptions.
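Differentiating the log likelihood above with respect to c gives the score

\[ U(c) \;=\; \frac{d\ell}{dc} \;=\; \frac{4}{c} + 4\ln a - \sum_{i=1}^{4}\ln x_i \;=\; \frac{4}{c} - \sum_{i=1}^{4}\ln\frac{x_i}{a}. \]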
Compute the score again, this time substituting the generated sample data.
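A Python sketch of this substitution; the sample values below are hypothetical placeholders standing in for the data generated earlier.

```python
import numpy as np

a_min = 1.0                                  # known lower bound (assumed)
sample = np.array([1.31, 2.05, 1.10, 1.62])  # hypothetical sample values for illustration

def score(c, data, a=a_min):
    """Score of the assumed Pareto model: derivative of the log likelihood with respect to c."""
    return len(data) / c - np.sum(np.log(data / a))

# With the data substituted in, the score is a function of c alone.
print(score(2.0, sample))
```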
Solve for c, such that the score is equal to zero.
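Setting the score to zero and solving for c gives the closed-form estimate

\[ \hat{c} \;=\; \frac{4}{\sum_{i=1}^{4}\ln(x_i/a)}, \]

the stationary point of the log likelihood.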
Calculate the maximum likelihood estimate (this should give us the same result).
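A Python sketch of this check: maximizing the log likelihood numerically and comparing the result with the closed-form solution. The sample values are again hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

a_min = 1.0
sample = np.array([1.31, 2.05, 1.10, 1.62])  # hypothetical sample values
n = len(sample)

def log_likelihood(c):
    return n * np.log(c) + n * c * np.log(a_min) - (c + 1) * np.sum(np.log(sample))

# Maximize the log likelihood by minimizing its negative over c > 0.
res = minimize_scalar(lambda c: -log_likelihood(c), bounds=(1e-6, 100.0), method="bounded")
c_hat_numeric = res.x
c_hat_closed = n / np.sum(np.log(sample / a_min))
print(c_hat_numeric, c_hat_closed)  # the two values should agree
```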
Compute the likelihood ratio statistic.
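For a candidate value of c, the likelihood ratio statistic compares the log likelihood at its maximum with the log likelihood at c:

\[ W(c) \;=\; 2\bigl[\ell(\hat{c}) - \ell(c)\bigr]. \]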
The likelihood ratio statistic is asymptotically distributed as a chi-square random variable with 1 degree of freedom. Using this fact, an approximate 95% confidence interval for c consists of all values of c for which the likelihood ratio statistic does not exceed the 95th percentile of that chi-square distribution; by construction, this interval contains the maximum likelihood estimate.
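A Python sketch of the interval computation, reusing the hypothetical sample from the earlier snippets; the cutoff is the 95th percentile of the chi-square distribution with 1 degree of freedom (about 3.84).

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

a_min = 1.0
sample = np.array([1.31, 2.05, 1.10, 1.62])  # hypothetical sample values
n = len(sample)

def log_likelihood(c):
    return n * np.log(c) + n * c * np.log(a_min) - (c + 1) * np.sum(np.log(sample))

c_hat = n / np.sum(np.log(sample / a_min))   # maximum likelihood estimate

def lr_statistic(c):
    return 2.0 * (log_likelihood(c_hat) - log_likelihood(c))

cutoff = chi2.ppf(0.95, df=1)                # 95th percentile of chi-square(1), about 3.84

# The interval endpoints are the values of c, one on each side of the estimate,
# at which the likelihood ratio statistic reaches the cutoff.
lower = brentq(lambda c: lr_statistic(c) - cutoff, 1e-6, c_hat)
upper = brentq(lambda c: lr_statistic(c) - cutoff, c_hat, 100.0)
print(lower, c_hat, upper)
```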