In statistics, Wilks' theorem offers an asymptotic distribution of the log-likelihood ratio statistic, which can be used to produce confidence intervals for maximum-likelihood estimates or as a test statistic for performing the likelihood-ratio test. Statistical tests (such as hypothesis tests) generally require knowledge of the probability distribution of the test statistic. This is often a problem for likelihood ratios, where the probability distribution can be very difficult to determine.
| Attributes | Values |
|---|---|
| has abstract | In statistics, Wilks' theorem offers an asymptotic distribution of the log-likelihood ratio statistic, which can be used to produce confidence intervals for maximum-likelihood estimates or as a test statistic for performing the likelihood-ratio test. Statistical tests (such as hypothesis tests) generally require knowledge of the probability distribution of the test statistic. This is often a problem for likelihood ratios, where the probability distribution can be very difficult to determine. A convenient result by Samuel S. Wilks says that as the sample size n approaches ∞, the distribution of the test statistic −2 log(Λ) asymptotically approaches the chi-squared (χ²) distribution under the null hypothesis H₀. Here, Λ denotes the likelihood ratio, and the χ² distribution has degrees of freedom equal to the difference in dimensionality of Θ and Θ₀, where Θ is the full parameter space and Θ₀ is the subset of the parameter space associated with H₀. This result means that for large samples and a great variety of hypotheses, a practitioner can compute the likelihood ratio Λ for the data and compare −2 log(Λ) to the χ² value corresponding to a desired statistical significance as an approximate statistical test. The theorem no longer applies when any one of the estimated parameters is at its upper or lower limit: Wilks' theorem assumes that the "true" but unknown values of the estimated parameters lie within the interior of the supported parameter space. The likelihood maximum may no longer have the assumed ellipsoidal shape if the maximum value of the population likelihood function occurs at some boundary value of one of the parameters, i.e. on an edge of the parameter space. In that event, the likelihood test will still be valid and optimal as guaranteed by the Neyman–Pearson lemma, but the significance (the p-value) cannot be reliably estimated using the chi-squared distribution with the number of degrees of freedom prescribed by Wilks. (en) |
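The procedure the abstract describes can be sketched with a small worked example. The code below is a minimal illustration, not part of the source: it tests H₀: p = 0.5 for a binomial model (a fair coin), where the full parameter space Θ has dimension 1 and the null subset Θ₀ has dimension 0, so by Wilks' theorem −2 log(Λ) is approximately χ² with 1 degree of freedom. The function names (`wilks_lrt`, `binom_loglik`) are hypothetical, and the χ²₁ survival function is computed via the identity P(χ²₁ > x) = erfc(√(x/2)) to keep the example dependency-free.

```python
import math

def binom_loglik(k, n, p):
    # Log-likelihood of k successes in n Bernoulli(p) trials.
    # The binomial coefficient is omitted: it cancels in the likelihood ratio.
    return k * math.log(p) + (n - k) * math.log(1 - p)

def wilks_lrt(k, n, p0=0.5):
    """Likelihood-ratio test of H0: p = p0 (requires 0 < k < n)."""
    p_hat = k / n  # MLE over the full parameter space Theta
    # Test statistic D = -2 log(Lambda) = 2 * (loglik at MLE - loglik under H0)
    D = 2 * (binom_loglik(k, n, p_hat) - binom_loglik(k, n, p0))
    # Wilks: D is asymptotically chi-squared with dim(Theta) - dim(Theta0) = 1 df.
    # Survival function of chi2 with 1 df: P(X > x) = erfc(sqrt(x / 2)).
    p_value = math.erfc(math.sqrt(D / 2))
    return D, p_value

# 60 heads in 100 flips: D is about 4.03, p-value about 0.045,
# so H0: p = 0.5 is rejected at the 5% level.
D, p_value = wilks_lrt(60, 100)
print(D, p_value)
```

Note the caveat from the abstract: this χ² calibration relies on the true parameter lying in the interior of the parameter space, so it would not be trustworthy if, say, the null value sat on a boundary such as p = 0.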