In [[statistics]], '''Cook's distance''' or '''Cook's ''D''''' is a commonly used estimate of the influence of a data point when performing least squares [[regression analysis]].<ref>{{cite book |last1=Mendenhall |first1=William |last2=Sincich |first2=Terry |title=A Second Course in Statistics: Regression Analysis |edition=5th |year=1996 |publisher=Prentice-Hall |location=Upper Saddle River, NJ |isbn=0-13-396821-9 |page=422 |quote=A measure of overall influence an outlying observation has on the estimated <math>\beta</math> coefficients was proposed by R. D. Cook (1979). Cook's distance, ''D<sub>i</sub>'', is calculated...}}</ref> In a practical [[ordinary least squares]] analysis, Cook's distance can be used in several ways: to indicate data points that are particularly worth checking for validity, or to indicate regions of the design space where it would be good to be able to obtain more data points. It is named after the American statistician [[R. Dennis Cook]], who introduced the concept in 1977.<!-- the expression "Cook's Distance" appears at least as early as 1979 -->
==Definition==

Cook's distance measures the effect of deleting a given observation. Data points with large residuals ([[outlier]]s) and/or high [[Leverage (statistics)|leverage]] may distort the outcome and accuracy of a regression. Points with a large Cook's distance are considered to merit closer examination in the analysis.

Cook's distance <math>D_i</math> of observation <math>i</math> (for <math>i = 1, \dots, n</math>) is defined as the sum of squared changes in the fitted values when observation <math>i</math> is deleted, scaled by <math>p \, \mathrm{MSE}</math>:

:<math>D_i = \frac{ \sum_{j=1}^n (\hat Y_j - \hat Y_{j(i)})^2 }{p \ \mathrm{MSE}} .</math>

The following are algebraically equivalent expressions:

:<math>D_i = \frac{e_i^2}{p \ \mathrm{MSE}}\left[\frac{h_{ii}}{(1-h_{ii})^2}\right],</math>

:<math>D_i = \frac{ (\hat \beta - \hat {\beta}^{(-i)})^T (X^T X) (\hat \beta - \hat {\beta}^{(-i)}) } {p \, s^2},</math>

where <math>\hat {\beta}^{(-i)}</math> is the coefficient vector estimated with observation ''i'' omitted and <math>s^2 = \mathrm{MSE}</math> is the estimated error variance.

In the above equations:

:<math>\hat Y_j</math> is the prediction from the full regression model for observation ''j'';

:<math>\hat Y_{j(i)}</math> is the prediction for observation ''j'' from a refitted regression model in which observation ''i'' has been omitted;

:<math>h_{ii}</math> is the ''i''-th diagonal element of the [[hat matrix]] <math>\mathbf{X}\left(\mathbf{X}^T\mathbf{X}\right)^{-1}\mathbf{X}^T</math>;

:<math>e_i</math> is the crude residual (i.e., the difference between the observed value and the value fitted by the proposed model);

:<math>\mathrm{MSE}</math> is the [[mean square error]] of the regression model;

:<math>p</math> is the number of fitted parameters in the model.
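
The equivalence of the deletion-based definition and the leverage-based expression can be checked numerically. The following is a minimal sketch in Python (the small simulated dataset and all variable names are invented for illustration; it assumes NumPy is available):

```python
import numpy as np

# Invented illustrative data: simple linear regression y = b0 + b1*x + noise
rng = np.random.default_rng(0)
x = np.arange(10.0)
y = 2.0 + 3.0 * x + rng.normal(size=10)
X = np.column_stack([np.ones_like(x), x])  # design matrix (intercept + slope)
n, p = X.shape

beta = np.linalg.lstsq(X, y, rcond=None)[0]
yhat = X @ beta                            # fitted values from the full model
e = y - yhat                               # crude residuals
mse = e @ e / (n - p)                      # MSE on n - p degrees of freedom
H = X @ np.linalg.inv(X.T @ X) @ X.T       # hat matrix
h = np.diag(H)                             # leverages h_ii

# Leverage-based expression for Cook's distance
D_formula = (e**2 / (p * mse)) * h / (1 - h)**2

# Deletion-based definition: refit without observation i, compare predictions
D_delete = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    beta_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    D_delete[i] = np.sum((yhat - X @ beta_i)**2) / (p * mse)

assert np.allclose(D_formula, D_delete)    # the two expressions agree
```

The leverage-based form is the one statistical software typically uses, since it needs only a single fit of the model rather than <math>n</math> refits.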

==Detecting highly influential observations==

There are different opinions regarding what cut-off values to use for spotting highly influential points. A simple operational guideline of <math>D_i>1</math> has been suggested.<ref>Cook, R. Dennis; and Weisberg, Sanford (1982); ''Residuals and Influence in Regression'', New York, NY: Chapman & Hall</ref> Others have indicated that <math>D_i>4/n</math>, where <math>n</math> is the number of observations, might be used.<ref>Bollen, Kenneth A.; and Jackman, Robert W. (1990); ''Regression Diagnostics: An Expository Treatment of Outliers and Influential Cases'', in Fox, John; and Long, J. Scott (eds.); ''Modern Methods of Data Analysis'' (pp. 257–291). Newbury Park, CA: Sage</ref>

A conservative approach relies on the fact that Cook's distance has the form <math>W/p</math>, where <math>W</math> is formally identical to the [[Wald statistic]] that one uses for testing <math>H_0:\beta=\beta_0</math> using some <math>\hat{\beta}_{[-i]}</math>.{{citation needed|date=December 2011}} Recalling that <math>W/p</math> has an <math>F_{p,n-p}</math> distribution (with <math>p</math> and <math>n-p</math> degrees of freedom), we see that Cook's distance is equivalent to the ''F'' statistic for testing this hypothesis, and we can thus use <math>F_{p,n-p,1-\alpha}</math> as a threshold.
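
A short sketch of these cut-offs in Python (the helper function name, the sample distances, and the choice <math>\alpha = 0.5</math> for the ''F''-based rule are illustrative assumptions; it assumes NumPy and SciPy are available):

```python
import numpy as np
from scipy.stats import f


def flag_influential(D, n, p, alpha=0.5):
    """Apply three common Cook's-distance cut-offs to an array D of distances.

    alpha = 0.5 compares D_i to the median of F(p, n - p), one common
    conservative convention; any other quantile level could be substituted.
    """
    return {
        "D > 1": np.flatnonzero(D > 1.0),
        "D > 4/n": np.flatnonzero(D > 4.0 / n),
        "D > F quantile": np.flatnonzero(D > f.ppf(1 - alpha, p, n - p)),
    }


# Illustrative distances for n = 20 observations, p = 2 fitted parameters
D = np.array([0.02, 0.31, 0.05, 1.40] + [0.01] * 16)
flags = flag_influential(D, n=20, p=2)
```

With these numbers only the fourth observation exceeds the <math>D_i>1</math> rule, while the <math>4/n</math> rule also flags the second; the rules differ in strictness, which is why the choice of cut-off remains a judgment call.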

==Interpretation==

<math>D_i</math> can be interpreted as the distance one's estimates move within the confidence ellipsoid that represents a region of plausible values for the parameters.{{Clarify|date=July 2010}} This follows from an alternative but equivalent representation of Cook's distance in terms of changes to the estimates of the regression parameters between the cases where the particular observation is either included in or excluded from the regression analysis.

==See also==

* [[Outlier]]
* [[Leverage (statistics)]]
* [[Partial leverage]]
* [[DFFITS]]
* [[Studentized residual]]

==References==

{{reflist}}

{{refbegin}}
* {{cite journal |last=Cook |first=R. Dennis |title=Detection of Influential Observations in Linear Regression |journal=Technometrics |volume=19 |issue=1 |pages=15–18 |date=February 1977 |publisher=[[American Statistical Association]] |mr=0436478 |doi=10.2307/1268249 |jstor=1268249}}
* {{cite journal |last=Cook |first=R. Dennis |title=Influential Observations in Linear Regression |journal=Journal of the American Statistical Association |volume=74 |issue=365 |pages=169–174 |date=March 1979 |publisher=[[American Statistical Association]] |mr=0529533 |doi=10.2307/2286747 |jstor=2286747}}
* {{cite journal |last=Lorenz |first=Frederick O. |title=Teaching about Influence in Simple Regression |journal=Teaching Sociology |volume=15 |issue=2 |pages=173–177 |date=April 1987 |publisher=American Sociological Association |doi=10.2307/1318032 |jstor=1318032}}
* {{Cite book |last1=Chatterjee |first1=Samprit |last2=Hadi |first2=Ali S. |title=Regression analysis by example |publisher=[[John Wiley and Sons]] |edition=4th |year=2006 |isbn=0-471-74696-7}}
{{refend}}

[[Category:Regression diagnostics]]
[[Category:Statistical outliers]]
[[Category:Statistical distance measures]]