{{More footnotes|date=November 2010}}
 
'''Correction for attenuation''' is a statistical procedure, due to [[Charles Spearman|Spearman]] (1904), to "rid a [[correlation]] coefficient from the weakening effect of measurement error" (Jensen, 1998), a phenomenon also known as [[regression dilution]]. In [[measurement]] and [[statistics]], it is also called '''disattenuation'''. The [[correlation]] between two sets of parameters or measurements is estimated in a manner that accounts for measurement error contained within the [[estimator|estimates]] of those parameters.
 
==Background==
 
Correlations between parameters are diluted or weakened by measurement error. Disattenuation provides for a more accurate estimate of the correlation between the parameters by accounting for this effect.
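
As an illustration of this dilution effect, the following Python sketch (a minimal simulation with arbitrarily chosen error variances, not part of the original derivation) draws two correlated attributes, contaminates each with independent measurement error, and compares the observed correlation of the noisy estimates with the true correlation of the attributes:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
rho = 0.6  # true correlation between the two attributes

# Draw the true attribute values from a bivariate normal distribution.
beta, theta = rng.multivariate_normal(
    mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]], size=n
).T

# Contaminate each attribute with independent measurement error.
beta_hat = beta + rng.normal(scale=0.8, size=n)    # error variance 0.64
theta_hat = theta + rng.normal(scale=0.5, size=n)  # error variance 0.25

observed = np.corrcoef(beta_hat, theta_hat)[0, 1]

# Attenuation factor predicted by the derivation below: sqrt(R_beta * R_theta),
# where R = var(true attribute) / var(estimate).
r_beta = 1.0 / (1.0 + 0.64)
r_theta = 1.0 / (1.0 + 0.25)
print(observed)                          # roughly 0.42
print(rho * np.sqrt(r_beta * r_theta))   # roughly 0.42
</syntaxhighlight>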
 
===Derivation of the formula===
 
Let <math>\beta</math> and <math>\theta</math> be the true values of two attributes of some person or [[statistical unit]]. These values are regarded as [[random variables]] by virtue of the statistical unit being selected randomly from some [[statistical population|population]]. Let <math>\hat{\beta}</math> and <math>\hat{\theta}</math> be estimates of  <math>\beta</math> and <math>\theta</math> derived either directly by observation-with-error or from application of a measurement model, such as the [[Rasch model]]. Also, let
 
::<math>
\hat{\beta} = \beta + \epsilon_{\beta} , \quad\quad \hat{\theta} = \theta + \epsilon_\theta,
</math>
 
where <math>\epsilon_{\beta}</math> and <math>\epsilon_\theta</math> are the measurement errors associated with the estimates <math>\hat{\beta}</math> and <math>\hat{\theta}</math>.
 
The correlation between two sets of estimates is
 
:<math>
\operatorname{corr}(\hat{\beta},\hat{\theta})= \frac{\operatorname{cov}(\hat{\beta},\hat{\theta})}{\sqrt{\operatorname{var}[\hat{\beta}]\operatorname{var}[\hat{\theta}]}}
</math>
 
:::::<math>
=\frac{\operatorname{cov}(\beta+\epsilon_{\beta}, \theta+\epsilon_\theta)}{\sqrt{\operatorname{var}[\beta+\epsilon_{\beta}]\operatorname{var}[\theta+\epsilon_\theta]}},
</math>
 
which, assuming the errors are uncorrelated with each other and with the true attribute values, gives
 
:<math>
\operatorname{corr}(\hat{\beta},\hat{\theta})= \frac{\operatorname{cov}(\beta,\theta)}{\sqrt{(\operatorname{var}[\beta]+\operatorname{var}[\epsilon_\beta])(\operatorname{var}[\theta]+\operatorname{var}[\epsilon_\theta])}}
</math>
 
:::::<math>
=\frac{\operatorname{cov}(\beta,\theta)}{\sqrt{\operatorname{var}[\beta]\operatorname{var}[\theta]}}\cdot\frac{\sqrt{\operatorname{var}[\beta]\operatorname{var}[\theta]}}{\sqrt{(\operatorname{var}[\beta]+\operatorname{var}[\epsilon_\beta])(\operatorname{var}[\theta]+\operatorname{var}[\epsilon_\theta])}}
</math>
 
:::::<math>
=\rho \sqrt{R_\beta R_\theta},
</math>
 
where <math>R_\beta</math> is the ''separation index'' of the set of estimates of <math>\beta</math>, which is analogous to [[Cronbach's alpha]]; that is, in terms of [[Classical test theory|classical test theory]], <math>R_\beta</math> is analogous to a reliability coefficient. Specifically, the separation index is given as follows:
 
:<math>
R_\beta=\frac{\operatorname{var}[\beta]}{\operatorname{var}[\beta]+\operatorname{var}[\epsilon_\beta]}=\frac{\operatorname{var}[\hat{\beta}]-\operatorname{var}[\epsilon_\beta]}{\operatorname{var}[\hat{\beta}]},
</math>
 
where the mean squared standard error of the person estimates provides an estimate of the variance of the errors <math>\epsilon_\beta</math>. The standard errors are normally produced as a by-product of the estimation process (see [[Rasch model estimation]]).
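
A short Python sketch of this computation (an assumed helper function, not part of any standard library) takes a set of person estimates and their standard errors and returns the separation index defined above:

<syntaxhighlight lang="python">
import numpy as np

def separation_index(estimates, standard_errors):
    """Separation index R = (var(estimates) - mean squared SE) / var(estimates)."""
    observed_var = np.var(estimates, ddof=1)          # variance of the noisy estimates
    error_var = np.mean(np.square(standard_errors))   # mean squared standard error
    return (observed_var - error_var) / observed_var
</syntaxhighlight>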
 
The disattenuated estimate of the correlation between two sets of parameters or measures is therefore
 
:<math>
\rho = \frac{\operatorname{corr}(\hat{\beta},\hat{\theta})}{\sqrt{R_\beta R_\theta}}.
</math>
 
That is, the disattenuated correlation is obtained by dividing the correlation between the estimates by the square root of the product of the separation indices of the two sets of estimates. Expressed in terms of classical test theory, the observed correlation is divided by the square root of the product of the reliability coefficients of the two tests.
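
This step can be written as a short Python sketch (hypothetical function and variable names), which divides the observed correlation between two sets of estimates by the square root of the product of their separation indices:

<syntaxhighlight lang="python">
import numpy as np

def separation_index(estimates, standard_errors):
    observed_var = np.var(estimates, ddof=1)
    error_var = np.mean(np.square(standard_errors))
    return (observed_var - error_var) / observed_var

def disattenuated_correlation(beta_hat, theta_hat, se_beta, se_theta):
    """Observed correlation divided by sqrt of the product of the separation indices."""
    r_observed = np.corrcoef(beta_hat, theta_hat)[0, 1]
    r_beta = separation_index(beta_hat, se_beta)
    r_theta = separation_index(theta_hat, se_theta)
    return r_observed / np.sqrt(r_beta * r_theta)
</syntaxhighlight>

Because the separation indices are themselves estimated, the resulting disattenuated correlation can exceed 1 in finite samples.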
 
Given two [[random variable]]s <math>X</math> and <math>Y</math>, with [[correlation]] <math>r_{xy}</math>, and a known [[Reliability (statistics)#Classical test theory|reliability]] for each variable, <math>r_{xx}</math> and <math>r_{yy}</math>, the correlation between <math>X</math> and <math>Y</math> corrected for attenuation is
<math>r_{x'y'} = \frac{r_{xy}}{\sqrt{r_{xx}r_{yy}}}</math>.
 
Measurement error in ''X'' and ''Y'' weakens their observed correlation. The correction for attenuation estimates the correlation that would be obtained if ''X'' and ''Y'' could be measured with perfect reliability.
 
If <math>X</math> and <math>Y</math> are taken to be imperfect measurements of underlying variables <math>X'</math> and <math>Y'</math> with independent errors, then <math>r_{x'y'}</math> measures the true correlation between <math>X'</math> and <math>Y'</math>.
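
As a numerical illustration (with arbitrarily chosen values), if the observed correlation is <math>r_{xy} = 0.30</math> and the reliabilities are <math>r_{xx} = 0.80</math> and <math>r_{yy} = 0.70</math>, the corrected correlation is

:<math>
r_{x'y'} = \frac{0.30}{\sqrt{0.80 \times 0.70}} = \frac{0.30}{\sqrt{0.56}} \approx 0.40.
</math>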
 
==See also==
* [[Regression dilution]]
* [[Errors-in-variables model]]
 
== References ==
*Jensen, A.R. (1998). ''The g Factor: The Science of Mental Ability''. Praeger, Connecticut, USA. ISBN 0-275-96103-6
*Spearman, C. (1904). "The Proof and Measurement of Association between Two Things". ''The American Journal of Psychology'', 15 (1), 72&ndash;101. {{JSTOR|1412159}}
 
==External links==
*[http://www.rasch.org/rmt/rmt101g.htm Disattenuating correlations]
*[http://pareonline.net/getvn.asp?v=8&n=11 Disattenuation of correlation and regression coefficients: Jason W. Osborne]
 
{{DEFAULTSORT:Correction For Attenuation}}
[[Category:Measurement]]
[[Category:Covariance and correlation]]
[[Category:Psychometrics]]
