{{distinguish2|[[Gauss–Markov process]]}}
{{Redirect|BLUE|queue management algorithm|Blue (queue management algorithm)}}
{{Regression bar}}
 
In [[statistics]], the '''Gauss–Markov theorem''', named after [[Carl Friedrich Gauss]] and [[Andrey Markov]], states that in a [[linear regression model]] in which the errors have expectation zero, are [[uncorrelated]], and have equal [[variance]]s, the '''best linear [[bias of an estimator|unbiased]] [[estimator]]''' ('''BLUE''') of the coefficients is given by the [[ordinary least squares]] (OLS) estimator.  Here "best" means giving the lowest variance of the estimate, as compared to other unbiased, linear estimators. The errors need not be [[normal distribution|normal]], nor [[independent and identically distributed]] (only [[uncorrelated]] and [[homoscedastic]]). The hypothesis that the estimator be unbiased cannot be dropped, since otherwise estimators better than OLS exist; see, for example, the [[James–Stein estimator]] (which also drops linearity) or [[ridge regression]].
 
== Statement ==
Suppose we have in matrix notation,
:<math> \underline{y} = X \underline{\beta} + \underline{\varepsilon},\quad (\underline{y},\underline{\varepsilon} \in \mathbb{R}^n, \beta \in \mathbb{R}^K \text{ and } X\in\mathbb{R}^{n\times K}) </math>
expanding to,
:<math> y_i=\sum_{j=1}^{K}\beta_j X_{ij}+\varepsilon_i \quad \forall i=1,2,\ldots,n</math>
 
where <math>\beta_j</math> are non-random but '''un'''observable parameters, <math> X_{ij} </math> are non-random and observable (called the "explanatory variables"), <math>\varepsilon_i</math> are random, and so <math>y_i</math> are random. The random variables <math>\varepsilon_i</math> are called the "residuals" or "noise" (these will be contrasted with "errors" later in the article; see [[errors and residuals in statistics]]). Note that to include a constant in the model above, one can introduce it as a coefficient <math>\beta_{K+1}</math> with a newly introduced last column of <math>X</math> being unity, i.e., <math>X_{i(K+1)} = 1</math> for all <math> i </math>.
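The constant-term construction above can be sketched numerically. This is a minimal illustration using NumPy; the sizes (<math>n=5</math>, <math>K=2</math>) and the random data are illustrative, not taken from the article.

```python
import numpy as np

# Illustrative design matrix: n = 5 observations, K = 2 explanatory variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))

# Append a column of ones so that a new last coefficient beta_{K+1} plays the
# role of the constant: X_{i(K+1)} = 1 for all i.
X_with_const = np.column_stack([X, np.ones(5)])

print(X_with_const.shape)  # (5, 3)
```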
 
The '''Gauss–Markov''' assumptions are
 
*<math>E(\varepsilon_i)=0, </math>
*<math>V(\varepsilon_i)= \sigma^2 < \infty,</math>
(i.e., all residuals have the same variance; that is "[[homoscedasticity]]"), and
*<math>{\rm cov}(\varepsilon_i,\varepsilon_j) = 0, \quad \forall i \neq j, </math>
 
that is, the noise terms are uncorrelated.  A '''linear estimator''' of <math> \beta_j  </math> is a linear combination
 
:<math>\widehat\beta_j = c_{1j}y_1+\cdots+c_{nj}y_n</math>
 
in which the coefficients <math> c_{ij} </math>  are not allowed to depend on the underlying coefficients <math> \beta_j </math>, since those are not observable, but are allowed to depend on the values <math> X_{ij} </math>, since these data are observable.  (The dependence of the coefficients on each <math> X_{ij} </math> is typically nonlinear; the estimator is linear in each <math> y_i </math>  and hence in each random <math> \varepsilon </math>, which is why this is [[linear regression|"linear" regression]].)  The estimator is said to be '''unbiased''' [[if and only if]]
 
:<math>E(\widehat\beta_j)=\beta_j\,</math>
 
regardless of the values of <math> X_{ij} </math>. Now, let <math>\sum_{j=1}^K\lambda_j\beta_j</math> be some linear combination of the coefficients. Then the '''[[mean squared error]]''' of the corresponding estimator is
 
:<math>E \left(\left(\sum_{j=1}^K\lambda_j(\widehat\beta_j-\beta_j)\right)^2\right);</math>
 
i.e., it is the expectation of the square of the weighted sum (across parameters) of the differences between the estimators and the corresponding parameters to be estimated.  (Since we are considering the case in which all the parameter estimates are unbiased, this mean squared error is the same as the variance of the linear combination.) The '''best linear unbiased estimator''' (BLUE) of the vector <math> \beta </math>  of parameters <math> \beta_j </math>  is one with the smallest mean squared error for every vector <math> \lambda </math> of linear combination parameters.  This is equivalent to the condition that
 
:<math>V(\tilde\beta)- V(\widehat\beta)</math>
 
is a positive semi-definite matrix for every other linear unbiased estimator <math>\tilde\beta</math>.
 
The '''ordinary least squares estimator (OLS)''' is the function
 
:<math>\widehat\beta=(X'X)^{-1}X'y</math>
 
of <math> y </math> and <math> X </math> (where <math>X'</math> denotes the [[transpose]] of <math> X </math>)
that minimizes the '''sum of squares of [[errors and residuals in statistics|residuals]]''' (misprediction amounts):
 
:<math>\sum_{i=1}^n\left(y_i-\widehat{y}_i\right)^2=\sum_{i=1}^n\left(y_i-\sum_{j=1}^K\widehat\beta_j X_{ij}\right)^2.</math>
 
The theorem now states that the OLS estimator is a BLUE. The main idea of the proof is that the least-squares estimator is uncorrelated with every linear unbiased estimator of zero, i.e., with every linear combination <math>a_1y_1+\cdots+a_ny_n</math>
whose coefficients do not depend upon the unobservable <math> \beta </math> but whose expected value is always zero.
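The OLS formula above is easy to evaluate directly. The following sketch (NumPy; the data, the true coefficient vector, and the noise scale are all illustrative assumptions) computes <math>\widehat\beta=(X'X)^{-1}X'y</math> via the normal equations and cross-checks it against NumPy's least-squares routine, which minimizes the same residual sum of squares.

```python
import numpy as np

# Illustrative data: n = 50 observations, K = 3 explanatory variables.
rng = np.random.default_rng(1)
n, K = 50, 3
X = rng.normal(size=(n, K))
beta = np.array([2.0, -1.0, 0.5])          # hypothetical true coefficients
y = X @ beta + rng.normal(scale=0.1, size=n)

# OLS estimator: solve (X'X) beta_hat = X'y  (i.e. beta_hat = (X'X)^{-1} X'y).
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Cross-check: lstsq minimizes the sum of squared residuals directly.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta_hat, beta_lstsq))   # True
```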
 
== Proof ==
Let <math> \tilde\beta = Cy </math> be another linear estimator of <math> \beta </math> and let ''C'' be given by <math> (X'X)^{-1}X' + D </math>, where ''D'' is a <math>K \times n</math> nonzero matrix. Since we restrict attention to ''unbiased'' estimators, minimum mean squared error implies minimum variance. The goal is therefore to show that such an estimator has a variance no smaller than that of <math> \hat\beta </math>, the OLS estimator.
 
The expectation of <math> \tilde\beta </math> is:
:<math>
\begin{align}
E(Cy) &= E(((X'X)^{-1}X' + D)(X\beta + \varepsilon)) \\
&= ((X'X)^{-1}X' + D)X\beta + ((X'X)^{-1}X' + D)\underbrace{E(\varepsilon)}_0 \\
&= (X'X)^{-1}X'X\beta + DX\beta \\
&= (I_K + DX)\beta. \\
\end{align}
</math>
 
Therefore, <math> \tilde\beta </math> is unbiased if and only if <math> DX = 0 </math>.
 
The variance of <math> \tilde\beta </math> is
:<math>
\begin{align}
V(\tilde\beta) &= V(Cy) = CV(y)C' = \sigma^2 CC' \\
&= \sigma^2((X'X)^{-1}X' + D)(X(X'X)^{-1} + D') \\
&= \sigma^2((X'X)^{-1}X'X(X'X)^{-1} + (X'X)^{-1}X'D' + DX(X'X)^{-1} + DD') \\
&= \sigma^2(X'X)^{-1} + \sigma^2(X'X)^{-1} (\underbrace{DX}_{0})' + \sigma^2 \underbrace{DX}_{0} (X'X)^{-1} + \sigma^2DD' \\
&= \underbrace{\sigma^2(X'X)^{-1}}_{V(\hat\beta)} + \sigma^2DD'.
\end{align}
</math>
 
Since ''DD''' is a positive semidefinite matrix, <math> V(\tilde\beta) </math> exceeds <math> V(\hat\beta) </math> by a positive semidefinite matrix.
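The identity <math> V(\tilde\beta) = V(\hat\beta) + \sigma^2 DD' </math> can be verified numerically. In this sketch (NumPy; sizes and data are illustrative) a nonzero ''D'' satisfying <math>DX = 0</math> is built by projecting a random matrix onto the orthogonal complement of the column space of ''X''; the resulting variance difference is then checked to equal <math>\sigma^2 DD'</math> and to be positive semidefinite.

```python
import numpy as np

rng = np.random.default_rng(2)
n, K = 20, 4
X = rng.normal(size=(n, K))
sigma2 = 1.0

# OLS variance: sigma^2 (X'X)^{-1}
XtX_inv = np.linalg.inv(X.T @ X)
V_ols = sigma2 * XtX_inv

# Build a nonzero D with DX = 0: project a random K x n matrix onto the
# orthogonal complement of the column space of X.
M = rng.normal(size=(K, n))
H = X @ XtX_inv @ X.T            # "hat" projection matrix
D = M @ (np.eye(n) - H)          # then D @ X = 0 (up to rounding)

# Competing linear unbiased estimator tilde_beta = C y.
C = XtX_inv @ X.T + D
V_alt = sigma2 * (C @ C.T)

# The excess variance equals sigma^2 D D', a positive semidefinite matrix.
diff = V_alt - V_ols
print(np.allclose(diff, sigma2 * D @ D.T))       # True
print(bool(np.all(np.linalg.eigvalsh(diff) > -1e-9)))  # True
```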
 
== Generalized least squares estimator ==
The [[generalized least squares]] (GLS) or [[Alexander Aitken|Aitken]] estimator extends the Gauss–Markov theorem to the case where the error vector has a non-scalar covariance matrix{{spaced ndash}}the Aitken estimator is also a BLUE.<ref>A. C. Aitken, "On Least Squares and Linear Combinations of Observations", ''Proceedings of the Royal Society of Edinburgh'', 1935, vol. 55, pp. 42–48.</ref>
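For a non-scalar error covariance <math>\Omega</math>, the Aitken estimator is <math>(X'\Omega^{-1}X)^{-1}X'\Omega^{-1}y</math>. The sketch below (NumPy; the heteroscedastic diagonal <math>\Omega</math> and all data are illustrative assumptions, not from the article) evaluates it for a simple diagonal case.

```python
import numpy as np

rng = np.random.default_rng(3)
n, K = 30, 2
X = rng.normal(size=(n, K))

# Hypothetical non-scalar covariance: heteroscedastic, diagonal Omega.
omega_diag = rng.uniform(0.5, 2.0, size=n)
beta = np.array([1.0, -2.0])                      # hypothetical true coefficients
y = X @ beta + rng.normal(scale=np.sqrt(omega_diag))

# Aitken / GLS estimator: (X' Omega^{-1} X)^{-1} X' Omega^{-1} y.
Omega_inv = np.diag(1.0 / omega_diag)
beta_gls = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y)
print(beta_gls.shape)  # (2,)
```

When <math>\Omega = \sigma^2 I</math>, this reduces to the OLS estimator, recovering the original Gauss–Markov setting.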
 
==See also==
*[[Independent and identically distributed random variables]]
*[[Linear regression]]
*[[Measurement uncertainty]]
 
=== Other unbiased statistics ===
*[[Best linear unbiased prediction]] (BLUP)
*[[Minimum-variance unbiased estimator]] (MVUE)
 
==Notes==
<references />
 
==References==
{{refbegin}}
* {{cite journal
|authorlink=R. L. Plackett |last=Plackett |first=R.L.
|year=1950
|title=Some Theorems in Least Squares
|journal=[[Biometrika]]
|volume=37 |issue=1–2 |pages=149–157
|doi=10.1093/biomet/37.1-2.149  |mr=36980 | jstor = 2332158
}}
{{refend}}
 
==External links==
*[http://jeff560.tripod.com/g.html Earliest Known Uses of Some of the Words of Mathematics: G] (brief history and explanation of the name)
*[http://www.xycoon.com/ols1.htm Proof of the Gauss Markov theorem for multiple linear regression] (makes use of matrix algebra)
*[http://emlab.berkeley.edu/GMTheorem/index.html A Proof of the Gauss Markov theorem using geometry]
 
{{Least squares and regression analysis|state=expanded}}
 
{{DEFAULTSORT:Gauss-Markov theorem}}
[[Category:Statistical theorems]]
