In [[statistics]] and [[signal processing]], a '''minimum mean square error''' ('''MMSE''') estimator is an estimation method which minimizes the [[mean square error]] (MSE), a common measure of estimator quality, of the fitted values of a [[dependent variable]].
The term MMSE more specifically refers to estimation in a [[Bayesian estimator|Bayesian]] setting with a quadratic cost function. The basic idea behind the Bayesian approach to estimation stems from practical situations where we often have some prior information about the parameter to be estimated. For instance, we may have prior information about the range that the parameter can assume; or we may have an old estimate of the parameter that we want to modify when a new observation is made available. This is in contrast to non-Bayesian approaches like the [[minimum-variance unbiased estimator]] (MVUE), where absolutely nothing is assumed to be known about the parameter in advance and which do not account for such situations. In the Bayesian approach, such prior information is captured by the prior probability density function of the parameters; and based directly on [[Bayes theorem]], it allows us to make better posterior estimates as more observations become available. Thus, unlike the non-Bayesian approach, where parameters of interest are assumed to be deterministic but unknown constants, the Bayesian estimator seeks to estimate a parameter that is itself a random variable. Furthermore, Bayesian estimation can also deal with situations where the sequence of observations is not necessarily independent. Thus Bayesian estimation provides yet another alternative to the MVUE. This is useful when the MVUE does not exist or cannot be found.

==Definition==

Let <math>x</math> be an <math>n \times 1</math> unknown (hidden) random vector variable, and let <math>y</math> be an <math>m \times 1</math> known random vector variable (the measurement or observation), the two not necessarily of the same dimension. An [[estimator]] <math>\hat{x}(y)</math> of <math>x</math> is any function of the measurement <math>y</math>. The estimation error vector is given by <math>e = \hat{x} - x</math> and its [[mean squared error]] (MSE) is given by the [[trace (linear algebra)|trace]] of the error [[covariance matrix]]

:<math>\mathrm{MSE} = \mathrm{tr} \left\{ E\{(\hat{x} - x)(\hat{x} - x)^T \}\right\},</math>

where the [[expected value|expectation]] is taken over both <math>x</math> and <math>y</math>. When <math>x</math> is a scalar variable, the MSE expression simplifies to <math>E \left\{ (\hat{x} - x)^2 \right\}</math>. Note that the MSE can equivalently be defined in other ways, since

:<math>\mathrm{tr} \left\{ E\{ee^T \} \right\} = E \left\{ \mathrm{tr}\{ee^T \} \right\} = E\{e^T e \} = \sum_{i=1}^n E\{e_i^2\}.</math>

The MMSE estimator is then defined as the estimator achieving minimal MSE.

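This equivalence can be checked numerically. The following is a minimal Monte Carlo sketch in Python (assuming the NumPy library; the error covariance used here is an arbitrary illustration, not part of the cited literature) that computes the MSE in the three ways listed above:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n, trials = 3, 200_000

# Draw random error vectors e = xhat - x with an arbitrary covariance L L^T.
L = np.array([[1.0, 0.0, 0.0], [0.5, 1.0, 0.0], [0.2, -0.3, 1.0]])
e = rng.standard_normal((trials, n)) @ L.T

mse_trace = np.trace((e.T @ e) / trials)          # tr{ E[e e^T] }
mse_inner = np.mean(np.einsum('ij,ij->i', e, e))  # E[ e^T e ]
mse_sum   = np.sum(np.mean(e**2, axis=0))         # sum_i E[e_i^2]

print(mse_trace, mse_inner, mse_sum)  # all three agree up to sampling noise
</syntaxhighlight>
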
==Properties==

* Under some weak regularity assumptions,<ref>Lehmann and Casella, Corollary 4.1.2.</ref> the MMSE estimator is uniquely defined, and is given by

::<math>\hat{x}_{\mathrm{MMSE}}(y) = E \left\{x | y \right\}.</math>

:In other words, the MMSE estimator is the conditional expectation of <math>x</math> given the known observed value of the measurements.

* The MMSE estimator is unbiased:

::<math>E\{\hat{x}_{\mathrm{MMSE}}(y)\} = E \left\{ E\{x|y\} \right\} = E\{x\}.</math>

* The [[orthogonality principle]]: When <math>x</math> is a scalar, an estimator constrained to be of a certain form <math>\hat{x}=g(y)</math> is an optimal estimator, i.e. <math>\hat{x}_{\mathrm{MMSE}}=g^*(y),</math> if and only if

::<math>E \{ (\hat{x}_{\mathrm{MMSE}}-x) g(y) \} = 0</math>

:for all <math>g(y)</math> in the closed, linear subspace <math>\mathcal{V} = \{g(y)| g:\mathbb{R}^m \rightarrow \mathbb{R}, E\{g(y)^2\} < + \infty \}</math> of the measurements. For random vectors, since the MSE for estimation of a random vector is the sum of the MSEs of the coordinates, finding the MMSE estimator of a random vector decomposes into finding the MMSE estimators of the coordinates of <math>x</math> separately:

::<math>E \{ (g_i^*(y)-x_i) g_j(y) \} = 0,</math>

:for all ''i'' and ''j''. More succinctly put,

::<math>E \{ (\hat{x}_{\mathrm{MMSE}}-x)\hat{x}^T \} = 0.</math>

* If <math>x</math> and <math>y</math> are [[jointly Gaussian]], then the MMSE estimator is linear, i.e., it has the form <math>Wy+b</math> for some matrix <math>W</math> and constant <math>b</math>. As a consequence, to find the MMSE estimator, it is sufficient to find the linear MMSE estimator (see the numerical sketch below).

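The last property can be illustrated with a short Monte Carlo sketch in Python (assuming NumPy; the means and variances below are arbitrary illustrative choices). For a jointly Gaussian pair, the linear conditional-mean estimator attains a smaller empirical MSE than either the raw measurement or the prior mean alone:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
N = 500_000

# Jointly Gaussian scalar pair (x, y): x is the hidden variable, y = x + noise.
x = rng.normal(loc=2.0, scale=1.5, size=N)   # prior mean 2, variance 2.25
y = x + rng.normal(scale=1.0, size=N)        # measurement noise variance 1

# For jointly Gaussian (x, y) the MMSE estimator is linear:
# E[x | y] = x_bar + (sigma_xy / sigma_y^2) * (y - y_bar).
sigma_xy = np.cov(x, y)[0, 1]
sigma_y2 = np.var(y)
x_mmse = x.mean() + (sigma_xy / sigma_y2) * (y - y.mean())

print(np.mean((x_mmse - x) ** 2))                # ~0.69, the minimum achievable
print(np.mean((y - x) ** 2))                     # raw measurement: ~1.0, worse
print(np.mean((np.full(N, x.mean()) - x) ** 2))  # prior mean alone: ~2.25, worst
</syntaxhighlight>
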
==Linear MMSE estimator==

In many cases, it is not possible to determine a closed form expression for the conditional expectation <math>E\{x|y\}</math> required to obtain the MMSE estimator. Direct numerical evaluation of the conditional expectation is computationally expensive, since it often requires multidimensional integration, usually done using [[Monte Carlo methods]]. In such cases, one possibility is to abandon the full optimality requirement and seek a technique minimizing the MSE within a particular class of estimators, such as the class of linear estimators. Thus, we postulate that the conditional expectation of <math>x</math> given <math>y</math> is a simple linear function of <math>y</math>, <math>E\{x|y\} = W y + b</math>, where the measurement <math>y</math> is a random vector, <math>W</math> is a matrix and <math>b</math> is a vector. The linear MMSE estimator is the estimator achieving minimum MSE among all estimators of this form. Such a linear estimator depends only on the first two moments of the probability density function. So although it may be convenient to assume that <math>x</math> and <math>y</math> are jointly Gaussian, it is not necessary to make this assumption, so long as the assumed distribution has well defined first and second moments.

The expressions for the optimal <math>b</math> and <math>W</math> are given by

:<math>b = \bar{x} - W \bar{y},</math>

:<math> W = C_{XY}C^{-1}_{Y}.</math>

Thus the expressions for the linear MMSE estimator, its mean, and its auto-covariance are given by

:<math>\hat{x} = W(y-\bar{y}) + \bar{x},</math>

:<math>E\{\hat{x}\} = \bar{x},</math>

:<math>C_{\hat{X}} = C_{XY} C^{-1}_Y C_{YX},</math>

where <math>\bar{x} = E\{x\}</math> and <math>\bar{y} = E\{y\}</math>, <math>C_{XY}</math> is the cross-covariance matrix between <math>x</math> and <math>y</math>, <math>C_{Y}</math> is the auto-covariance matrix of <math>y</math>, and <math>C_{YX}</math> is the cross-covariance matrix between <math>y</math> and <math>x</math>. Lastly, the error covariance and minimum mean square error achievable by such an estimator are

:<math>C_e = C_X - C_{\hat{X}} = C_X - C_{XY} C^{-1}_Y C_{YX},</math>

:<math>\mathrm{LMMSE} = \mathrm{tr} \{C_e\}.</math>

For the special case when both <math>x</math> and <math>y</math> are scalars, the above relations simplify to

:<math>\hat{x} = \frac{\sigma_{XY}}{\sigma_Y^2}(y-\bar{y}) + \bar{x},</math>

:<math>\sigma^2_e = \sigma_X^2 - \frac{\sigma_{XY}^2}{\sigma_Y^2}.</math>

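A minimal simulation sketch of these scalar formulas, assuming NumPy and using arbitrary non-Gaussian distributions (only the first two moments enter the linear MMSE expressions):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
N = 400_000

# Non-Gaussian example: only first and second moments matter for the LMMSE.
x = rng.exponential(scale=2.0, size=N)        # hidden scalar, mean 2, variance 4
y = 3.0 * x + rng.uniform(-4.0, 4.0, size=N)  # measurement with uniform noise

sigma_xy = np.cov(x, y)[0, 1]
sigma_y2 = np.var(y)

# Scalar linear MMSE estimator and its predicted error variance.
x_hat = x.mean() + (sigma_xy / sigma_y2) * (y - y.mean())
empirical_mse = np.mean((x_hat - x) ** 2)
predicted_mse = np.var(x) - sigma_xy**2 / sigma_y2  # sigma_X^2 - sigma_XY^2/sigma_Y^2

print(empirical_mse, predicted_mse)  # agree up to sampling error
</syntaxhighlight>
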
{{Collapse top|title=Derivation using orthogonality principle}}

Let the optimal linear MMSE estimator be given as <math>\hat{x} = Wy+b</math>, where we are required to find the expressions for <math>W</math> and <math>b</math>. It is required that the MMSE estimator be unbiased. This means

:<math>E\{\hat{x}\} = E\{x\}.</math>

Plugging the expression for <math>\hat{x}</math> into the above, we get

:<math>b = \bar{x} - W \bar{y},</math>

where <math>\bar{x} = E\{x\}</math> and <math>\bar{y} = E\{y\}</math>. Thus we can re-write the estimator as

:<math>\hat{x} = W(y-\bar{y}) + \bar{x}</math>

and the expression for the estimation error becomes

:<math>\hat{x} - x = W(y-\bar{y}) - (x-\bar{x}).</math>

From the orthogonality principle, we must have <math>E \{ (\hat{x}-x) (y-\bar{y})^T\} = 0</math>, where we take <math>g(y) = y - \bar{y}</math>. Here the left-hand-side term is

:<math>
\begin{align}
E \{ (\hat{x}-x)(y - \bar{y})^T\} &= E \{ (W(y-\bar{y}) - (x-\bar{x})) (y - \bar{y})^T \} \\
&= W E \{(y-\bar{y})(y-\bar{y})^T \} - E \{ (x-\bar{x})(y-\bar{y})^T \} \\
&= WC_{Y} - C_{XY}.
\end{align}
</math>

When equated to zero, we obtain the desired expression for <math>W</math> as

:<math> W = C_{XY}C^{-1}_{Y} .</math>

Here <math>C_{XY}</math> is the cross-covariance matrix between <math>x</math> and <math>y</math>, and <math>C_{Y}</math> is the auto-covariance matrix of <math>y</math>. Since <math>C_{XY}=C^T_{YX}</math>, the expression can also be re-written in terms of <math>C_{YX}</math> as

:<math>W^T = C^{-1}_{Y}C_{YX} .</math>

Thus the full expression for the linear MMSE estimator is

:<math>\hat{x} = C_{XY}C^{-1}_{Y}(y-\bar{y}) + \bar{x}.</math>

Since the estimate <math>\hat{x}</math> is itself a random variable with <math>E\{\hat{x}\} = \bar{x}</math>, we can also obtain its auto-covariance as

:<math>
\begin{align}
C_{\hat{X}} &= E\{(\hat x - \bar x)(\hat x - \bar x)^T\} \\
&= W E\{(y-\bar{y})(y-\bar{y})^T\} W^T \\
&= W C_Y W^T .
\end{align}
</math>

Putting in the expressions for <math>W</math> and <math>W^T</math>, we get

:<math>C_{\hat{X}} = C_{XY} C^{-1}_Y C_{YX}.</math>

Lastly, the covariance of the linear MMSE estimation error is given by

:<math>
\begin{align}
C_e &= E\{(\hat x - x)(\hat x - x)^T\} \\
&= E\{(\hat x - x)(W(y-\bar{y}) - (x-\bar{x}))^T\} \\
&= \underbrace{E\{(\hat x - x)(y-\bar{y})^T \}}_0 W^T - E\{(\hat x - x)(x-\bar{x})^T\} \\
&= - E\{(W(y-\bar{y}) - (x-\bar{x}))(x-\bar{x})^T\} \\
&= E\{(x-\bar{x})(x-\bar{x})^T\} - W E\{(y-\bar{y})(x-\bar{x})^T\} \\
&= C_X - WC_{YX} .
\end{align}
</math>

The first term in the third line is zero due to the orthogonality principle. Since <math>W = C_{XY}C^{-1}_Y</math>, we can re-write <math>C_e</math> in terms of covariance matrices as

:<math>C_e = C_X - C_{XY} C^{-1}_Y C_{YX} .</math>

This we can recognize to be the same as <math>C_e = C_X - C_{\hat{X}}.</math> Thus the minimum mean square error achievable by such a linear estimator is

:<math>\mathrm{LMMSE} = \mathrm{tr}\{C_e\}.</math>

{{Collapse bottom}}

A standard method like [[Gauss elimination]] can be used to solve the matrix equation for <math>W</math>. A more numerically stable method is provided by the [[QR decomposition]]. Since the matrix <math>C_Y</math> is symmetric positive definite, <math>W</math> can be solved for twice as fast with the [[Cholesky decomposition]], while for large sparse systems the [[conjugate gradient method]] is more effective. [[Levinson recursion]] is a fast method when <math>C_Y</math> is also a [[Toeplitz matrix]]. This can happen when <math>y</math> is a [[wide sense stationary]] process. In such stationary cases, these estimators are also referred to as [[Wiener filter|Wiener-Kolmogorov filter]]s.

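As a minimal illustration of this computational point (assuming NumPy and SciPy are available; the covariance matrices below are randomly generated placeholders), <math>W</math> can be obtained from the equation <math>C_Y W^T = C_{YX}</math> with a Cholesky factorization, avoiding an explicit matrix inverse:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(3)

# Arbitrary illustrative covariances: C_Y must be symmetric positive definite.
m, n = 5, 2
B = rng.standard_normal((m, m))
C_Y = B @ B.T + m * np.eye(m)          # auto-covariance of y (m x m)
C_YX = rng.standard_normal((m, n))     # cross-covariance C_{YX} (m x n)

# Solve C_Y W^T = C_YX via Cholesky factorization (no explicit inverse).
c, low = cho_factor(C_Y)
W = cho_solve((c, low), C_YX).T        # W is n x m

# Same result as the textbook expression W = C_XY C_Y^{-1}.
W_direct = C_YX.T @ np.linalg.inv(C_Y)
print(np.allclose(W, W_direct))        # True
</syntaxhighlight>
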
==Linear MMSE estimator for linear observation process==
Let us further model the underlying process of observation as a linear process: <math>y=Ax+z</math>, where <math>A</math> is a known matrix and <math>z</math> is a random noise vector with mean <math>E\{z\}=0</math> and cross-covariance <math>C_{XZ} = 0</math>. Here the required mean and covariance matrices will be

:<math>E\{y\} = A\bar{x},</math>

:<math>C_Y = AC_XA^T + C_Z,</math>

:<math>C_{XY} = C_X A^T .</math>

Thus the expression for the linear MMSE estimator matrix <math>W</math> further modifies to

:<math>W = C_X A^T(AC_XA^T + C_Z)^{-1} .</math>

Putting everything into the expression for <math>\hat{x}</math>, we get

:<math>\hat{x} = C_X A^T(AC_XA^T + C_Z)^{-1}(y-A\bar{x}) + \bar{x}.</math>

Lastly, the error covariance is

:<math>C_e = C_X - C_{\hat{X}} = C_X - C_X A^T(AC_XA^T + C_Z)^{-1}AC_X .</math>

The significant difference between the estimation problem treated above and those of [[least squares]] and [[Gauss-Markov theorem|Gauss-Markov]] estimation is that the number of observations ''m'' (i.e. the dimension of <math>y</math>) need not be at least as large as the number of unknowns ''n'' (i.e. the dimension of <math>x</math>). The estimate for the linear observation process exists so long as the ''m''-by-''m'' matrix <math>(AC_XA^T + C_Z)</math> is invertible; this is the case for any ''m'' if, for instance, <math>C_Z</math> is positive definite. Physically, the reason for this property is that since <math>x</math> is now a random variable, it is possible to form a meaningful estimate (namely its mean) even with no measurements. Every new measurement simply provides additional information which may modify our original estimate. Another feature of this estimate is that for ''m'' < ''n'', there need be no measurement error. Thus, we may have <math>C_Z = 0</math>, because as long as <math>AC_XA^T</math> is positive definite, the estimate still exists. Lastly, this technique can handle cases where the noise is correlated, in other words, when the noise is non-white.

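A minimal numerical sketch of this estimator, assuming NumPy and using small arbitrary choices for <math>A</math>, <math>C_X</math> and <math>C_Z</math> (note that <math>m < n</math> is allowed here, unlike ordinary least squares):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
n, m = 3, 2   # fewer observations than unknowns is permitted

# Arbitrary illustrative prior, observation matrix and noise covariance.
x_bar = np.array([1.0, 0.0, -1.0])
C_X = np.diag([2.0, 1.0, 0.5])
A = rng.standard_normal((m, n))
C_Z = 0.1 * np.eye(m)

# Simulate one realization and apply the linear MMSE formulas.
x = rng.multivariate_normal(x_bar, C_X)
y = A @ x + rng.multivariate_normal(np.zeros(m), C_Z)

S = A @ C_X @ A.T + C_Z              # C_Y = A C_X A^T + C_Z
W = C_X @ A.T @ np.linalg.inv(S)     # W = C_X A^T (A C_X A^T + C_Z)^{-1}
x_hat = x_bar + W @ (y - A @ x_bar)  # linear MMSE estimate
C_e = C_X - W @ A @ C_X              # error covariance

print(x_hat, np.trace(C_e))          # estimate and its LMMSE
</syntaxhighlight>
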
===Alternative form===
An alternative form of expression can be obtained by using the matrix identity

:<math>C_X A^T(AC_XA^T + C_Z)^{-1} = (A^TC_Z^{-1}A + C_X^{-1})^{-1} A^T C_Z^{-1},</math>

which can be established by post-multiplying by <math>(AC_XA^T + C_Z)</math> and pre-multiplying by <math>(A^TC_Z^{-1}A + C_X^{-1}),</math> to obtain

:<math>W = (A^TC_Z^{-1}A + C_X^{-1})^{-1} A^TC_Z^{-1},</math>

and

:<math>C_e = (A^TC_Z^{-1}A + C_X^{-1})^{-1}.</math>

Since <math>W</math> can now be written in terms of <math>C_e</math> as <math>W = C_e A^T C_Z^{-1}</math>, we get a simplified expression for <math>\hat{x}</math> as

:<math>\hat{x} = C_e A^T C_Z^{-1}(y-A\bar{x}) + \bar{x}.</math>

In this form the above expression can be easily compared with the [[Least squares#Weighted least squares|weighted least squares]] and [[Gauss-Markov theorem|Gauss-Markov estimate]]s. In particular, when <math>C_X^{-1}=0</math>, corresponding to infinite variance of the a priori information concerning <math>x</math>, the result <math>W = (A^TC_Z^{-1}A)^{-1} A^TC_Z^{-1}</math> is identical to the weighted linear least squares estimate with <math>C_Z^{-1}</math> as the weight matrix. Moreover, if the components of <math>z</math> are uncorrelated and have equal variance such that <math>C_Z = \sigma^2 I,</math> where <math>I</math> is an identity matrix, then <math>W = (A^TA)^{-1}A^T</math>, which is the same expression as the ordinary least squares estimate.

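The equivalence of the two forms can be verified numerically. Below is a short sketch, assuming NumPy and invertible, randomly generated <math>C_X</math> and <math>C_Z</math>:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(5)
n, m = 3, 4

B = rng.standard_normal((n, n))
C_X = B @ B.T + np.eye(n)                     # invertible prior covariance
A = rng.standard_normal((m, n))
C_Z = np.diag(rng.uniform(0.5, 2.0, size=m))  # invertible noise covariance

# Primal form: W = C_X A^T (A C_X A^T + C_Z)^{-1}
W1 = C_X @ A.T @ np.linalg.inv(A @ C_X @ A.T + C_Z)

# Alternative (information) form: W = (A^T C_Z^{-1} A + C_X^{-1})^{-1} A^T C_Z^{-1}
C_Z_inv = np.linalg.inv(C_Z)
C_e = np.linalg.inv(A.T @ C_Z_inv @ A + np.linalg.inv(C_X))
W2 = C_e @ A.T @ C_Z_inv

print(np.allclose(W1, W2))                    # True: both forms agree
</syntaxhighlight>
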
==Sequential linear MMSE estimation==
===For stationary process===
In many real-time applications, observational data are not available in a single batch. Instead the observations are made in a sequence. A naive application of the previous formulas would have us discard an old estimate and recompute a new estimate as fresh data is made available. But then we lose all information provided by the old observations. When the observations are scalar quantities, one possible way of avoiding such re-computation is to first concatenate the entire sequence of observations and then apply the standard estimation formula as done in Example 2. But this can be very tedious because, as the number of observations increases, so does the size of the matrices that need to be inverted and multiplied. Also, this method is difficult to extend to the case of vector observations. Another approach to estimation from sequential observations is to simply update an old estimate as additional data become available, leading to finer estimates. Thus a recursive method is desired where the new measurements can modify the old estimates. Implicit in these discussions is the assumption that the statistical properties of <math>x</math> do not change with time. In other words, <math>x</math> is stationary.

For sequential estimation, if we have an estimate <math>\hat{x}_1</math> based on measurements generating the space <math>Y_1</math>, then after receiving another set of measurements, we should subtract out from these measurements that part that could be anticipated from the result of the first measurements. In other words, the updating must be based on that part of the new data which is orthogonal to the old data.

Suppose an optimal estimate <math>\hat{x}_1</math> has been formed on the basis of past measurements and that its error covariance matrix is <math>C_{e_1}</math>. For linear observation processes the best estimate of <math>y</math> based on the past observations, and hence the old estimate <math>\hat{x}_1</math>, is <math>\hat{y} = A\hat{x}_1</math>, and thus <math>\tilde{y} = y - \hat{y} = y - A\hat{x}_1</math>. The new estimate based on the additional data is now

:<math>\hat{x}_2 = \hat{x}_1 + C_{X\tilde{Y}}C_{\tilde{Y}}^{-1} \tilde{y},</math>

where <math>C_{X\tilde{Y}}</math> is the cross-covariance between <math>x</math> and <math>\tilde{y}</math> and <math>C_{\tilde{Y}}</math> is the auto-covariance of <math>\tilde{y}.</math> This works out to be

:<math>\hat{x}_2 = \hat{x}_1 + C_{e_1} A^T(AC_{e_1}A^T + C_Z)^{-1}(y-A\hat{x}_1),</math>

and the new error covariance is

:<math>C_{e_2} = C_{e_1} - C_{e_1}A^T(AC_{e_1}A^T + C_Z)^{-1}AC_{e_1} .</math>

The repeated use of the above two equations as more observations become available leads to recursive estimation techniques. The expressions can be written more compactly as

:#<math>K_2 = C_{e_1} A^T(AC_{e_1}A^T + C_Z)^{-1},</math>
:#<math>\hat{x}_2 = \hat{x}_1 + K_2(y-A\hat{x}_1),</math>
:#<math>C_{e_2} = (I - K_2A)C_{e_1}.</math>

The matrix <math>K</math> is often referred to as the gain factor. The repetition of these three steps as more data become available leads to an iterative estimation algorithm.

For example, an easy-to-use recursive expression can be derived when at each ''m''-th time instant the underlying linear observation process yields a scalar such that <math>y_m = a_m^T x + z_m</math>, where <math>a_m^T</math> is a known 1-by-''n'' row vector whose values can change with time, <math>x</math> is an ''n''-by-1 random column vector to be estimated, and <math>z_m</math> is a scalar noise term with variance <math>\sigma_m^2</math>. After the (''m''+1)-th observation, the direct use of the above recursive equations gives the expression for the estimate <math>\hat{x}_{m+1}</math> as

:<math>\hat{x}_{m+1} = \hat{x}_m + k_{m+1}(y_{m+1} - a^T_{m+1} \hat{x}_m),</math>

where <math>y_{m+1}</math> is the new scalar observation and the gain factor <math>k_{m+1}</math> is an ''n''-by-1 column vector given by

:<math>k_{m+1} = \frac{(C_e)_m a_{m+1}}{\sigma^2_{m+1} + a^T_{m+1}(C_e)_m a_{m+1}}.</math>

The ''n''-by-''n'' error covariance matrix <math>(C_e)_{m+1}</math> is given by

:<math>(C_e)_{m+1} = (I - k_{m+1}a^T_{m+1})(C_e)_m .</math>

Here no matrix inversion is required. Also, the gain factor <math>k_{m+1}</math> depends on our confidence in the new data sample, as measured by the noise variance, versus that in the previous data. The initial values of <math>\hat{x}</math> and <math>C_e</math> are taken to be the mean and covariance of the prior probability density function of <math>x</math>.

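A minimal sketch of this scalar-observation recursion, assuming NumPy; the true parameter, the regressors and the noise variance are arbitrary illustrative choices:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(6)
n, T = 3, 200
sigma2 = 0.25                            # scalar observation noise variance

x_true = rng.standard_normal(n)          # hidden vector to be estimated
x_hat = np.zeros(n)                      # prior mean of x
C_e = np.eye(n)                          # prior covariance of x

for _ in range(T):
    a = rng.standard_normal(n)           # known regressor a_m (changes each step)
    y = a @ x_true + rng.normal(scale=np.sqrt(sigma2))  # scalar observation

    # Gain, estimate update, covariance update: no matrix inversion needed.
    k = C_e @ a / (sigma2 + a @ C_e @ a)
    x_hat = x_hat + k * (y - a @ x_hat)
    C_e = (np.eye(n) - np.outer(k, a)) @ C_e

print(x_hat, x_true)   # the estimate approaches the true vector
print(np.trace(C_e))   # the remaining LMMSE shrinks as data accumulate
</syntaxhighlight>
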
==Examples==
===Example 1===
We shall take a [[linear prediction]] problem as an example. Let a linear combination of observed scalar random variables <math>x_{1}, x_{2}</math> and <math>x_{3}</math> be used to estimate another future scalar random variable <math>x_{4}</math> such that <math>\hat x_{4}=\sum_{i=1}^{3}w_{i}x_{i}</math>. If the random variables <math>x=[x_{1},x_{2},x_{3},x_{4}]^{T}</math> are real Gaussian random variables with zero mean and covariance matrix given by

:<math>
\operatorname{cov}(x)=E[xx^{T}]=\left[\begin{array}{cccc}
1 & 2 & 3 & 4\\
2 & 5 & 8 & 9\\
3 & 8 & 6 & 10\\
4 & 9 & 10 & 15\end{array}\right],</math>

then our task is to find the coefficients <math>w_{i}</math> that yield an optimal linear estimate <math>\hat x_{4}</math>.

In terms of the terminology developed in the previous sections, for this problem we have the observation vector <math>y = [x_1, x_2, x_3]^T</math>, the estimator matrix <math>W = [w_1, w_2, w_3]</math> as a row vector, and the estimated variable <math>x = x_4</math> as a scalar quantity. The autocorrelation matrix <math>C_Y</math> is defined as

:<math>C_Y=\left[\begin{array}{ccc}
E[x_{1}x_{1}] & E[x_{2}x_{1}] & E[x_{3}x_{1}]\\
E[x_{1}x_{2}] & E[x_{2}x_{2}] & E[x_{3}x_{2}]\\
E[x_{1}x_{3}] & E[x_{2}x_{3}] & E[x_{3}x_{3}]\end{array}\right]=\left[\begin{array}{ccc}
1 & 2 & 3\\
2 & 5 & 8\\
3 & 8 & 6\end{array}\right].</math>

The cross-correlation matrix <math>C_{YX}</math> is defined as

:<math>C_{YX}=\left[\begin{array}{c}
E[x_{4}x_{1}]\\
E[x_{4}x_{2}]\\
E[x_{4}x_{3}]\end{array}\right]=\left[\begin{array}{c}
4\\
9\\
10\end{array}\right].</math>

We now solve the equation <math>C_Y W^T=C_{YX}</math> by inverting <math>C_Y</math> and pre-multiplying to get

:<math>C_Y^{-1}C_{YX}=\left[\begin{array}{ccc}
4.85 & -1.71 & -.142\\
-1.71 & .428 & .2857\\
-.142 & .2857 & -.1429\end{array}\right]\left[\begin{array}{c}
4\\
9\\
10\end{array}\right]=\left[\begin{array}{c}
2.57\\
-.142\\
.5714\end{array}\right]=W^T.</math>

So we have <math>w_{1}=2.57,</math> <math>w_{2}=-.142,</math> and <math>w_{3}=.5714</math> as the optimal coefficients for <math>\hat x_{4}</math>. Computing the minimum mean square error then gives <math>\left\Vert e\right\Vert _{\min}^{2}=E[x_{4}x_{4}]-WC_{YX}=15-WC_{YX}=.2857</math>.<ref>Moon and Stirling.</ref> Note that it is not necessary to obtain an explicit matrix inverse of <math>C_Y</math> to compute the value of <math>W</math>; the matrix equation can instead be solved by well known methods such as the [[Gauss elimination method]]. A shorter, non-numerical example can be found in the [[orthogonality principle]] article.

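The numbers in this example can be reproduced with a few lines of Python (assuming NumPy), solving the matrix equation directly rather than forming <math>C_Y^{-1}</math>:

<syntaxhighlight lang="python">
import numpy as np

C_Y = np.array([[1.0, 2.0, 3.0],
                [2.0, 5.0, 8.0],
                [3.0, 8.0, 6.0]])
C_YX = np.array([4.0, 9.0, 10.0])

# Solve C_Y W^T = C_YX directly instead of inverting C_Y.
w = np.linalg.solve(C_Y, C_YX)
print(w)                 # [ 2.571..., -0.142...,  0.571...]

mmse = 15.0 - w @ C_YX   # E[x_4 x_4] - W C_YX
print(mmse)              # 0.2857...
</syntaxhighlight>
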
===Example 2===
Consider a vector <math>y</math> formed by taking <math>N</math> observations of a random scalar parameter <math>x</math> disturbed by white Gaussian noise. We can describe the process by a linear equation <math>y = 1x+ z</math>, where <math>1 = [1,1,\ldots,1]^T</math>. Depending on context it will be clear if <math>1</math> represents a [[Scalar (mathematics)|scalar]] or a vector. Let the prior distribution of <math>x</math> be [[Uniform distribution (continuous)|uniform]] over an interval <math>[-x_0,x_0]</math>, so that <math>x</math> has variance <math>\sigma_X^2 = x_0^2/3.</math> Let the noise vector <math>z</math> be normally distributed as <math>N(0,\sigma_Z^2I)</math>, where <math>I</math> is an identity matrix. Also <math>x</math> and <math>z</math> are independent and <math>C_{XZ} = 0</math>. It is easy to see that

:<math>
\begin{align}
& E\{y\} = 0, \\
& C_Y = E\{yy^T\} = \sigma_X^2 11^T + \sigma_Z^2I, \\
& C_{XY} = E\{xy^T\} = \sigma_X^2 1^T.
\end{align}
</math>

Thus, the linear MMSE estimator is given by

:<math>
\begin{align}
\hat{x} &= C_{XY}C_Y^{-1} y \\
&= \sigma_X^2 1^T(\sigma_X^2 11^T + \sigma_Z^2I)^{-1} y.
\end{align}
</math>

We can simplify the expression by using the alternative form for <math>W</math> as

:<math>
\begin{align}
\hat{x} &= (1^T \frac{1}{\sigma_Z^2}I 1 + \frac{1}{\sigma_X^2})^{-1} 1^T \frac{1}{\sigma_Z^2} I y \\
&= \frac{1}{\sigma_Z^2}( \frac{N}{\sigma_Z^2} + \frac{1}{\sigma_X^2})^{-1} 1^T y \\
&= \frac{\sigma_X^2}{\sigma_X^2 + \sigma_Z^2/N} \bar{y},
\end{align}
</math>

where for <math>y = [y_1,y_2,\ldots,y_N]^T </math> we have <math>\bar{y} = \frac{1^Ty}{N} = \frac{\sum_{i=1}^N y_i}{N}.</math>

Similarly, the variance of the estimator is

:<math>\sigma_{\hat{X}}^2 = C_{XY}C_Y^{-1}C_{YX} = \Big(\frac{\sigma_X^2}{\sigma_X^2 + \sigma_Z^2/N}\Big) \sigma_X^2.</math>

Thus the MMSE of this linear estimator is

:<math>\mathrm{LMMSE} = \sigma_X^2 - \sigma_{\hat{X}}^2 = \Big(\frac{\sigma_Z^2}{\sigma_X^2 + \sigma_Z^2/N}\Big) \frac{\sigma_X^2}{N}.</math>

For very large <math>N</math>, we see that the MMSE estimator of a scalar unknown random variable with uniform prior distribution can be approximated by the arithmetic average of all the observed data,

:<math>\hat{x} = \frac{1}{N}\sum_{i=1}^N y_i,</math>

while the variance of the estimator approaches the prior variance, <math>\sigma_{\hat{X}}^2 \rightarrow \sigma_{X}^2,</math> and the LMMSE of the estimate tends to zero.

However, the estimator is suboptimal since it is constrained to be linear.

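A Monte Carlo sketch of this example, assuming NumPy; the values of <math>x_0</math>, <math>\sigma_Z</math> and <math>N</math> below are arbitrary illustrative choices:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(7)
x0, sigma_Z, N, trials = 1.0, 0.5, 10, 200_000
sigma_X2 = x0**2 / 3.0                        # variance of the uniform prior

x = rng.uniform(-x0, x0, size=trials)                          # hidden parameter
y = x[:, None] + rng.normal(scale=sigma_Z, size=(trials, N))   # N noisy observations each

shrink = sigma_X2 / (sigma_X2 + sigma_Z**2 / N)
x_hat = shrink * y.mean(axis=1)               # linear MMSE estimate
x_avg = y.mean(axis=1)                        # plain sample average

print(np.mean((x_hat - x) ** 2))              # empirical MSE of the LMMSE estimator
print((sigma_Z**2 / (sigma_X2 + sigma_Z**2 / N)) * sigma_X2 / N)  # predicted LMMSE
print(np.mean((x_avg - x) ** 2))              # sample mean alone: slightly worse
</syntaxhighlight>
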
=== Example 3 ===
Consider a variation of the above example: Two candidates are standing for an election. Let the fraction of votes that a candidate will receive on election day be <math>x \in [0,1].</math> Thus the fraction of votes the other candidate will receive will be <math>1-x.</math> We shall take <math>x</math> as a random variable with a uniform prior distribution over <math>[0,1]</math> so that its mean is <math>\bar{x} = 1/2 </math> and its variance is <math>\sigma_X^2 = 1/12.</math> A few weeks before the election, two independent public opinion polls were conducted by two different pollsters. The first poll revealed that the candidate is likely to get <math>y_1</math> fraction of votes. Since some error is always present due to finite sampling and the particular polling methodology adopted, the first pollster declares their estimate to have an error <math>z_1</math> with zero mean and variance <math>\sigma_{Z_1}^2.</math> Similarly, the second pollster declares their estimate to be <math>y_2</math> with an error <math>z_2</math> with zero mean and variance <math> \sigma_{Z_2}^2. </math> Note that except for the mean and variance of the error, the error distribution is unspecified. How should the two polls be combined to obtain the voting prediction for the given candidate?

As with the previous example, we have

:<math>
\begin{align}
y_1 &= x + z_1 \\
y_2 &= x + z_2.
\end{align}
</math>

Here both <math>E\{y_1\} = E\{y_2\} = \bar{x} = 1/2</math>. Thus we can obtain the LMMSE estimate as the linear combination of <math>y_1</math> and <math>y_2</math> as

:<math> \hat{x} = w_1 (y_1 - \bar{x}) + w_2 (y_2 - \bar{x}) + \bar{x}, </math>

where the weights are given by

:<math>
\begin{align}
w_1 &= \frac{1/\sigma_{Z_1}^2}{1/\sigma_{Z_1}^2 + 1/\sigma_{Z_2}^2 + 1/\sigma_X^2}, \\
w_2 &= \frac{1/\sigma_{Z_2}^2}{1/\sigma_{Z_1}^2 + 1/\sigma_{Z_2}^2 + 1/\sigma_X^2}.
\end{align}
</math>

Here, since the denominator term is constant, the poll with lower error is given higher weight in order to predict the election outcome. Lastly, the variance of the prediction is given by

:<math>
\sigma_{\hat{X}}^2 = \frac{1/\sigma_{Z_1}^2 + 1/\sigma_{Z_2}^2}{1/\sigma_{Z_1}^2 + 1/\sigma_{Z_2}^2 + 1/\sigma_X^2} \sigma_X^2 ,
</math>

which makes <math>\sigma_{\hat{X}}^2</math> smaller than <math>\sigma_X^2.</math>

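A short numerical instance of this calculation in Python; the poll figures and error variances used below are hypothetical values chosen only for illustration:

<syntaxhighlight lang="python">
# Hypothetical poll figures (illustrative values only, not from the article).
y1, s1 = 0.57, 0.03                 # first poll: estimate and error standard deviation
y2, s2 = 0.62, 0.05                 # second poll
x_bar, sigma_X2 = 0.5, 1.0 / 12.0   # uniform prior on [0, 1]

p1, p2, p0 = 1.0 / s1**2, 1.0 / s2**2, 1.0 / sigma_X2   # precisions
den = p1 + p2 + p0

w1, w2 = p1 / den, p2 / den
x_hat = w1 * (y1 - x_bar) + w2 * (y2 - x_bar) + x_bar
var_hat = (p1 + p2) / den * sigma_X2

# The more precise poll (y1) receives the larger weight w1.
print(round(x_hat, 4), round(var_hat, 6))
</syntaxhighlight>
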
==See also==
*[[Bayesian estimator]]
*[[Mean squared error]]
*[[Least squares]]
*[[Minimum-variance unbiased estimator]] (MVUE)
*[[Orthogonality principle]]
*[[Wiener filter]]
*[[Kalman filter]]
*[[Linear prediction]]
*[[Zero forcing equalizer]]

==Notes==
<references/>

==Further reading==
* {{cite web |last=Johnson |first=D. |title=Minimum Mean Squared Error Estimators |publisher=Connexions |url=http://cnx.rice.edu/content/m11267/latest/ |accessdate=8 January 2013}}
* {{Cite book |last=Jaynes |first=E. T. |title=Probability Theory: The Logic of Science |year=2003 |publisher=Cambridge University Press |isbn=978-0521592710}}
* {{Cite book |last=Bibby |first=J. |last2=Toutenburg |first2=H. |title=Prediction and Improved Estimation in Linear Models |year=1977 |publisher=Wiley |isbn=9780471016564}}
* {{Cite book |last=Lehmann |first=E. L. |last2=Casella |first2=G. |title=Theory of Point Estimation |year=1998 |publisher=Springer |edition=2nd |chapter=Chapter 4 |isbn=0-387-98502-6}}
* {{Cite book |last=Kay |first=S. M. |title=Fundamentals of Statistical Signal Processing: Estimation Theory |year=1993 |publisher=Prentice Hall |pages=344–350 |isbn=0-13-042268-1}}
* {{Cite book |last=Luenberger |first=D. G. |title=Optimization by Vector Space Methods |year=1969 |publisher=Wiley |edition=1st |chapter=Chapter 4, Least-squares estimation |isbn=978-0471181170}}
* {{Cite book |last=Moon |first=T. K. |last2=Stirling |first2=W. C. |title=Mathematical Methods and Algorithms for Signal Processing |year=2000 |publisher=Prentice Hall |edition=1st |isbn=978-0201361865}}
* {{Cite book |last=Van Trees |first=H. L. |title=Detection, Estimation, and Modulation Theory, Part I |year=1968 |publisher=Wiley |location=New York |isbn=0-471-09517-6}}
* {{Cite book |last=Haykin |first=S. O. |title=Adaptive Filter Theory |year=2013 |publisher=Prentice Hall |edition=5th |isbn=978-0132671453}}

{{Use dmy dates|date=September 2010}}

{{DEFAULTSORT:Minimum Mean Square Error}}
[[Category:Statistical deviation and dispersion]]
[[Category:Estimation theory]]
[[Category:Signal processing]]