Wiener filter

In [[signal processing]], the '''Wiener filter''' is a [[filter (signal processing)|filter]] used to produce an estimate of a desired or target random process by linear time-invariant filtering of an observed noisy process, assuming known [[Stationary process|stationary]] signal and noise spectra, and additive noise. The Wiener filter minimizes the mean square error between the estimated random process and the desired process.
 
[[File:Wiener filter - my dog.JPG|thumb|upright=2|Application of the Wiener filter for noise suppression. (Left: original image; middle: the same image with noise added; right: the filtered image.)]]
 
==Description==
The goal of the Wiener filter is to filter out [[noise]] that has corrupted a signal. It is based on a [[statistical]] approach, and a more statistical account of the theory is given in the [[minimum mean-square error|MMSE estimator]] article.
 
Typical filters are designed for a desired [[frequency response]].  However, the design of the Wiener filter takes a different approach.  One is assumed to have knowledge of the spectral properties of the original signal and the noise, and one seeks the [[LTI system theory|linear time-invariant]] filter whose output would come as close to the original signal as possible.  Wiener filters are characterized by the following:<ref name="Brown1996">{{cite book|last1=Brown|first1=Robert Grover|last2=Hwang|first2=Patrick Y.C.|year=1996|title=Introduction to Random Signals and Applied Kalman Filtering|edition=3|location=New York|publisher=John Wiley & Sons|ISBN=0-471-12839-2}}</ref>
# Assumption: signal and (additive) noise are stationary linear [[stochastic process]]es with known spectral characteristics or known [[autocorrelation]] and [[cross-correlation]]
# Requirement: the filter must be physically realizable/[[Causal system|causal]] (this requirement can be dropped, resulting in a non-causal solution)
# Performance criterion: [[minimum mean-square error]] (MMSE)
 
This filter is frequently used in the process of [[deconvolution]]; for this application, see [[Wiener deconvolution]].
 
==Wiener filter problem setup==
The input to the Wiener filter is assumed to be a signal, <math>\scriptstyle s(t)</math>, corrupted by additive noise, <math>\scriptstyle n(t)</math>.  The output, <math>\scriptstyle\hat{s}(t)</math>, is calculated by means of a filter, <math>\scriptstyle g(t)</math>, using the following [[convolution]]:<ref name="Brown1996" />
:<math>\hat{s}(t) = (g * [s + n])(t) = \int\limits_{-\infty}^{\infty}{g(\tau)\left[s(t - \tau) + n(t - \tau)\right]\,d\tau},</math>
where <math>\scriptstyle s(t)</math> is the original signal (not exactly known; to be estimated), <math>\scriptstyle n(t)</math> is the noise, <math>\scriptstyle \hat{s}(t)</math> is the estimated signal (which is intended to approximate <math>\scriptstyle s(t + \alpha)</math>), and <math>\scriptstyle g(t)</math> is the Wiener filter's [[impulse response]].
 
The error is defined as
: <math>e(t) = s(t + \alpha) - \hat{s}(t),</math>
where <math>\scriptstyle\alpha</math> is the delay of the Wiener filter (since it is [[Causality (physics)|causal]]). In other words, the error is the difference between the estimated signal and the true signal shifted by <math>\scriptstyle\alpha</math>.
 
The squared error is
:<math>e^2(t) = s^2(t + \alpha) - 2s(t + \alpha)\hat{s}(t) + \hat{s}^2(t),</math>
where <math>\scriptstyle s(t \,+\, \alpha)</math> is the desired output of the filter and <math>\scriptstyle e(t)</math> is the error. Depending on the value of <math>\scriptstyle\alpha</math>, the problem can be described as follows:
* if <math>\scriptstyle\alpha \,>\, 0</math> then the problem is that of [[prediction]] (error is reduced when <math>\scriptstyle\hat{s}(t)</math> is similar to a later value of ''s''),
* if <math>\scriptstyle\alpha \,=\, 0</math> then the problem is that of [[filter (signal processing)|filter]]ing (error is reduced when <math>\scriptstyle\hat{s}(t)</math> is similar to <math>\scriptstyle s(t)</math>), and
* if <math>\scriptstyle\alpha \,<\, 0</math> then the problem is that of [[smoothing]] (error is reduced when <math>\scriptstyle\hat{s}(t)</math> is similar to an earlier value of ''s'').
 
Taking the [[expected value]] of the squared error results in
:<math>\mathrm{E}(e^2) = R_s(0) - 2\int\limits_{-\infty}^{\infty}{g(\tau)R_{xs}(\tau + \alpha)\,d\tau} + \int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}{g(\tau)g(\theta)R_x(\tau - \theta)\,d\tau\,d\theta},</math>
where <math>\scriptstyle x(t) \,=\, s(t) \,+\, n(t)</math> is the observed signal, <math>\scriptstyle R_s</math> is the [[autocorrelation]] function of <math>\scriptstyle s(t)</math>, <math>\scriptstyle R_x</math> is the [[autocorrelation]] function of <math>\scriptstyle x(t)</math>, and <math>\scriptstyle R_{xs}</math> is the [[cross-correlation]] function of <math>\scriptstyle x(t)</math> and <math>\scriptstyle s(t)</math>. If the signal <math>\scriptstyle s(t)</math> and the noise <math>\scriptstyle n(t)</math> are uncorrelated (i.e., the cross-correlation <math>\scriptstyle R_{sn}</math> is zero), then this means that <math>\scriptstyle R_{xs} \,=\, R_s</math> and <math>\scriptstyle R_x \,=\, R_s \,+\, R_n</math>. For many applications, the assumption of uncorrelated signal and noise is reasonable.
 
The goal is to minimize <math>\scriptstyle \mathrm{E}(e^2)</math>, the expected value of the squared error, by finding the optimal <math>\scriptstyle g(\tau)</math>, the Wiener filter [[impulse response function]]. The minimum may be found by calculating the first-order incremental change in the mean square error resulting from an incremental change in <math>\scriptstyle g</math> for positive time:
 
:<math> \delta \mathrm{E}(e^2) = -2 \int\limits_{-\infty}^{\infty}{\delta g(\tau)\left(R_{xs}(\tau + \alpha)- \int\limits_{0}^{\infty} {g(\theta) R_{x}(\tau - \theta)d\theta}\right)} d\tau.</math>
 
For a minimum, this must vanish identically for all <math>\scriptstyle \delta g(\tau)</math> with <math>\scriptstyle \tau>0</math>, which leads to the [[Wiener–Hopf equation]]:
 
:<math> R_{xs}(\tau + \alpha) = \int\limits_{0}^{\infty} {g(\theta) R_{x}(\tau - \theta)d\theta}.</math>
 
This is the fundamental equation of the Wiener theory. The right-hand side resembles a convolution but is only over the semi-infinite range. The equation can be solved to find the optimal filter <math>\scriptstyle g</math> by a special technique due to Wiener and [[Eberhard Hopf|Hopf]].
 
==Wiener filter solutions==
The Wiener filter problem has solutions for three possible cases:  one where a noncausal filter is acceptable (requiring an infinite amount of both past and future data), the case where a [[causal system|causal]] filter is desired (using an infinite amount of past data), and the [[finite impulse response]] (FIR) case where a finite amount of past data is used. The first case is simple to solve but is not suited for real-time applications.  Wiener's main accomplishment was solving the case where the causality requirement is in effect, and in an appendix of Wiener's book [[Norman Levinson|Levinson]] gave the FIR solution.
 
===Noncausal solution===
 
:<math>G(s) = \frac{S_{x,s}(s)}{S_x(s)}e^{\alpha s}.</math>
 
where <math>\scriptstyle S_{x,s}(s)</math> is the cross [[Spectral density|power spectral density]] of the observation <math>\scriptstyle x(t)</math> and the signal <math>\scriptstyle s(t)</math>, and <math>\scriptstyle S_x(s)</math> is the power spectral density of the observation. Provided that <math>\scriptstyle g(t)</math> is optimal, the [[minimum mean-square error]] equation reduces to
:<math>E(e^2) = R_s(0) - \int_{-\infty}^{\infty}{g(\tau)R_{xs}(\tau + \alpha)\,d\tau},</math>
 
and the solution <math>\scriptstyle g(t)</math> is the inverse two-sided [[Laplace transform]] of <math>\scriptstyle G(s)</math>.
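 
In the common special case where the signal and noise are uncorrelated and <math>\scriptstyle\alpha \,=\, 0</math>, this solution reduces on the imaginary axis to <math>\scriptstyle G \,=\, S_s/(S_s \,+\, S_n)</math>. The following Python sketch applies that frequency-domain form to a synthetic signal; it assumes the spectra are known exactly (in practice they must be estimated), and the signal and noise level are illustrative only.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observation x = s + n (illustrative choices only).
t = np.arange(4096) / 1000.0
s = np.sin(2 * np.pi * 5.0 * t)                 # original signal
sigma = 0.5
x = s + sigma * rng.standard_normal(t.size)     # observed noisy signal

# Assumed-known power spectra per FFT bin; with NumPy's unnormalized
# FFT, white noise of variance sigma^2 has periodogram level sigma^2 * N.
S_s = np.abs(np.fft.fft(s)) ** 2
S_n = np.full(t.size, sigma ** 2 * t.size)

# Noncausal Wiener filter for uncorrelated signal and noise, alpha = 0:
# G = S_s / (S_s + S_n), applied in the frequency domain.
G = S_s / (S_s + S_n)
s_hat = np.fft.ifft(G * np.fft.fft(x)).real     # estimated signal
</syntaxhighlight>

Because <code>G</code> here is real and even in frequency, the corresponding impulse response is two-sided, which is exactly why this solution is noncausal.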
 
===Causal solution===
 
:<math>G(s) = \frac{H(s)}{S_x^{+}(s)},</math>
 
where
*<math>\scriptstyle H(s)</math> consists of the causal part of <math>\scriptstyle \frac{S_{x,s}(s)}{S_x^{-}(s)}e^{\alpha s}</math> (that is, that part of this fraction having a positive time solution under the inverse Laplace transform)
*<math>\scriptstyle S_x^{+}(s)</math> is the causal component of <math>\scriptstyle S_x(s)</math> (i.e., the inverse Laplace transform of <math>\scriptstyle S_x^{+}(s)</math> is non-zero only for <math>\scriptstyle t \,\ge\, 0</math>)
*<math>\scriptstyle S_x^{-}(s)</math> is the anti-causal component of <math>\scriptstyle S_x(s)</math> (i.e., the inverse Laplace transform of <math>\scriptstyle S_x^{-}(s)</math> is non-zero only for <math>\scriptstyle t < 0</math>)
 
This general formula is complicated and deserves a more detailed explanation.  To write down the solution <math>\scriptstyle G(s)</math> in a specific case, one should follow these steps:<ref>{{cite web|last=Welch|first=Lloyd R|title=Wiener–Hopf Theory|url=http://csi.usc.edu/PDF/wienerhopf.pdf}}</ref>
 
# Start with the spectrum <math>\scriptstyle S_x(s)</math> in rational form and factor it into causal and anti-causal components:
#: <math>S_x(s) = S_x^{+}(s) S_x^{-}(s),</math>
#: where <math>\scriptstyle S^{+}</math> contains all the zeros and poles in the left half plane (LHP) and <math>\scriptstyle S^{-}</math> contains all the zeros and poles in the right half plane (RHP). This is called the [[Wiener–Hopf method|Wiener–Hopf factorization]].
# Divide <math>\scriptstyle S_{x,s}(s)e^{\alpha s}</math> by <math>\scriptstyle S_x^{-}(s)</math> and write out the result as a partial fraction expansion.
# Select only those terms in this expansion having poles in the LHP.  Call these terms <math>\scriptstyle H(s)</math>.
# Divide <math>\scriptstyle H(s)</math> by <math>\scriptstyle S_x^{+}(s)</math>.  The result is the desired filter transfer function <math>\scriptstyle G(s)</math>.
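 
As a brief worked illustration of these steps (with spectra chosen purely for simplicity, not taken from the references), suppose the signal and noise are uncorrelated, <math>\scriptstyle\alpha \,=\, 0</math>, the noise is white with <math>\scriptstyle S_n(s) \,=\, 1</math>, and the signal has <math>\scriptstyle S_s(s) \,=\, 1/(1 \,-\, s^2)</math>. Then

:<math>S_x(s) = \frac{1}{1 - s^2} + 1 = \frac{2 - s^2}{1 - s^2} = \frac{s + \sqrt{2}}{s + 1} \cdot \frac{s - \sqrt{2}}{s - 1},</math>

where the first factor is <math>\scriptstyle S_x^{+}(s)</math> (its pole and zero lie in the LHP) and the second is <math>\scriptstyle S_x^{-}(s)</math>. Dividing <math>\scriptstyle S_{x,s}(s) \,=\, S_s(s)</math> by <math>\scriptstyle S_x^{-}(s)</math> and expanding in partial fractions gives

:<math>\frac{S_s(s)}{S_x^{-}(s)} = \frac{1}{(1 + s)(\sqrt{2} - s)} = \frac{1}{1 + \sqrt{2}}\left(\frac{1}{1 + s} + \frac{1}{\sqrt{2} - s}\right).</math>

Only the first term has its pole in the LHP, so <math>\scriptstyle H(s) \,=\, \frac{1}{(1 + \sqrt{2})(s + 1)}</math> and

:<math>G(s) = \frac{H(s)}{S_x^{+}(s)} = \frac{1}{(1 + \sqrt{2})(s + \sqrt{2})},</math>

whose inverse Laplace transform is the causal impulse response <math>\scriptstyle g(t) \,=\, (\sqrt{2} \,-\, 1)e^{-\sqrt{2}\,t}</math> for <math>\scriptstyle t \,\ge\, 0</math>.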
 
==Finite impulse response Wiener filter for discrete series==
[[Image:Wiener block.svg|350px|right|thumb|Block diagram view of the FIR Wiener filter for discrete series. An input signal ''w''[''n''] is convolved with the Wiener filter ''g''[''n''] and the result is compared to a reference signal ''s''[''n''] to obtain the filtering error ''e''[''n''].]]
The causal [[finite impulse response]] (FIR) Wiener filter, instead of using some given data matrix X and output vector Y, finds optimal tap weights by using the statistics of the input and output signals. It populates an input matrix with estimates of the auto-correlation of the input signal (the Toeplitz matrix '''T''' below) and an output vector with estimates of the cross-correlation between the output and input signals (the vector '''v''' below).
 
In order to derive the coefficients of the Wiener filter, consider the signal ''w''[''n''] being fed to a Wiener filter of order ''N'' with coefficients <math>\scriptstyle \{a_i\}</math>, <math>\scriptstyle i \,=\, 0,\, \ldots,\, N</math>. The output of the filter is denoted ''x''[''n''], which is given by the expression
 
:<math>x[n] = \sum_{i=0}^N a_i w[n-i] .</math>
 
The residual error is denoted ''e''[''n''] and is defined as ''e''[''n''] = ''x''[''n'']&nbsp;&minus;&nbsp;''s''[''n''] (see the corresponding block diagram). The Wiener filter is designed so as to minimize the mean square error ([[Minimum mean square error|MMSE]] criterion), which can be stated concisely as follows:
 
:<math>[a_0,\, \ldots,\, a_N] = \arg \min ~E\{e^2[n]\} ,</math>
 
where <math>\scriptstyle E\{\cdot\}</math> denotes the expectation operator. In the general case, the coefficients <math>\scriptstyle a_i</math> may be complex and may be derived for the case where ''w''[''n''] and ''s''[''n''] are complex as well. With a complex signal, the matrix to be solved is a [[Hermitian matrix|Hermitian]] [[Toeplitz matrix]] rather than a [[Symmetric matrix|symmetric]] Toeplitz matrix.  For simplicity, the following considers only the case where all these quantities are real. The mean square error (MSE) may be rewritten as:
 
:<math>
\begin{array}{rcl}
E\{e^2[n]\} &=& E\{(x[n]-s[n])^2\}\\
&=& E\{x^2[n]\} + E\{s^2[n]\} - 2E\{x[n]s[n]\}\\
&=& E\{\big( \sum_{i=0}^N a_i w[n-i] \big)^2\} + E\{s^2[n]\} - 2E\{\sum_{i=0}^N a_i w[n-i]s[n]\} .
\end{array}
</math>
 
To find the vector <math>\scriptstyle [a_0,\, \ldots,\, a_N]</math> which minimizes the expression above, calculate its derivative with respect to each <math>\scriptstyle a_i</math>:
 
:<math>
\begin{array}{rcl}
\frac{\partial}{\partial a_i} E\{e^2[n]\} &=& 2E\{ \big( \sum_{j=0}^N a_j w[n-j] \big) w[n-i] \} - 2E\{s[n]w[n-i]\} \quad i=0,\, \ldots,\, N\\
&=& 2 \sum_{j=0}^N E\{w[n-j]w[n-i]\} a_j - 2E\{ w[n-i]s[n]\} .
\end{array}
</math>
 
Assuming that ''w''[''n''] and ''s''[''n''] are each stationary and jointly stationary, the sequences <math>\scriptstyle R_w[m]</math> and <math>\scriptstyle R_{ws}[m]</math>, known respectively as the autocorrelation of ''w''[''n''] and the cross-correlation between ''w''[''n''] and ''s''[''n''], can be defined as follows:
 
:<math>
\begin{align}
R_w[m] &= E\{w[n]w[n+m]\}, \\
R_{ws}[m] &= E\{w[n]s[n+m]\} .
\end{align}
</math>
 
The derivative of the MSE may therefore be rewritten as (note that <math>\scriptstyle E\{s[n]w[n-i]\} \,=\, R_{ws}[i]</math>)
 
:<math>\frac{\partial}{\partial a_i} E\{e^2[n]\} = 2 \sum_{j=0}^{N} R_w[j-i] a_j - 2 R_{ws}[i], \quad i = 0,\, \ldots,\, N .</math>
 
Setting the derivative equal to zero results in
 
:<math>\sum_{j=0}^N R_w[j-i] a_j = R_{ws}[i], \quad i = 0,\, \ldots,\, N ,</math>
 
which can be rewritten in matrix form
 
:<math>\begin{align}
&\mathbf{T}\mathbf{a} = \mathbf{v}\\
 
\Rightarrow
&\begin{bmatrix}
R_w[0] & R_w[1] & \cdots & R_w[N] \\
R_w[1] & R_w[0] & \cdots & R_w[N-1] \\
\vdots & \vdots & \ddots & \vdots \\
R_w[N] & R_w[N-1] & \cdots & R_w[0]
\end{bmatrix}
 
\begin{bmatrix}
a_0 \\ a_1 \\ \vdots \\ a_N
\end{bmatrix}
 
=
 
\begin{bmatrix}
R_{ws}[0] \\R_{ws}[1] \\ \vdots \\ R_{ws}[N]
\end{bmatrix}
 
\end{align}</math>
 
These equations are known as the [[Wiener–Hopf equations]]. The matrix '''T''' appearing in the equation is a symmetric [[Toeplitz matrix]]. Under suitable conditions on the autocorrelation sequence, these matrices are known to be positive definite and therefore non-singular, yielding a unique solution to the determination of the Wiener filter coefficient vector, <math>\scriptstyle\mathbf{a} \,=\, \mathbf{T}^{-1}\mathbf{v}</math>. Furthermore, there exists an efficient algorithm to solve such Wiener–Hopf equations known as the [[Levinson recursion|Levinson–Durbin]] algorithm, so an explicit inversion of <math>\scriptstyle\mathbf{T}</math> is not required.
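 
A minimal Python sketch of this construction follows; it replaces the exact statistics <math>\scriptstyle R_w</math> and <math>\scriptstyle R_{ws}</math> with biased sample estimates, and the signals, noise level, and filter order are illustrative only.

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import solve_toeplitz

def fir_wiener(w, s, order):
    """Estimate FIR Wiener coefficients a_0, ..., a_N from data,
    using biased sample estimates of R_w and R_ws."""
    m = len(w)
    lags = range(order + 1)
    R_w = np.array([np.dot(w[:m - k], w[k:]) / m for k in lags])   # R_w[k]
    R_ws = np.array([np.dot(w[:m - k], s[k:]) / m for k in lags])  # R_ws[k]
    # Solve T a = v, exploiting the Toeplitz structure of T
    # (a Levinson-style recursion rather than an explicit inversion).
    return solve_toeplitz(R_w, R_ws)

# Hypothetical usage: w is a noisy observation of the reference s.
rng = np.random.default_rng(1)
s = np.sin(2 * np.pi * 0.01 * np.arange(5000))
w = s + 0.3 * rng.standard_normal(s.size)
a = fir_wiener(w, s, order=20)
s_hat = np.convolve(w, a)[:s.size]   # x[n] = sum_i a_i w[n - i]
</syntaxhighlight>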
 
===Relationship to the least mean squares filter===
The realization of the causal Wiener filter closely resembles the solution to the [[least squares]] estimate, except in the signal processing domain. The least squares solution for input matrix <math>\scriptstyle\mathbf{X}</math> and output vector <math>\scriptstyle\mathbf{y}</math> is
 
:<math>\boldsymbol{\hat\beta} = (\mathbf{X} ^\mathbf{T}\mathbf{X})^{-1}\mathbf{X}^{\mathbf{T}}\boldsymbol y .</math>
 
The FIR Wiener filter is related to the [[least mean squares filter]] (LMS), but minimizing the error criterion of the latter does not rely on cross-correlations or auto-correlations; instead, its solution converges iteratively to the Wiener filter solution.
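 
For comparison, here is a minimal LMS sketch (the step size <math>\scriptstyle\mu</math> and filter order are illustrative; convergence requires <math>\scriptstyle\mu</math> to be small enough for the input power). It needs no correlation estimates, only the running error:

<syntaxhighlight lang="python">
import numpy as np

def lms(w, s, order, mu=0.005):
    """Minimal LMS sketch: stochastic-gradient tap updates that
    converge in the mean toward the FIR Wiener solution."""
    a = np.zeros(order + 1)
    for n in range(order, len(w)):
        u = w[n - order:n + 1][::-1]   # [w[n], w[n-1], ..., w[n-order]]
        e = s[n] - np.dot(a, u)        # instantaneous error
        a += 2 * mu * e * u            # gradient step on e^2
    return a
</syntaxhighlight>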
 
==Applications==
The Wiener filter can be used in image processing to remove noise from a picture. For example, applying the Mathematica function <code>WienerFilter[image,2]</code> to the first image on the right produces the filtered image below it.
[[File:Astronaut-noise.png|thumb|Noisy image of astronaut.]]
[[File:Astronaut-denoised.png|thumb|The same image after a Wiener filter has been applied.]]
 
It is commonly used to denoise audio signals, especially speech, as a preprocessor before [[speech recognition]].
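 
SciPy offers a comparable convenience function, <code>scipy.signal.wiener</code>, which implements a local adaptive (local mean and variance) variant rather than the full spectral solution; the image below is synthetic and purely illustrative.

<syntaxhighlight lang="python">
import numpy as np
from scipy.signal import wiener

# Synthetic noisy "image" (illustrative only).
rng = np.random.default_rng(2)
image = rng.normal(loc=0.5, scale=0.1, size=(128, 128))

denoised = wiener(image, mysize=5)   # 5x5 local-statistics Wiener filter
</syntaxhighlight>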
 
== History ==
The filter was proposed by [[Norbert Wiener]] during the 1940s and published in 1949.<ref name="Wiener1949">{{cite book|last=Wiener|first=Norbert|year=1949|title=Extrapolation, Interpolation, and Smoothing of Stationary Time Series|location=New York|publisher=Wiley|ISBN=0-262-73005-7}}</ref>   The discrete-time equivalent of Wiener's work was derived independently by [[Andrey Kolmogorov]] and published in 1941. Hence the theory is often called the '''Wiener–Kolmogorov''' filtering theory.  The Wiener filter was the first statistically designed filter to be proposed and subsequently gave rise to many others including the famous [[Kalman filter]].
 
==See also==
*[[Norbert Wiener]]
*[[Kalman filter]]
*[[Wiener deconvolution]]
*[[Eberhard Hopf]]
*[[Least mean squares filter]]
*[[Similarities between Wiener and LMS]]
*[[Linear prediction]]
*[[Minimum mean square error|MMSE estimator]]
 
==References==
{{reflist}}
* [[Thomas Kailath]], [[Ali H. Sayed]], and [[Babak Hassibi]], ''Linear Estimation'', Prentice-Hall, NJ, 2000, ISBN 978-0-13-022464-4.
* Wiener, N. (1942). ''The Interpolation, Extrapolation and Smoothing of Stationary Time Series''. Report of the Services 19, Research Project DIC-6037, MIT.
* Kolmogorov, A. N. (1941). "Stationary sequences in Hilbert space" (in Russian). ''Bull. Moscow Univ.'', vol. 2, no. 6, pp. 1–40. English translation in Kailath, T. (ed.) (1977). ''Linear Least Squares Estimation''. Dowden, Hutchinson & Ross.
 
==External links==
*Mathematica [http://reference.wolfram.com/mathematica/ref/WienerFilter.html WienerFilter] function
 
{{DEFAULTSORT:Wiener Filter}}
[[Category:Linear filters]]
[[Category:Estimation theory]]
[[Category:Stochastic processes]]
[[Category:Time series analysis]]
[[Category:Image noise reduction techniques]]
