Spectral density estimation


In statistical signal processing, the goal of spectral density estimation is to estimate the spectral density (also known as the power spectral density) of a random signal from a sequence of time samples of the signal. Intuitively speaking, the spectral density characterizes the frequency content of the signal. The purpose of estimating the spectral density is to detect any periodicities in the data, by observing peaks at the frequencies corresponding to these periodicities.

SDE should be distinguished from the field of frequency estimation, which assumes a limited (usually small) number of generating frequencies plus noise and seeks to find their frequencies. SDE makes no assumptions about the number of components and seeks to estimate the whole generating spectrum.


Techniques for spectrum estimation can generally be divided into parametric and non-parametric methods. The parametric approaches assume that the underlying stationary stochastic process has a certain structure which can be described using a small number of parameters (for example, using an auto-regressive or moving average model). In these approaches, the task is to estimate the parameters of the model that describes the stochastic process. By contrast, non-parametric approaches explicitly estimate the covariance or the spectrum of the process without assuming that the process has any particular structure.
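As an illustration of the non-parametric approach, the simplest such estimator is the periodogram: the squared magnitude of the discrete Fourier transform of the samples. The following sketch (not from the original article; the sampling rate, tone frequency, and noise level are illustrative assumptions) estimates the spectrum of a noisy sinusoid and locates its peak:

```python
import numpy as np

# Illustrative signal: a 50 Hz tone in white noise, sampled at 1 kHz (assumed values)
rng = np.random.default_rng(0)
fs = 1000.0
n = np.arange(1024)
x = np.sin(2 * np.pi * 50.0 * n / fs) + 0.1 * rng.standard_normal(n.size)

# Periodogram: squared magnitude of the DFT, scaled to power per Hz
X = np.fft.rfft(x)
psd = (np.abs(X) ** 2) / (fs * x.size)
freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)

# The estimated spectrum peaks at (approximately) the tone frequency
peak = freqs[np.argmax(psd)]
```

No model of the process is assumed here; the estimate is computed directly from the data, which is what distinguishes it from the parametric approaches.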

Following is a partial list of spectral density estimation techniques:

  • Periodogram
  • Bartlett's method
  • Welch's method
  • Multitaper
  • Least-squares spectral analysis
  • Autoregressive (AR) model estimation (e.g., Yule–Walker, Burg)
  • Moving-average (MA) and ARMA model estimation
  • MUSIC and other subspace methods

Finite number of tones

Frequency estimation is the process of estimating the complex frequency components of a signal in the presence of noise, given assumptions about the number of components.[1] This contrasts with the general methods above, which do not make prior assumptions about the components.

The most common methods involve identifying the noise subspace to extract these components. The most popular noise-subspace methods of frequency estimation are Pisarenko's method, MUSIC, the eigenvector method, and the minimum norm method.

For example, consider a signal x(n) consisting of a sum of p complex exponentials in the presence of white noise, w(n). This may be represented as

x(n) = \sum_{i=1}^{p} A_i e^{j n \omega_i} + w(n)

Thus, the power spectrum of x(n) consists of p impulses in addition to the power due to the noise.

The noise subspace methods of frequency estimation are based on eigen decomposition of the autocorrelation matrix into a signal subspace and a noise subspace. After these subspaces are identified, a frequency estimation function is used to find the component frequencies from the noise subspace.

Pisarenko's Method

Pisarenko's method uses only the single eigenvector \mathbf{v}_{min} corresponding to the smallest eigenvalue of the autocorrelation matrix. With the vector of complex exponentials \mathbf{e} = [1, e^{j\omega}, e^{j2\omega}, \ldots, e^{j(M-1)\omega}]^{T}, the frequency estimation function is

\hat{P}_{PHD}(e^{j\omega}) = \frac{1}{|\mathbf{e}^{H}\mathbf{v}_{min}|^{2}}

MUSIC

The MUSIC method averages over all M - p eigenvectors of the noise subspace:

\hat{P}_{MU}(e^{j\omega}) = \frac{1}{\sum_{i=p+1}^{M} |\mathbf{e}^{H}\mathbf{v}_{i}|^{2}}

Eigenvector Method

The eigenvector method additionally weights each noise eigenvector by the inverse of its eigenvalue:

\hat{P}_{EV}(e^{j\omega}) = \frac{1}{\sum_{i=p+1}^{M} \frac{1}{\lambda_{i}} |\mathbf{e}^{H}\mathbf{v}_{i}|^{2}}

Minimum Norm

The minimum norm method uses a single vector \mathbf{a} that lies entirely in the noise subspace, has its first element equal to one, and has minimum norm:

\hat{P}_{MN}(e^{j\omega}) = \frac{1}{|\mathbf{e}^{H}\mathbf{a}|^{2}}

Single tone

If one only wants to estimate the single loudest frequency, one can use a pitch detection algorithm.
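One simple pitch detection approach is the autocorrelation method: the autocorrelation of a periodic signal peaks at lags equal to multiples of the period. The sketch below is an illustrative assumption-laden example (sampling rate, tone, and minimum-lag cutoff are all made up for demonstration), not a production pitch detector:

```python
import numpy as np

# Illustrative signal: a pure 200 Hz tone sampled at 8 kHz (assumed values)
fs = 8000.0
n = np.arange(2048)
x = np.sin(2 * np.pi * 200.0 * n / fs)

# Autocorrelation for non-negative lags; the strongest peak after lag 0
# occurs at the period of the signal
r = np.correlate(x, x, mode="full")[x.size - 1:]
lag_min = int(fs / 1000.0)           # ignore implausibly short periods (> 1 kHz)
lag = lag_min + np.argmax(r[lag_min:])
pitch = fs / lag
```

Real pitch detectors add refinements (windowing, peak interpolation, octave-error checks), but the core idea is this lag-domain peak search.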

If one wants to know all the (possibly complex) frequency components of a received signal (including transmitted signal and noise), one uses a discrete Fourier transform or some other Fourier-related transform.
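As a minimal sketch of this, the discrete Fourier transform exposes every frequency component of the sampled signal at once; the two-tone test signal below and its parameters are illustrative assumptions, not from the article:

```python
import numpy as np

# Illustrative signal: two tones of different amplitudes, sampled at 8 kHz
fs = 8000.0
n = np.arange(512)
x = (np.cos(2 * np.pi * 440.0 * n / fs)
     + 0.5 * np.cos(2 * np.pi * 1000.0 * n / fs))

# DFT of the real signal; each bin holds one frequency component
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(n.size, d=1.0 / fs)
mag = np.abs(X) / n.size

# The two strongest bins correspond to the two tones (to within one bin width)
top = np.sort(freqs[np.argsort(mag)[-2:]])
```

Frequency resolution is limited to fs/N per bin, so tones that do not fall exactly on a bin leak into their neighbors; longer records or windowing sharpen the estimate.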

References

  1. Hayes, Monson H., Statistical Digital Signal Processing and Modeling, John Wiley & Sons, Inc., 1996. ISBN 0-471-59431-8.

Further reading


  • P. Stoica and R. Moses, Spectral Analysis of Signals. Prentice Hall, NJ, 2005 (Chinese edition, 2007).