# Talk:Low-pass filter

## Old talk not previously in any section

Can someone add an example and discussion of the transfer function of a low-pass filter? mickpc

I have added the Low Pass Filter Image, however am not sure how to scale it, anyone feel free to do so.

this page should have some discussion of LPFs' uses in electronic music. - mhjb

What do you want to know? 213.64.153.109 22:04 Dec 10, 2002 (UTC)

I want to know if it's the high-pass or the low-pass which filters tape hiss.
Tape hiss is very high-frequency, hence you need to block out these high frequencies. Therefore you need a low-pass filter. /same guy as before

The following content was at Lowpass filter by 195.145.245.249:

A filter used, amongst others, in sound synthesis, that only lets pass waves below its cutoff frequency. With analog realizations of lowpass filters, the cutoff of higher frequencies is gradual, with frequencies being dampened increasingly the higher they get. Typical values for this slew rate are 12 dB or 24 dB per octave, meaning that a signal one octave above the cutoff frequency will be dampened by 12/24 dB.

Dori | Talk 17:06, Dec 2, 2003 (UTC)

slew rate is wrong and everything else is already included. - Omegatron 14:31, Dec 31, 2004 (UTC)

"The Bode plot for this type of filter resembles that of a first-order filter, except that it falls off quadratically instead of linearly."

That depends on the type of filter. A Butterworth of any order will look like a straight line, too. - Omegatron

Moved from article, awaiting clarification from contributor (see also, Talk:High-pass filter). --Lexor|Talk 13:10, 24 Jun 2004 (UTC)

There is often irritation when you actively want to cut high frequencies with a low-pass.

removed from article:

Capacitors naturally resist changes in voltage.
It is through this natural resistance (not to be confused with Ohmic resistance) that the functionality of the low-pass filter is realized.
• With low frequencies, the voltage across the capacitor changes slowly, providing sufficient time for the capacitor to change voltage through the current-voltage relationship $I = C\frac{dV}{dt}$.
• With high frequencies, the voltage to the capacitor changes too fast for sufficient charge to build up in the capacitor to change the voltage.
This understanding is rooted in the concept of reactance, where the capacitor will naturally block DC but pass AC.
Taking a more fluidic view of this passive circuit: if the capacitor blocks DC, then the signal must "flow out" the path marked $V_{out}$ (analogous to removing the capacitor).
If the capacitor passes AC, then it "flows out" through the capacitor, effectively short-circuiting $V_{out}$ to ground (analogous to replacing the capacitor with just a wire).

## Filter circuit question

FTA: "At higher frequencies the reactance drops, and the capacitor effectively functions as a short circuit." -- shouldn't this be an "open" circuit? A short would tie Vout to ground, thus eliminating all frequencies. (129.42.208.182)

No, it's a short circuit; at high frequencies, the output is tied to ground. So it doesn't pass high frequencies. That's why it's a low-pass filter, right? Pfalstad 22:36, 24 October 2005 (UTC)
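The "short at high frequency, open at low frequency" behaviour can be checked numerically from the voltage-divider magnitude of a series-R, shunt-C stage. A minimal sketch (component values are arbitrary illustrations, not from the article):

```python
import math

def rc_lowpass_gain(f_hz, r_ohms, c_farads):
    """|Vout/Vin| for a series-R, shunt-C divider: 1/sqrt(1 + (2*pi*f*R*C)^2)."""
    w = 2.0 * math.pi * f_hz
    return 1.0 / math.sqrt(1.0 + (w * r_ohms * c_farads) ** 2)

R, C = 1e3, 1e-6  # 1 kOhm, 1 uF -> cutoff around 159 Hz

# Well below cutoff the capacitor looks open and the signal passes nearly unchanged;
# well above cutoff it looks like a short to ground and the output collapses.
low = rc_lowpass_gain(1.0, R, C)
high = rc_lowpass_gain(1e6, R, C)
```

So the output is tied to ground only at high frequencies, which is exactly the low-pass behaviour described.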

## Slope

Why does the text show a cutoff of 6 dB per octave while the bode plot shows 20 dB per octave? Shouldn't it be 20 dB per octave in the text also? If the frequency doubles, there is a change of 6 dB, but an octave is a factor 10 change in frequency.

The bode plot shows 20 dB per decade, not octave. An octave is a doubling in frequency; a decade is 10x frequency change. It would be nice if the text said something about 20 dB per decade, since the diagram emphasizes that. Pfalstad 12:40, 25 January 2006 (UTC)
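The two figures are consistent and easy to verify: far above cutoff, a first-order filter loses about 6 dB per octave, which is the same slope as 20 dB per decade. A quick sketch with an arbitrary cutoff:

```python
import math

def first_order_gain_db(f, fc):
    """Magnitude of a first-order low-pass, 1/sqrt(1 + (f/fc)^2), expressed in dB."""
    return 20.0 * math.log10(1.0 / math.sqrt(1.0 + (f / fc) ** 2))

fc = 1000.0  # illustrative cutoff frequency

# Far above cutoff, doubling the frequency costs ~6 dB and a 10x step costs ~20 dB.
per_octave = first_order_gain_db(16 * fc, fc) - first_order_gain_db(8 * fc, fc)
per_decade = first_order_gain_db(1000 * fc, fc) - first_order_gain_db(100 * fc, fc)
```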

## math tags

I've edited the "Passive digital realization" section to use math tags. It would be swell if someone could check my work, then delete the text-only version if I've got it right.

I know the intention of the section is to show that the new output is determined by the previous output, but this isn't made clear by the representation, even in the text-only version. Right now, $y$ appears on both sides of the equation. What's the best way to fix this? -- Mikeblas 15:36, 28 July 2006 (UTC)

Fixed... Anyway, I'm not so sure about the correctness of the first (alpha) formula. Two sources report that the correct value is $\alpha = e^{-2\pi f_{c}}$ where $f_{c} = \frac{f_{-3\,\mathrm{dB}}}{f_{sample}}$... I'll investigate more. Anyway, why is the first formula definitely bigger? (solved, it was just that the second, simpler one wasn't rendered in PNG 10:11, 16 September 2006 (UTC)) Danilo Roascio 17:39, 14 September 2006 (UTC)
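For what it's worth, the two forms can be compared numerically. The article's backward-Euler coefficient and the exponential form (which comes from mapping the analog pole exactly) nearly coincide whenever the cutoff is far below the sample rate; the quoted source's exponential appears to be the coefficient on the *previous output*, i.e. the article's 1 − α. A sketch with an assumed 48 kHz sample rate and 100 Hz cutoff:

```python
import math

fs = 48000.0          # sample rate (illustrative)
fc = 100.0            # -3 dB cutoff (illustrative)
dt = 1.0 / fs
rc = 1.0 / (2.0 * math.pi * fc)

alpha_euler = dt / (rc + dt)             # discretization used in the article
alpha_exact = 1.0 - math.exp(-dt / rc)   # exact pole mapping; note dt/rc = 2*pi*fc/fs

# For fc << fs, both are approximately 2*pi*fc/fs.
approx = 2.0 * math.pi * fc / fs
```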

## Why only electronic types?

There are mechanical and acoustic (optical?) low-pass filters. Should this page mention them? I think so.--Tugjob 15:25, 22 June 2007 (UTC)

Yes, maybe a section on "other domains" or something. Dicklyon 16:56, 22 June 2007 (UTC)

I think a very general description of low pass filtering (in all its forms) and then the different types would be appropriate. Any volunteers?--Tugjob 23:26, 12 July 2007 (UTC)

## Confusing Paragraph

If someone could enumerate a little more on this paragraph:

However, this filter is not realizable for practical, real signals because the sinc function extends to infinity. The filter would therefore need to predict the future and have infinite knowledge of the past in order to perform the convolution. It is effectively realizable for pre-recorded digital signals (by padding the ends of the signal with zeros to the point that the error after filtering is less than the quantization error), or perfectly cyclic signals that repeat for infinity.

It is interesting, but confusing to someone with only slight background of the area.

See if my edits helped. Dicklyon 19:08, 26 June 2007 (UTC)

## Attenuates versus reduces

See definitions of attenuate. They usually involve "the level of" (or the strength, power, magnitude, etc.). The "reduces" currently there in parens is not a great synonym, because it leads to edits such as I just reverted with "reduces the frequencies". The trouble with this is that frequencies are not what is changed. What is reduced is the level of the frequency components. An edit that reflects this understanding would be welcome. Dicklyon 23:05, 12 July 2007 (UTC)

## Realizable for finite, or for infinite, signals?

BrianWilloughby, I concur that "this second paragraph seems to have a history of misinterpretation," but I'm not sure I agree with you on what the right statement should be. Seems to me that when the signal is of finite duration, then the filtering with the infinite impulse response is possible, or realizable; whereas if the input signal is of infinite duration, then it would require both an infinite amount of computation and an infinite delay, and the ideal filter is therefore NOT realizable for such signals. What's your interpretation? Dicklyon 22:46, 12 August 2007 (UTC)

I'm glad you questioned this because the edit caught my eye too. It occurs to me that Brian must be speaking to the infinite duration of the sinc signal required for the convolution, while the original version of the paragraph was speaking to the finite duration of the signal to be filtered. Alfred Centauri 23:06, 12 August 2007 (UTC)

As far as my understanding goes, an ideal filter is *perfectly* realizable in the frequency domain on a digital signal of finite length but not *desirable* due to the Gibbs effect.

Since I am involved primarily with vision systems I will be talking about length and the spatial-domain instead of duration and the time-domain.
Take for example a snapshot of a square wave across a horizontal scanline in the spatial domain (for simplicity assume only 1 dimension). In this scenario the pixel intensity is a function of the pixel coordinate, and since we are talking about a square wave this would be a dark segment followed by a bright segment followed by a dark segment. There are two sharp transitions (edges), dark to bright and bright to dark. Say we want to smooth these transitions (soften the edges) with the help of an ideal low-pass filter constructed in the frequency domain. Its frequency response would look like a step function and its phase response would be zero at all frequencies. Assume that the length of our scanline is 1024 pixels; then we would need exactly 513 frequencies (due to the redundancies in the output of a DFT on real input) in order to fully reconstruct the scanline in the spatial domain with absolutely no loss of information.

If we take the FFT of our scanline, apply the ideal low-pass filter by multiplication in the frequency domain, and then do the inverse FFT to go back to the spatial domain, the resulting scanline will have overshoots and ringing around the edges of the square wave. This happens because we have thrown away a lot of the high frequencies that would otherwise be needed for its full reconstruction. As said earlier, we need exactly 513 unique frequencies to fully reconstruct this wave, but we are now effectively using much less. Of course, our purpose is not reconstruction but smoothing. The ideal low-pass filter does exactly what it is supposed to do and it is 100% accurate. The problem is that the ideal low-pass is not suitable for this application. We instead need to keep some higher frequencies to reduce overshoot and ringing at the edges. In this case a Gaussian low-pass would do a much better job.

Please refer to the "Gibbs phenomenon" article for more information. Also forgive me and correct me if I am wrong or if the above explanation is not applicable to other fields. erm 13:50, 5 October 2007 (UTC)
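The scanline experiment described above is easy to reproduce. A sketch (segment boundaries and brick-wall cutoff chosen arbitrarily), assuming NumPy is available:

```python
import numpy as np

N = 1024
x = np.zeros(N)
x[256:768] = 1.0                 # dark-bright-dark square wave along the scanline

X = np.fft.rfft(x)               # 513 unique bins for real input of length 1024
H = np.zeros(X.shape)
H[:64] = 1.0                     # ideal (brick-wall) low-pass: keep the lowest 64 bins
y = np.fft.irfft(X * H, n=N)

overshoot = y.max() - 1.0        # Gibbs ringing pushes the output above the original level
```

The overshoot near the edges is the Gibbs phenomenon (roughly 9% of the step height), regardless of how many low-frequency bins are kept.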

## Passive electronic realization

In the section "Passive electronic realization" it states that DC cannot flow through the capacitor and AC can. It is to my understanding, excluding leakage, that no current flows through the capacitor. The charge is stored on one of the plates depending on the direction of the current. When the current changes direction the charge is released from the respective plate and built up on the opposite plate.

Rrace001 15:05, 25 October 2007 (UTC)

That's technically correct. Current does not actually flow from one plate of the capacitor, across the dielectric, and onto the other plate. But for all practical purposes a capacitor can be analyzed as if the current did flow in this way. It's long-standing practice to state that AC current flows through a capacitor... though it's definitely confusing to those who are just learning about electronics. Mathematically the reactance of a capacitor in an AC circuit is 1/(2*pi*f*C), so at very high frequencies it's essentially a short circuit. ǝɹʎℲxoɯ (contrib) 15:15, 25 October 2007 (UTC)
I'd say it's not technically correct. As with other two-terminal devices, the current flowing through a device is the current flowing into one terminal and out the other. What happens inside is just a distraction for you. And DC current can flow through a capacitor; but it takes a linear voltage ramp to make that happen, and it may be hard to sustain that for a long time; there certainly are applications, such as integrators in photodetector cells, where the voltage that represents the integral of a DC current is what you care about. Dicklyon 22:14, 25 October 2007 (UTC)
I think we basically agree. In standard electronics usage, the current through a device is defined as you express it. However, this definition is somewhat contradictory to the everyday meaning of "through" which means "in one side, across the middle by some path, out the other side." ǝɹʎℲxoɯ (contrib) 00:46, 26 October 2007 (UTC)
Are you concerned that the electrons that come out are not the same ones that went in? Are you not believing that electrons are indistinguishable, and hence that question has no meaning? Dicklyon 01:30, 26 October 2007 (UTC)
Well... I work in a nanoelectronics lab where I try to understand novel semiconductors, so I'm well-aware that electrons are indistinguishable :-) I'm just saying that the conventional, everyday notion of flow "through" something doesn't intuitively match up with what goes on in a capacitor. I remember having trouble with that notion when playing with my first electronics kit when I was 10 or 11 and all the way till I learned about circuit design in college. This article isn't necessarily the best place to clarify that, but maybe the capacitor article could? ǝɹʎℲxoɯ (contrib) 02:38, 26 October 2007 (UTC)
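Either way, the practical "acts like a short at high frequency" claim is just the reactance formula quoted above, 1/(2*pi*f*C). A quick sketch with an arbitrary 1 µF part:

```python
import math

def capacitive_reactance(f_hz, c_farads):
    """Magnitude of capacitive reactance, 1/(2*pi*f*C), in ohms."""
    return 1.0 / (2.0 * math.pi * f_hz * c_farads)

C = 1e-6  # 1 uF, an arbitrary illustration

at_50hz = capacitive_reactance(50.0, C)   # ~3.2 kOhm: nowhere near a short
at_1mhz = capacitive_reactance(1e6, C)    # ~0.16 Ohm: effectively a short circuit
```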

## Electronic low-pass filters

I have removed my question Karlwalton (talk) 15:23, 30 March 2008 (UTC)

## Bibliography?

Can somebody add bibliography entries? I'm especially interested in the Digital simulation part, as I would like to reference it in my thesis. —Preceding unsigned comment added by Dennisberger (talkcontribs) 13:17, 20 August 2008 (UTC)

## A few edits in the discrete-time section

Corrected a typo in the equation with the comments above the terms (wish these equations had numbers). Changed the indexing variable from "n" to "i" to reduce confusion -- don't see why "n" is needed (?). Added a comment and wikilink re the equivalence of this IIR filter to an EWMA. I plan to do some contributions on the EWMA's statistical aspects later (on the Expo Smoothing page, not here). Will probably add some more material regarding the conversion from "s" to "z" with a primary reference of Oppenheim and Schafer, although this stuff is in any sig proc book and some refs should be provided (see the talk entry right above this one...). I'm particularly keen on making the connection between two groups of folks who, in my 40+ years of technical work, don't talk much to each other -- that would be your EE's and your statisticians. Each group has much to offer the other. Rb88guy (talk) 21:35, 6 January 2009 (UTC)

Made some slight modifications and copyedits to your recent changes. —TedPavlic | (talk) 22:43, 6 January 2009 (UTC)
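The EE/statistics connection mentioned above is concrete: the discrete-time RC low-pass recurrence y_i = α·x_i + (1−α)·y_{i−1} is, term for term, an exponentially weighted moving average. A sketch (initial state assumed to be zero, i.e. the filter starts at rest):

```python
def lowpass_ewma(xs, alpha):
    """First-order IIR low-pass, identical in form to an EWMA:
    y_i = alpha * x_i + (1 - alpha) * y_{i-1}."""
    ys = []
    y = 0.0  # assumed initial condition: filter at rest
    for x in xs:
        y = alpha * x + (1.0 - alpha) * y
        ys.append(y)
    return ys

# A unit step settles exponentially toward 1, as expected of an RC low-pass.
step_response = lowpass_ewma([1.0] * 50, alpha=0.2)
```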

## Linguistic question on low-pass filters

Can anyone tell me what it means if someone says "be careful with what John says, you have to apply a low-pass filter to what he says"? I assume in this context it means that "John" says lots of things, lots of noise, and applying a "low-pass filter" would result in getting the true meaning of what's being said? —Preceding unsigned comment added by 94.247.216.78 (talk) 09:49, 28 May 2009 (UTC)

## Phase Shift of LPF ?

There's a lot of material here on the gain of the LPF but only the most superficial coverage of phase shift. The article should at least have a plot of phase shift vs frequency and maybe a few key values. 203.206.220.108 (talk) —Preceding undated comment added 13:54, 18 June 2009 (UTC).

## Can I put this example (integrator)?

Why is integration a low-pass filter?

The integration of $\sin(\omega t)$ produces $v_{out} = -\frac{1}{\omega}\cos(\omega t)$, that is, a signal $\omega$ times weaker. --Javalenok (talk) 14:40, 25 July 2009 (UTC)

## 6 dB!

"A first-order filter, for example, will reduce the signal amplitude by half (so power reduces by 6 dB) every time the frequency doubles (goes up one octave); more precisely, the power rolloff approaches 20 dB per decade in the limit of high frequency. "

I might not be correct, but shouldn't that sentence say "voltage/amplitude reduces by 6 dB" ? —Preceding unsigned comment added by 94.237.36.18 (talk) 14:08, 30 January 2010 (UTC)

You are incorrect. Regardless of units, $\log_{10}(0.5) \approx -0.3$ and $\log_{10}(0.25) \approx -0.6$. When the signal amplitude is cut in half, the power (signal squared) is quartered. Hence, every octave (when frequency is doubled), the first-order filter reduces amplitude by 0.5 and the signal power by 0.25. The statement as written is correct. —TedPavlic (talk/contrib/@) 16:10, 31 January 2010 (UTC)

Continuing: According to the definitions in http://en.wikipedia.org/wiki/Decibel , dividing a voltage in half means -6 dB —Preceding unsigned comment added by 94.237.36.18 (talk) 14:25, 30 January 2010 (UTC)

That section is a mess. Looks like a cleanup is in order. Alfred Centauri (talk) 14:19, 30 January 2010 (UTC)
Perhaps you are reading the section too quickly. The content appears fine, but perhaps its ordering and juxtaposition of facts is not consistent with what a conventional reader is expecting, and that might be what is causing the confusion. —TedPavlic (talk/contrib/@) 16:10, 31 January 2010 (UTC)
I agree, the factual content is not at issue, it is the presentation. IMHO, it is a mess for the very reason that it is confusing and may even appear contradictory to some readers. If I find the time, I may work on this section. Alfred Centauri (talk) 23:20, 31 January 2010 (UTC)

Continuing 2: I think the graph http://en.wikipedia.org/wiki/File:Butterworth_response.svg also mixes two different things: 3dB is in power and 20dB/decade is in voltages. —Preceding unsigned comment added by 94.237.36.18 (talk) 14:35, 30 January 2010 (UTC)

I agree that the section is confusing but I think you might be confused too (or heck, maybe I'm confused?). Think of the transfer function for a 1st order LPF. At the cutoff frequency, the voltage ratio is 0.707 (-3 dBV) and the power ratio is 0.5 (-3 dBW), right?
Makes sense, but it confuses people, because -3 dB is often called the half-power point. Also, if I'm not completely confused here, the caption refers to the Y-axis as the "power gain" and the -20 dB/decade in the picture is in voltages anyway?--94.237.36.18 (talk) 15:00, 30 January 2010 (UTC)
Any measurement given in decibels is implicitly describing power (for several mathematical and practical reasons). The decibel value of any ratio is given by $10\log_{10}(\text{ratio})$. When the ratio is an amplitude, it must be squared to give its power. Note that $10\log_{10}(\text{ratio}^{2}) = 20\log_{10}(\text{ratio})$. The half-power point is the point where $\text{ratio}^{2}$ is halved, and hence where $\text{ratio}$ decreases by $1/\sqrt{2}$. The 20 dB/decade does refer to power. For a first-order filter, the amplitude decreases by 1/10 every decade, and hence the power decreases by 1/100. It is the case that $\log_{10}(1/100) = -2$. So the ratios/numbers/language in the article are presently correct. —TedPavlic (talk/contrib/@) 16:10, 31 January 2010 (UTC)
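The numbers in this thread all come from the two equivalent dB conventions: 10·log10 of a power ratio equals 20·log10 of the corresponding amplitude ratio. A quick sketch:

```python
import math

amp_ratio = 0.5                                    # amplitude cut in half
db_from_power = 10.0 * math.log10(amp_ratio ** 2)  # power is quartered: ~ -6.02 dB
db_from_amp = 20.0 * math.log10(amp_ratio)         # same number by the amplitude convention

half_power_db = 10.0 * math.log10(0.5)             # the -3 dB (half-power) point
```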

## Integrator

I have challenged the assertion that an integrator is a low-pass filter on the talk page of the editor who reverted me. Since they have not responded I am copying the entire thread here for comment. SpinningSpark 06:20, 9 April 2010 (UTC)

Regarding your reversion, of course it makes a difference what it was designed for. That's a bit like saying a brick is an example of a paperweight. No it isn't, you can use it as a paperweight but it is still a brick. Frankly, I don't think it should be in there at all - it would be more straightforward to say that a simple RC circuit is an example of a one-pole filter. And if we are talking about an ideal integrator, that really does not have any useful filtering characteristics as it is going to attenuate everything above DC! SpinningSpark 02:20, 5 April 2010 (UTC)

So your analogy does not make any sense. A brick is rarely a paperweight. But an integrator is always a low pass filter. It is a very general concept that is not specific to electric circuits. It is also very common. That makes it the perfect example of a low-pass filter.
I still do not understand your issue with the current wording, but perhaps the following would be more to your taste? "An integrator has low-pass filter characteristics." --Drizzd (talk) 08:24, 5 April 2010 (UTC)
That's what I was trying to say in the edit you reverted. It really is the case that an integrator only acts as an LPF when it is a non-ideal integrator. An RC circuit is a filter, and also approximates to an integrator within a certain range. But an op-amp integrator circuit comes close to an ideal integrator which will have a 1/f frequency response. The pole is at f=0 and that will also be the knee frequency. That is, an ideal integrator is not a low-pass filter, it is a DC-pass filter! SpinningSpark 10:06, 5 April 2010 (UTC)
If you are still not convinced, try considering what an ideal integrator will do to a square pulse in the time domain compared to what you expect from an ideal (brick-wall) filter. SpinningSpark 10:12, 5 April 2010 (UTC)
Also, do you have a citation for this claim? There should be nothing going into Wikipedia that cannot be referenced to a reliable source. Material challenged or likely to be challenged must be attributed to a reliable, published source using an inline citation. SpinningSpark 10:19, 5 April 2010 (UTC)
"From the transfer function in Eq. (2.2) and our discussion of the frequency response of low-pass single-time-constant networks in Chapter 1 (see also Appendix E), it is easy to see that the Miller integrator will have the magnitude response shown in Fig. 2.11, which is identical to that of a low-pass network with zero corner frequency." Sedra & Smith, "Microelectronic Circuits", 3rd Edition, pg. 60.
Of course the ideal integrator is the limiting case of a 1st order LPF as the pole goes to zero and the DC gain goes to infinity. This is most easily seen by writing the transfer function for the inverting 1st order ideal op-amp LPF and taking the limit as R2 (the resistor in parallel with the C) goes to infinity. Alfred Centauri (talk) 13:18, 9 April 2010 (UTC)

I intend to soon remove the statement entirely if there is no response. SpinningSpark 06:20, 9 April 2010 (UTC)

I've added the cite above and specified that the integrator is an example of a single time constant LPF. In the above thread, you propose to compare an ideal integrator with an ideal LPF. Why? Alfred Centauri (talk) 14:33, 9 April 2010 (UTC)
The reason for the comparison is that the claim is that an integrator is an example of an LPF. I agree with everything you have said above technically, and I agree that there is a strong relationship between integrator and LPF. Where I take issue is with the simplistic statement that an integrator is an example of an LPF. As you point out, it is a limiting case, but it cannot sensibly actually be used as an LPF. There is a place for discussing this relationship in this article but it needs a better explanation than the current very misleading statement. This treatment by Maxim has a very good approach in my view. You will note that while they call an integrator a filter, they studiously avoid calling it a low-pass filter. The crucial difference is that an op-amp filter transfer function can be rescaled to unity gain in the passband, it can even be made unity gain by suitable choice of resistors. Attempting to do this with an integrator fails because the f=0 gain is infinite and everything else then gets rescaled to zero. Going back to the effect on a pulse, putting a pulse through a LPF results in a pulse coming out the other end (with some ringing on the edges); putting a pulse through an integrator loses the pulse altogether with just a change in dc level. SpinningSpark 17:12, 9 April 2010 (UTC)
I think you misunderstood. I asked why you proposed to compare an ideal integrator to an ideal LPF, i.e., a (non-causal) brick wall LPF. The integrator is the limiting case of a (causal) 1st order LPF. It's like you asked to compare the time domain response of a 1st order LPF to an ideal LPF.
If you apply a pulse to the input of a 1st order LPF, you will not observe ringing on the edges of the output
I think your point about rescaling the passband gain fails to take into account gain-bandwidth product. The passband of an ideal integrator has measure zero. The gain must be infinite at f=0 else the gain-bandwidth product would be zero. The point is that if you consider the family of 1st order LPFs with a given non-zero gain-bandwidth product, you'll find the integrator is a member.
Lastly, your point that the integrator cannot sensibly be used as a LPF is odd. Consider a 1st order LPF with the pole at 1 yoctohertz, or a 1st order LPF with the pole at 1 yottahertz. I would think that neither of these could sensibly be used as LPFs and yet it is undeniable that they are indeed LPFs. Alfred Centauri (talk) 20:25, 9 April 2010 (UTC)
I don't disagree with you, and I don't think it will be productive for me to pursue an argument over which is the more meaningful comparison. My basic point is that putting this in a list of example LPFs with no explanation is highly misleading for a reader who does not already understand the principles. SpinningSpark 18:43, 10 April 2010 (UTC)
Don't you think this is much ado about nothing? Remove the statement if you truly believe that those that don't understand the principles (what principles?) will be "highly misled" by the integrator example. Just be aware that it is certain someone else will come along and put it back in. It's inevitable.
My two cents: whenever I read a statement that I don't 'get' in a Wikipedia article, textbook, etc.; a statement that seems to contradict what I understand, I generally see it as an opportunity to broaden the scope of and perhaps correct my understanding. Someone may come along and, after reading that integrator example, make their own effort to understand why it is an (extreme) example of an LPF and thus make a connection they had not made before. Alfred Centauri (talk) 00:53, 11 April 2010 (UTC)

## 3dB/6dB per 8ve

I think that this edit is misguided, or perhaps the editor was just having a bad day. First-order filters have an asymptotic roll-off of 6 dB/8ve or 20 dB/decade, as is stated as a basic analysis topic in numerous textbooks. Each additional order adds a further 6 dB/8ve. I am inclined to revert this but would like to hear from Wytshymanski first to understand the point he is trying to make. SpinningSpark 17:49, 22 March 2011 (UTC)

His revert was good, and his next edit had a mixture of good corrections and new errors. Some careful cleanup is in order. Dicklyon (talk) 18:47, 22 March 2011 (UTC)
Actually, it looks like the only correction was removing a 3 dB that should have been 6; the rest was wrong, so I reverted and made the correction more explicitly, pointing out that a halving of amplitude is a factor of 4 in power, or 6 dB. Dicklyon (talk) 18:51, 22 March 2011 (UTC)
Are we describing power or amplitude? Half power is always 3 dB down. --Wtshymanski (talk) 18:59, 22 March 2011 (UTC)
Evidently the brain cell in charge of remembering this had died. My apologies. --Wtshymanski (talk) 19:06, 22 March 2011 (UTC)

## No LC networks?

Here's one in commons...

That's odd, since it's one of the most popular for (audio/speaker) crossover circuits. Also, I found articles mentioning 'k' filters and such so there should at least be some links to other types of (passive component) low-pass filters. — Preceding unsigned comment added by 71.196.246.113 (talk) 21:53, 18 August 2011 (UTC)

Here's another...
Good idea. Find in commons and Chebyshev filter and Butterworth filter among other places. Dicklyon (talk) 00:32, 19 August 2011 (UTC)
There are also inductive low-pass filters; strange they're not here either. Charlieb000 (talk) 21:07, 21 September 2012 (UTC)

## Discrete-time realization - How to derive the formula "(1-alpha)"?

How do you get from this:

$y_i = \overbrace{x_i\left(\frac{\Delta_T}{RC+\Delta_T}\right)}^{\text{Input contribution}} + \overbrace{y_{i-1}\left(\frac{RC}{RC+\Delta_T}\right)}^{\text{Inertia from previous output}}.$

to this:

$y_i = \alpha x_i + (1-\alpha)y_{i-1} \qquad\text{where}\qquad \alpha \triangleq \frac{\Delta_T}{RC+\Delta_T}$

??

I can't figure it out. --86.14.215.103 (talk) 23:21, 29 September 2012 (UTC)

It is simple algebra. Try working backwards, which has a more straightforward substitution. That is, substitute the expression for α into the second expression and the result should be the first expression. SpinningSpark 10:13, 30 September 2012 (UTC)
I tried working backwards but I still can't do it. I end up with two $y_{i-1}$ terms on the right-hand side in the 2nd equation, but there is only one $y_{i-1}$ in the first equation. Can someone please show the steps to derive the "(1-α)" part? I'm also confused why it is -α instead of +α, when there are no negative terms in the first equation.--86.14.215.103 (talk) 13:45, 30 September 2012 (UTC)
We wish to show that

$\frac{RC}{RC+\Delta_T} = 1-\alpha. \qquad (1)$

Dividing top and bottom of the LHS by RC,

$\frac{RC}{RC+\Delta_T} = \frac{1}{1+\Delta_T/RC}. \qquad (2)$

From the definition of α, and dividing top and bottom by ΔT,

$\alpha = \frac{\Delta_T}{RC+\Delta_T} = \frac{1}{RC/\Delta_T + 1}. \qquad (3)$

Rearranging,

$\frac{RC}{\Delta_T} = \frac{1}{\alpha} - 1 = \frac{1-\alpha}{\alpha}$

or,

$\frac{\Delta_T}{RC} = \frac{\alpha}{1-\alpha}. \qquad (4)$

Substituting Eq. 4 into Eq. 2,

$\frac{1}{1+\frac{\alpha}{1-\alpha}} = \frac{1-\alpha}{(1-\alpha)+\alpha} = 1-\alpha$

as required.
SpinningSpark 15:15, 30 September 2012 (UTC)

Ah that's great, thanks! I think I understand now. The key point is your Equation #3. There may be an easier way to derive (1-alpha) without working backwards:

Since $\alpha = \frac{\Delta_T}{RC+\Delta_T}$, the coefficient of $y_{i-1}$ can be rewritten directly:

$\frac{RC}{RC+\Delta_T} = \frac{(RC+\Delta_T)-\Delta_T}{RC+\Delta_T} = 1 - \frac{\Delta_T}{RC+\Delta_T} = 1-\alpha$

so $y_i = x_i\left(\alpha\right) + y_{i-1}\left(1-\alpha\right)$ as required.
Should this derivation be included in the article?

No. SpinningSpark 19:18, 30 September 2012 (UTC)
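For anyone following the algebra above, the identity RC/(RC + ΔT) = 1 − α can also be confirmed numerically for arbitrary values:

```python
# Check RC/(RC + dt) == 1 - alpha, with alpha = dt/(RC + dt),
# for a few arbitrary (RC, dt) pairs.
for rc, dt in [(1e-3, 1e-5), (0.5, 0.01), (2.0, 2.0)]:
    alpha = dt / (rc + dt)
    lhs = rc / (rc + dt)
    assert abs(lhs - (1.0 - alpha)) < 1e-12
ok = True
```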

## High attention cross reference needed?

Why do we need a "See also" notice in the beginning? Shouldn't it just be in the "See also" section? --Mortense (talk) 14:43, 24 December 2013 (UTC)

Don't see any particularly good reason for that, I have reverted it. Not sure we need it as a see also at all, it is already in the navbox at the bottom and is linked from the lede of the article. SpinningSpark 18:58, 24 December 2013 (UTC)