{{Modulation techniques}}
'''Amplitude-shift keying''' ('''ASK''') is a form of [[amplitude modulation]] that represents [[Digital data|digital]] [[data]] as variations in the [[amplitude]] of a [[carrier wave]].
In the simplest ASK system, the binary symbol 1 is represented by transmitting a carrier wave of fixed amplitude and fixed frequency for a bit duration of ''T'' seconds; the binary symbol 0 is represented by switching the carrier off for the same duration.


Any digital modulation scheme uses a [[wiktionary:finite|finite]] number of distinct signals to represent digital data. ASK uses a finite number of amplitudes, each assigned a unique pattern of [[bit|binary digit]]s. Usually, each amplitude encodes an equal number of bits. Each pattern of bits forms the [[Symbol (data)|symbol]] that is represented by the particular amplitude. The [[demodulator]], which is designed specifically for the symbol-set used by the modulator, determines the amplitude of the received signal and maps it back to the symbol it represents, thus recovering the original data. [[Frequency]] and [[Phase (waves)|phase]] of the carrier are kept constant.


Like [[Amplitude modulation|AM]], ASK is also linear and sensitive to atmospheric noise, distortions, propagation conditions on different routes in [[PSTN]], etc.  Both ASK modulation and demodulation processes are relatively inexpensive. The ASK technique is also commonly used to transmit [[digital data]] over optical fiber. For LED transmitters, binary 1 is represented by a short pulse of light and binary 0 by the absence of light. Laser transmitters normally have a fixed "bias" current that causes the device to emit a low light level. This low level represents binary 0, while a higher-amplitude lightwave represents binary 1.


The simplest and most common form of ASK operates as a switch, using the presence of a carrier wave to indicate a binary one and its absence to indicate a binary zero. This type of modulation is called [[on-off keying]] (OOK), and is used at radio frequencies to transmit Morse code (referred to as continuous wave operation).
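As an illustrative sketch (not from any particular implementation; the sample rate and carrier frequency below are arbitrary choices), on-off keying can be simulated by switching a sinusoidal carrier on and off per bit:

```python
import math

def ook(bits, samples_per_bit=8, cycles_per_bit=2):
    """On-off keying: transmit a sinusoidal carrier for a 1, nothing for a 0."""
    signal = []
    for bit in bits:
        for k in range(samples_per_bit):
            phase = 2 * math.pi * cycles_per_bit * k / samples_per_bit
            signal.append(math.sin(phase) if bit else 0.0)
    return signal

s = ook([1, 0])
# Carrier present during the 1, silence during the 0:
print(any(abs(x) > 0.5 for x in s[:8]), all(x == 0.0 for x in s[8:]))  # True True
```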


More sophisticated encoding schemes have been developed which represent data in groups using additional amplitude levels. For instance, a four-level encoding scheme can represent two bits with each shift in amplitude; an eight-level scheme can represent three bits; and so on. These forms of amplitude-shift keying require a high signal-to-noise ratio for their recovery, as by their nature much of the signal is transmitted at reduced power.
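A hypothetical four-level mapping can be sketched as follows (the function name and the normalisation to [0, 1] are illustrative assumptions, not a standard):

```python
# Hypothetical multi-level ASK mapping: with L = 2**k amplitude levels,
# each group of k bits selects one level (here normalised to [0, 1]).

def bits_to_levels(bits, bits_per_symbol=2):
    L = 2 ** bits_per_symbol
    levels = []
    for i in range(0, len(bits), bits_per_symbol):
        group = bits[i:i + bits_per_symbol]
        value = int("".join(map(str, group)), 2)  # e.g. [1, 0] -> 2
        levels.append(value / (L - 1))            # evenly spaced in [0, 1]
    return levels

# Four-level scheme: two bits per amplitude shift.
assert bits_to_levels([0, 0, 0, 1, 1, 0, 1, 1]) == [0.0, 1/3, 2/3, 1.0]
```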


[[File:Ask ideal diagram.png|center|600px|thumbnail|ASK diagram]]


An ASK system can be divided into three blocks: the first represents the transmitter, the second is a linear model of the effects of the channel, and the third shows the structure of the receiver. The following notation is used:


*''h''<sub>t</sub>(''t'') is the carrier signal for the transmission
*''h''<sub>c</sub>(''t'') is the impulse response of the channel
*''n''(''t'') is the noise introduced by the channel
*''h''<sub>r</sub>(''t'') is the filter at the receiver
*''L'' is the number of levels that are used for transmission
*''T''<sub>s</sub> is the time between the generation of two symbols


Different symbols are represented with different voltages. If the maximum allowed value for the voltage is A, then all the possible values are in the range [−A, A] and they are given by:


<math>v_i = \frac{2 A}{L-1} i - A; \quad i = 0,1,\dots, L-1</math>


The difference between one voltage level and the next is:


<math>\Delta = \frac{2 A}{L - 1} </math>
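As a quick numerical check of these two formulas (the values of ''A'' and ''L'' are arbitrary):

```python
# Check of the level formula v_i = (2A/(L-1))*i - A and the spacing
# Delta = 2A/(L-1) between adjacent levels (A and L chosen arbitrarily).

def ask_levels(A, L):
    return [2 * A / (L - 1) * i - A for i in range(L)]

A, L = 3.0, 4
levels = ask_levels(A, L)
delta = 2 * A / (L - 1)

print(levels, delta)  # [-3.0, -1.0, 1.0, 3.0] 2.0
```

The levels run symmetrically from −''A'' to ''A'' in steps of Δ, as the formulas state.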


Considering the picture, the symbols ''v''[''n''] are generated randomly by the source S, and the impulse generator creates impulses with an area of ''v''[''n'']. These impulses are sent to the filter ''h''<sub>t</sub> to be transmitted through the channel. In other words, for each symbol a carrier wave with the corresponding amplitude is sent.


At the output of the transmitter, the signal ''s''(''t'') can be expressed in the form:


<math>s (t) = \sum_{n = -\infty}^\infty v[n] \cdot h_t (t - n T_s)</math>
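In discrete time this sum becomes a convolution of an impulse train carrying the areas ''v''[''n''] with the transmit pulse; a minimal sketch, assuming a rectangular pulse for ''h''<sub>t</sub>:

```python
# Discrete-time sketch of s(t) = sum_n v[n] * h_t(t - n*Ts):
# build an impulse train with area v[n] every `sps` samples, then
# convolve it with the transmit pulse h_t (rectangular, one symbol long).

def modulate(symbols, sps=4):
    h_t = [1.0] * sps
    train = []
    for v in symbols:
        train += [v] + [0.0] * (sps - 1)
    out = [0.0] * (len(train) + len(h_t) - 1)
    for i, x in enumerate(train):          # direct convolution
        for j, h in enumerate(h_t):
            out[i + j] += x * h
    return out[:len(train)]                # trim the convolution tail

print(modulate([-3.0, 3.0]))  # [-3.0, -3.0, -3.0, -3.0, 3.0, 3.0, 3.0, 3.0]
```

Each symbol's amplitude is held for one symbol period, which is exactly the superposition the formula describes.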


In the receiver, after filtering through ''h''<sub>r</sub>(''t''), the signal is:


<math>z(t) = n_r (t) + \sum_{n = -\infty}^{\infty} v[n] \cdot g (t - n T_s)</math>


where we use the notation:


<math>n_r (t) = n(t) * h_r (t)</math>


<math>g(t) = h_t (t) * h_c (t) * h_r (t)</math>


where * indicates the convolution between two signals. After the A/D conversion the signal z[k] can be expressed in the form:


<math>z[k] = n_r [k] + v[k] g[0] + \sum_{n \neq k} v[n] g[k-n]</math>


In this relationship, the second term represents the symbol to be extracted. The others are unwanted: the first one is the effect of noise, the third one is due to the intersymbol interference.


If the filters are chosen so that g(t) will satisfy the Nyquist ISI criterion, then there will be no intersymbol interference and the value of the sum will be zero, so:


<math>z[k] = n_r [k] + v[k] g[0]</math>


and the transmission will be affected only by noise.
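A classic pulse satisfying the Nyquist ISI criterion is the sinc pulse; a quick numerical check (with ''T''<sub>s</sub> = 1, an arbitrary choice):

```python
import math

def g(t, Ts=1.0):
    """Sinc pulse: a classic pulse satisfying the Nyquist ISI criterion."""
    if t == 0:
        return 1.0
    x = math.pi * t / Ts
    return math.sin(x) / x

# g(0) = 1 and g(n*Ts) vanishes for every other integer n: no ISI at the
# sampling instants, so each z[k] depends only on its own symbol v[k].
print(g(0.0), all(abs(g(float(n))) < 1e-9 for n in (-3, -2, -1, 1, 2, 3)))  # 1.0 True
```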


== Probability of error ==


The probability density function of having an error of a given size can be modelled by a Gaussian function; its mean value is the value of the symbol that was sent, and its variance is given by:


<math>\sigma_N^2 = \int_{-\infty}^{+\infty} \Phi_N (f) \cdot |H_r (f)|^2 df</math>
 
where <math>\Phi_N (f)</math> is the spectral density of the noise within the band and <math>H_r (f)</math> is the continuous Fourier transform of the impulse response ''h''<sub>r</sub>(''t'') of the receiver filter.
 
The probability of making an error is given by:
 
<math>P_e = P_{e|H_0} \cdot P_{H_0} + P_{e|H_1} \cdot P_{H_1} + \cdots + P_{e|H_{L-1}} \cdot P_{H_{L-1}}</math>
 
where, for example, <math>P_{e|H_0}</math> is the conditional probability of making an error given that the symbol ''v''<sub>0</sub> has been sent and <math>P_{H_0}</math> is the probability of sending the symbol ''v''<sub>0</sub>.
 
If the probability of sending any symbol is the same, then:
 
<math>P_{H_i} = \frac{1}{L}</math>
 
If we represent all the probability density functions on the same plot against the possible value of the voltage to be transmitted, we get a picture like this (the particular case of L = 4 is shown):
 
[[File:Ask dia calc prob.png|center|800px]]
 
The probability of making an error after a single symbol has been sent is the area of that symbol's Gaussian falling under the density functions of the other symbols; it is shown in cyan for just one of them. The two outermost symbols each have one such tail and the ''L''&nbsp;&minus;&nbsp;2 inner symbols have two, so if we call ''P''<sup>+</sup> the area of one tail, the sum of all the areas is <math>2 L P^+ - 2 P^+</math>. Averaging over the ''L'' equally likely symbols, the total probability of making an error can be expressed in the form:
 
<math>P_e = 2 \left( 1 - \frac{1}{L} \right) P^+</math>
 
We now have to calculate the value of ''P''<sup>+</sup>. To do that, we can move the origin of the reference wherever we want: the area below the function will not change. We are then in the situation shown in the following picture:
 
[[File:Ask dia calc prob 2.png|center|800px]]
 
It does not matter which Gaussian function we consider: the area we want to calculate is the same. The value we are looking for is given by the following integral:
 
<math>P^+ = \int_{\frac{A g(0)}{L-1}}^{\infty} \frac{1}{\sqrt{2 \pi} \sigma_N} e^{-\frac{x^2}{2 \sigma_N^2}} d x = \frac{1}{2} \operatorname{erfc} \left( \frac{A g(0)}{\sqrt{2} (L-1) \sigma_N} \right) </math>
 
where erfc() is the complementary error function. Putting all these results together, the probability of making an error is:
 
<math>P_e = \left( 1 - \frac{1}{L} \right) \operatorname{erfc} \left( \frac{A g(0)}{\sqrt{2} (L-1) \sigma_N} \right) </math>
 
From this formula it is easy to see that the probability of making an error decreases if the maximum amplitude of the transmitted signal or the amplification of the system becomes greater; on the other hand, it increases if the number of levels or the power of the noise becomes greater.
 
This relationship is valid when there is no intersymbol interference, i.e. g(t) is a Nyquist function.
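The final expression can be evaluated directly with the standard-library complementary error function; the parameter values below are arbitrary examples:

```python
import math

def ask_error_probability(A, g0, sigma_n, L):
    """P_e = (1 - 1/L) * erfc(A*g(0) / (sqrt(2)*(L-1)*sigma_N))."""
    arg = A * g0 / (math.sqrt(2) * (L - 1) * sigma_n)
    return (1 - 1 / L) * math.erfc(arg)

# Same amplitude and noise, more levels -> smaller spacing -> more errors:
p2 = ask_error_probability(A=1.0, g0=1.0, sigma_n=0.2, L=2)
p4 = ask_error_probability(A=1.0, g0=1.0, sigma_n=0.2, L=4)
print(p2 < p4)  # True
```

This reproduces numerically the qualitative behaviour noted above: the error probability grows with the number of levels and with the noise power.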


==See also==
* [[Frequency-shift keying]] (FSK)
 
==External links==
*[http://www.maxim-ic.com/appnotes.cfm/an_pk/2815/CMP/WP-21 Calculating the Sensitivity of an Amplitude Shift Keying (ASK) Receiver]


{{DEFAULTSORT:Amplitude-Shift Keying}}
[[Category:Quantized radio modulation modes]]
[[Category:Applied probability]]
[[Category:Fault tolerance]]

Revision as of 01:45, 13 August 2014
