In [[coding theory]], '''block codes''' comprise the large and important family of [[Channel coding|error-correcting codes]] that encode data in blocks.
There is a vast number of examples of block codes, many of which have a wide range of practical applications. Block codes are conceptually useful because they allow coding theorists, [[mathematics|mathematicians]], and [[computer science|computer scientists]] to study the limitations of ''all'' block codes in a unified way.
Such limitations often take the form of ''bounds'' that relate different parameters of the block code to each other, such as its rate and its ability to detect and correct errors.
 
Examples of block codes are [[Reed–Solomon code]]s, [[Hamming code]]s, [[Hadamard code]]s, [[Expander code]]s, [[Golay code]]s, and [[Reed–Muller code]]s. These examples also belong to the class of [[linear code]]s, and hence they are called '''linear block codes'''.
 
== The block code and its parameters ==
 
[[Error-correcting code]]s are used to [[reliability (computer networking)|reliably]] transmit [[digital data]] over unreliable [[communication channel]]s subject to [[channel noise]].
When a sender wants to transmit a possibly very long data stream using a block code, the sender breaks the stream up into pieces of some fixed size. Each such piece is called a ''message'', and the procedure given by the block code encodes each message individually into a codeword, also called a ''block'' in the context of block codes. The sender then transmits all blocks to the receiver, who can in turn use some decoding mechanism to (hopefully) recover the original messages from the possibly corrupted received blocks.
The performance and success of the overall transmission depends on the parameters of the channel and the block code.
 
Formally, a block code is an [[injective]] mapping
:<math>C:\Sigma^k \to \Sigma^n</math>.
Here, <math>\Sigma</math> is a finite and nonempty [[set (mathematics)|set]] and <math>k</math> and <math>n</math> are integers. The meaning and significance of these three parameters and other parameters related to the code are described below.
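
For concreteness, the following is a minimal sketch of a block code as an injective map, using the binary 3-fold repetition code <math>C:\Sigma^1 \to \Sigma^3</math> as the example; the Python names are illustrative only, not from any particular library.

<syntaxhighlight lang="python">
# A block code as an injective map C : Sigma^k -> Sigma^n,
# illustrated by the binary 3-fold repetition code (k = 1, n = 3).
from itertools import product

SIGMA = (0, 1)  # the alphabet, so q = |Sigma| = 2
K, N = 1, 3     # message length and block length

def encode(message):
    """Encode a k-symbol message into an n-symbol codeword."""
    assert len(message) == K
    return message * (N // K)  # repeat the message N/K times

# Injectivity check: distinct messages map to distinct codewords.
codewords = {encode(m) for m in product(SIGMA, repeat=K)}
assert len(codewords) == len(SIGMA) ** K

print(encode((0,)))  # (0, 0, 0)
print(encode((1,)))  # (1, 1, 1)
</syntaxhighlight>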
 
=== The alphabet Σ ===
The data stream to be encoded is modeled as a [[string (computer science)|string]] over some '''alphabet''' <math>\Sigma</math>. The size <math>|\Sigma|</math> of the alphabet is often written as <math>q</math>. If <math>q=2</math>, then the block code is called a ''binary'' block code. In many applications it is useful to consider <math>q</math> to be a [[prime power]], and to identify <math>\Sigma</math> with the [[finite field]] <math>\mathbb F_q</math>.
 
=== The message length ''k'' ===
Messages are elements <math>m</math> of <math>\Sigma^k</math>, that is, strings of length <math>k</math>.
Hence the number <math>k</math> is called the '''message length''' or '''dimension''' of a block code.
 
=== The block length ''n'' ===
The '''block length''' <math>n</math> of a block code is the number of symbols in a block. The elements <math>c</math> of <math>\Sigma^n</math> are strings of length <math>n</math> and correspond to blocks that may be received by the receiver; they are therefore also called received words.
If <math>c=C(m)</math> for some message <math>m</math>, then <math>c</math> is called the codeword of <math>m</math>.
 
=== The rate ''R'' ===
The '''rate''' of a block code is defined as the ratio between its message length and its block length:
:<math>R=k/n</math>.
A large rate means that the amount of actual message per transmitted block is high. In this sense, the rate measures the transmission speed and the quantity <math>1-R</math> measures the overhead that occurs due to the encoding with the block code.
It is a simple [[information theory|information-theoretic]] fact that the rate cannot exceed <math>1</math>, since data cannot be compressed in general. Formally, this follows from the code <math>C</math> being an injective map.
 
=== The distance ''d'' ===
The '''distance''' or '''minimum distance''' <math>d</math> of a block code is the minimum number of positions in which any two distinct codewords differ, and the '''relative distance''' <math>\delta</math> is the fraction <math>d/n</math>.
Formally, for received words <math>c_1,c_2\in\Sigma^n</math>, let <math>\Delta(c_1,c_2)</math> denote the [[Hamming distance]] between <math>c_1</math> and <math>c_2</math>, that is, the number of positions in which <math>c_1</math> and <math>c_2</math> differ.
Then the minimum distance <math>d</math> of the code <math>C</math> is defined as
:<math>d := \min_{m_1,m_2\in\Sigma^k; m_1\neq m_2} \Delta[C(m_1),C(m_2)]</math>.
Since the encoding map of any block code is injective, any two distinct codewords disagree in at least one position, so the distance of any code is at least <math>1</math>. Moreover, for linear block codes the '''distance''' equals the '''[[minimum weight]]''' <math>w_{min}</math>, the smallest number of nonzero positions in any nonzero codeword, because:
:<math>\min_{m_1,m_2\in\Sigma^k; m_1\neq m_2} \Delta[C(m_1),C(m_2)] = \min_{m_1,m_2\in\Sigma^k; m_1\neq m_2} \Delta[\mathbf{0},C(m_1)+C(m_2)] = \min_{m\in\Sigma^k; m\neq\mathbf{0}} w[C(m)] = w_{min}</math>.
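
For a small linear code this equality can be verified by brute force. The following sketch uses the binary <math>[3,2,2]</math> single-parity-check code, an arbitrary small example chosen for illustration:

<syntaxhighlight lang="python">
# Brute-force check that minimum distance equals minimum nonzero weight
# for a toy linear code: the binary [3,2,2] single-parity-check code.
from itertools import combinations, product

G = [(1, 0, 1),
     (0, 1, 1)]  # generator matrix over GF(2)

def encode(m):
    # codeword = m * G over GF(2); zip(*G) iterates over columns of G
    return tuple(sum(mi * gi for mi, gi in zip(m, col)) % 2
                 for col in zip(*G))

codewords = [encode(m) for m in product((0, 1), repeat=2)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

d = min(hamming(c1, c2) for c1, c2 in combinations(codewords, 2))
w_min = min(sum(c) for c in codewords if any(c))
assert d == w_min == 2
</syntaxhighlight>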
 
A larger distance allows for more error correction and detection.
For example, if we only consider errors that may change symbols of the sent codeword but never erase or add them, then the number of errors is the number of positions in which the sent codeword and the received word differ.
A code with distance <math>d</math> allows the receiver to detect up to <math>d-1</math> transmission errors since changing <math>d-1</math> positions of a codeword can never accidentally yield another codeword. Furthermore, if no more than <math>\left\lfloor(d-1)/2\right\rfloor</math> transmission errors occur, the receiver can uniquely decode the received word to a codeword. This is because every received word has at most one codeword within distance <math>\left\lfloor(d-1)/2\right\rfloor</math>. If more than <math>\left\lfloor(d-1)/2\right\rfloor</math> transmission errors occur, the receiver cannot uniquely decode the received word in general, as there might be several candidate codewords. One way for the receiver to cope with this situation is to use [[list-decoding]], in which the decoder outputs a list of all codewords within a certain radius.
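
A minimal sketch of such unique decoding, assuming the code is small enough that its codewords can be listed explicitly:

<syntaxhighlight lang="python">
# Unique decoding up to floor((d-1)/2) errors by exhaustive search.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(received, codewords, d):
    radius = (d - 1) // 2
    close = [c for c in codewords if hamming(received, c) <= radius]
    # The balls of radius floor((d-1)/2) around distinct codewords are
    # disjoint, so at most one codeword can be this close.
    return close[0] if close else None  # None signals a decoding failure
</syntaxhighlight>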
 
=== Popular notation ===
The notation <math>(n,k,d)_q</math> is used as a shorthand for the fact that the block code under consideration is over an alphabet <math>\Sigma</math> of size <math>q</math>, has block length <math>n</math>, message length <math>k</math>, and distance <math>d</math>.
If the block code is a linear block code, then the square brackets in the notation <math>[n,k,d]_q</math> are used to represent that fact.
For binary codes with <math>q=2</math>, the index is sometimes dropped.
For [[maximum distance separable code]]s, the distance is always <math>d=n-k+1</math>; moreover, the precise distance of a code is sometimes not known, non-trivial to prove or state, or not needed. In such cases, the <math>d</math>-component may be omitted.
 
Sometimes, especially for non-block codes, the notation <math>(n,M,d)_q</math> is used for codes that contain <math>M</math> codewords of length <math>n</math>. For block codes with messages of length <math>k</math> over an alphabet of size <math>q</math>, this number would be <math>M=q^k</math>.
 
== Examples ==
 
As mentioned above, there are a vast number of error-correcting codes that are actually block codes.
The first error-correcting code was the [[Hamming(7,4)|Hamming(7,4)-code]], developed by [[Richard W. Hamming]] in 1950. This code transforms a message consisting of 4 bits into a codeword of 7 bits by adding 3 parity bits. Hence this code is a block code. It turns out that it is also a linear code and that it has distance 3. In the shorthand notation above, this means that the Hamming(7,4)-code is a <math>[7,4,3]_2</math>-code.
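
As a sketch, the encoder and parameters of the Hamming(7,4)-code can be checked by brute force; the systematic generator matrix below is one common choice among several equivalent forms.

<syntaxhighlight lang="python">
# Hamming(7,4) in systematic form: 4 message bits plus 3 parity bits.
from itertools import product

G = [(1, 0, 0, 0, 1, 1, 0),
     (0, 1, 0, 0, 1, 0, 1),
     (0, 0, 1, 0, 0, 1, 1),
     (0, 0, 0, 1, 1, 1, 1)]  # one common generator matrix

def encode(m):  # m is a 4-bit message
    return tuple(sum(mi * gi for mi, gi in zip(m, col)) % 2
                 for col in zip(*G))

# Confirm the [7,4,3]_2 parameters: for a linear code, the distance
# equals the minimum weight of a nonzero codeword.
codewords = [encode(m) for m in product((0, 1), repeat=4)]
d = min(sum(c) for c in codewords if any(c))
assert (len(codewords[0]), d) == (7, 3)
</syntaxhighlight>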
 
[[Reed–Solomon code]]s are a family of <math>[n,k,d]_q</math>-codes with <math>d=n-k+1</math> and <math>q</math> being a [[prime power]]. [[Rank error-correcting code|Rank codes]] are a family of <math>[n,k,d]_q</math>-codes with <math>d \leq n-k+1</math>. [[Hadamard code]]s are a family of <math>[n,k,d]_2</math>-codes with <math>n=2^{k-1}</math> and <math>d=2^{k-2}</math>.
 
== Error detection and correction properties ==
 
A codeword <math>c \in \Sigma^n</math> can be considered a point in the <math>n</math>-dimensional space <math>\Sigma^n</math>, and the code <math>\mathcal{C}</math> is a subset of <math>\Sigma^n</math>. That a code <math>\mathcal{C}</math> has distance <math>d</math> means that for every <math>c\in \mathcal{C}</math> there is no other codeword in the ''Hamming ball'' centered at <math>c</math> with radius <math>d-1</math>, where the Hamming ball is defined as the set of words in <math>\Sigma^n</math> whose ''[[Hamming distance]]'' to <math>c</math> is no more than <math>d-1</math>. A code <math> \mathcal{C}</math> with (minimum) distance <math>d</math> therefore has the following properties:
* <math> \mathcal{C}</math> can detect <math>d-1</math> errors: because a codeword <math>c</math> is the only codeword in the Hamming ball centered at itself with radius <math>d-1</math>, no pattern of <math>d-1</math> or fewer errors can change one codeword into another. When the receiver detects that the received word is not a codeword of <math> \mathcal{C}</math>, the errors are detected (but not necessarily correctable).
* <math> \mathcal{C}</math> can correct <math>\textstyle\left\lfloor {{d-1} \over 2}\right\rfloor</math> errors: by the triangle inequality, the Hamming balls of radius <math>\textstyle\left \lfloor {{d-1} \over 2}\right \rfloor</math> centered at two different codewords do not overlap. Therefore, if error correction is viewed as finding the codeword closest to the received word <math>y</math>, then as long as the number of errors is no more than <math>\textstyle\left \lfloor {{d-1} \over 2}\right \rfloor</math>, there is exactly one codeword in the Hamming ball centered at <math>y</math> with radius <math>\textstyle\left \lfloor {{d-1} \over 2}\right \rfloor</math>, and all errors can be corrected.
* In order to decode in the presence of more than <math>\textstyle\left\lfloor {{d-1} \over 2}\right\rfloor</math> errors, [[list-decoding]] or [[Decoding methods#Maximum likelihood decoding|maximum likelihood decoding]] can be used.
* <math> \mathcal{C}</math> can correct <math>d-1</math> [[Binary erasure channel|erasures]], where an ''erasure'' is an error whose position is known. Correction can be achieved by <math>q</math>-pass decoding, sketched below: in the <math>i</math>-th pass, each erased position is filled with the <math>i</math>-th alphabet symbol and error correction is carried out. There must be a pass in which the number of errors is no more than <math>\textstyle\left \lfloor {{d-1} \over 2}\right \rfloor</math>, and therefore the erasures can be corrected.
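
A sketch of the <math>q</math>-pass erasure decoding just described; <code>decode</code> is assumed to be a unique decoder in the style of the earlier sketch, returning a codeword or <code>None</code>.

<syntaxhighlight lang="python">
# q-pass erasure decoding: pass i fills every erased position with the
# i-th alphabet symbol and then runs ordinary error correction.
def decode_with_erasures(received, erased_positions, alphabet, decode):
    for symbol in alphabet:           # one pass per alphabet symbol
        trial = list(received)
        for pos in erased_positions:
            trial[pos] = symbol       # fill all erasures with this symbol
        c = decode(tuple(trial))      # ordinary error correction
        if c is not None:
            return c                  # first successful pass wins
    return None
</syntaxhighlight>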
 
== Lower and upper bounds of block codes ==
 
=== Family of codes ===
 
<math>C =\{C_i\}_{i\ge1}</math> is called a ''family of codes'', where <math>C_i</math> is an <math>(n_i,k_i,d_i)_q</math> code with monotonically increasing block length <math>n_i</math>.

The '''rate''' of a family of codes <math>C</math> is defined as <math>R(C)=\lim_{i\to\infty}{k_i \over n_i}</math>.

The '''relative distance''' of a family of codes <math>C</math> is defined as <math>\delta(C)=\lim_{i\to\infty}{d_i \over n_i}</math>.

To explore the relationship between <math>R(C)</math> and <math>\delta(C)</math>, a number of lower and upper bounds on block codes are known.
 
=== [[Hamming bound]] ===
: <math> R \le 1- {1 \over n} \cdot \log_q \left[\sum_{i=0}^{\left\lfloor {{\delta \cdot n-1}\over 2}\right\rfloor}\binom{n}{i}(q-1)^i\right]</math>
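
The bound can be evaluated numerically; the sketch below reads the error radius <math>\left\lfloor (\delta n - 1)/2 \right\rfloor</math> directly off the formula above.

<syntaxhighlight lang="python">
# Numerical evaluation of the Hamming bound on the rate.
from math import comb, log

def hamming_bound_rate(n, delta, q=2):
    e = (int(delta * n) - 1) // 2              # correctable errors
    ball = sum(comb(n, i) * (q - 1) ** i for i in range(e + 1))
    return 1 - log(ball, q) / n                # upper bound on R

print(hamming_bound_rate(n=1000, delta=0.1))   # roughly 0.72 for q = 2
</syntaxhighlight>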
 
=== [[Singleton bound]] ===
The Singleton bound states that the sum of the rate and the relative distance of a block code cannot be much larger than 1:
:<math> R + \delta \le  1+\frac{1}{n}</math>.
In other words, every block code satisfies the inequality <math>k+d \le n+1 </math>.
[[Reed–Solomon code]]s are non-trivial examples of codes that satisfy the Singleton bound with equality.
 
===[[Plotkin bound]]===
For <math>q=2</math>, <math>R+2\delta\le1</math>.

For the general case, the following Plotkin bounds hold for any code <math>C \subseteq \mathbb{F}_q^{n}</math> with distance <math>d</math>:
# If <math>d=\left(1-{1 \over q}\right)n</math>, then <math>|C| \le 2qn</math>.
# If <math>d > \left(1-{1 \over q}\right)n</math>, then <math>|C| \le {qd \over {qd -(q-1)n}}</math>.

For any <math>q</math>-ary code with relative distance <math>\delta</math>, <math>R \le 1- \left({q \over {q-1}}\right) \delta + o(1)</math>.
 
===[[Gilbert-Varshamov bound|Gilbert–Varshamov bound]]===
For every <math>0 \le \delta \le 1-{1\over q}</math> and <math>0\le \epsilon \le 1- H_q(\delta)</math>, there exist codes with rate <math>R\ge1-H_q(\delta)-\epsilon</math>, where
<math> H_q(x)\equiv_{def} -x\cdot\log_q{x \over {q-1}}-(1-x)\cdot\log_q{(1-x)} </math> is the <math>q</math>-ary entropy function.
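
The entropy function and the resulting rate guarantee can be computed directly from the formulas above (a sketch):

<syntaxhighlight lang="python">
# The q-ary entropy function and the Gilbert–Varshamov rate guarantee.
from math import log

def H_q(x, q=2):
    if x == 0:
        return 0.0
    return -x * log(x / (q - 1), q) - (1 - x) * log(1 - x, q)

def gv_rate(delta, q=2, eps=0.0):
    # Codes of rate at least 1 - H_q(delta) - eps exist
    # whenever 0 <= delta <= 1 - 1/q.
    return 1 - H_q(delta, q) - eps

print(gv_rate(0.1))  # about 0.53 for binary codes
</syntaxhighlight>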
 
=== [[Johnson bound]] ===
Define <math>J_q(\delta) \equiv_{def} \left(1-{1\over q}\right)\left(1-\sqrt{1-{q \delta \over{q-1}}}\right)</math>, and let <math>J_q(n, d, e)</math> be the maximum number of codewords in a Hamming ball of radius <math>e</math> for any code <math>C \subseteq \mathbb{F}_q^n</math> of distance <math>d</math>.

The ''Johnson bound'' states that <math>J_q(n,d,e)\le qnd</math> whenever <math>{e \over n} \le {{q-1}\over q}\left( {1-\sqrt{1-{q \over{q-1}}\cdot{d \over n}}}\, \right)=J_q\left({d \over n}\right)</math>.
 
=== [[Elias Bassalygo bound|Elias–Bassalygo bound]] ===
 
: <math>R={\log_q{|C|} \over n} \le 1-H_q(J_q(\delta))+o(1) </math>
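
Combining <math>J_q</math> with the <math>q</math>-ary entropy function gives a direct numerical evaluation of this bound (a sketch, up to the <math>o(1)</math> term; <code>H_q</code> is as in the Gilbert–Varshamov sketch above):

<syntaxhighlight lang="python">
# The Elias–Bassalygo rate bound, up to the o(1) term.
from math import log, sqrt

def H_q(x, q=2):
    if x == 0:
        return 0.0
    return -x * log(x / (q - 1), q) - (1 - x) * log(1 - x, q)

def J_q(delta, q=2):
    return (1 - 1 / q) * (1 - sqrt(1 - q * delta / (q - 1)))

def elias_bassalygo_rate(delta, q=2):
    return 1 - H_q(J_q(delta, q), q)

print(elias_bassalygo_rate(0.1))  # about 0.70: stronger than Hamming here
</syntaxhighlight>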
 
== Sphere packings and lattices ==
 
Block codes are tied to the [[sphere packing problem]], which has received some attention over the years. In two dimensions, it is easy to visualize: take a handful of pennies flat on the table and push them together, and the result is a hexagonal pattern like a honeycomb. But block codes rely on more dimensions, which cannot easily be visualized. The powerful [[Binary Golay code|Golay code]] used in deep space communications uses 24 dimensions. If used as a binary code (which it usually is), the dimensions refer to the length of the codeword as defined above.
 
The theory of coding uses the <math>N</math>-dimensional sphere model: for example, how many pennies can be packed into a circle on a tabletop, or, in three dimensions, how many marbles can be packed into a globe. Other considerations enter into the choice of a code. For example, hexagonal packing into the constraint of a rectangular box will leave empty space at the corners. As the dimensions get larger, the percentage of empty space grows smaller. But at certain dimensions, the packing uses all the space, and these codes are the so-called ''perfect'' codes. There are very few such codes.
 
Another property is the number of neighbors a single codeword may have.<ref name=schlegel>
{{cite book
| title = Trellis and turbo coding
| author = Christian Schlegel and Lance Pérez
| publisher = Wiley-IEEE
| year = 2004
| isbn = 978-0-471-22755-7
| page = 73
| url = http://books.google.com/books?id=9wRCjfGAaEcC&pg=PA73
}}</ref>
Again, consider pennies as an example. First pack the pennies in a rectangular grid: each penny will have four near neighbors (and four at the corners, which are farther away). In a hexagonal packing, each penny will have six near neighbors. In three and four dimensions, the maximum packing is given by the [[12-face]] and the [[24-cell]], with 12 and 24 neighbors, respectively. As the dimension increases, the number of near neighbors increases very rapidly. In general, the value is given by the [[kissing number]]s.
 
The result is that the number of ways for noise to make the receiver choose
a neighbor (hence an error) grows as well. This is a fundamental limitation
of block codes, and indeed all codes. It may be harder to cause an error to
a single neighbor, but the number of neighbors can be large enough so the
total error probability actually suffers.<ref name=schlegel/>
 
==See also==
* [[Channel capacity]]
* [[Shannon–Hartley theorem]]
* [[Noisy channel]]
* [[List decoding]]
* [[Sphere packing]]
 
== Notes ==
* Atri Rudra, CSE545 Error Correcting Codes: Combinatorics, Algorithms and Applications, State University of New York at Buffalo.
* P Vijay Kumar, Error Correcting Codes, Available on-line,  [http://www.nptel.iitm.ac.in/courses/117108044/ Video lectures], [http://www.nptel.iitm.ac.in/courses/117108044/module1/Lecture_Notes.pdf Lecture notes]
 
== References ==
 
{{reflist}}
 
* {{cite book | author=J.H. van Lint | authorlink=Jack van Lint | title=Introduction to Coding Theory | edition=2nd | publisher=Springer-Verlag | series=[[Graduate Texts in Mathematics|GTM]] | volume=86 | year=1992 | isbn=3-540-54894-7 | page=31}}
* {{cite book | author=F.J. MacWilliams | authorlink=Jessie MacWilliams | coauthors=[[Neil Sloane|N.J.A. Sloane]] | title=The Theory of Error-Correcting Codes | publisher=North-Holland | year=1977 | isbn=0-444-85193-3 | page=35}}
* {{cite book | author=W. Huffman |coauthors=V.Pless | title= Fundamentals of error-correcting codes | publisher=Cambridge University Press | year=2003 | isbn=978-0-521-78280-7}}
 
* {{cite book | author=S. Lin |coauthors=D. J. Jr. Costello | title= Error Control Coding: Fundamentals and Applications | publisher=Prentice-Hall | year=1983 | isbn=0-13-283796-X}}
 
== External links ==
* http://www.cse.buffalo.edu/~atri/courses/coding-theory/
* [http://complextoreal.com/wp-content/uploads/2013/01/block.pdf Coding Concepts and Block Coding]
 
[[Category:Coding theory]]
