{{infobox code
| name          = Hadamard code
| image          =
| image_caption  =
| namesake      = [[Jacques Hadamard]]
| type          = [[Linear code|Linear]] [[block code]]
| block_length  = <math>n=2^k</math>
| message_length = <math>k</math>
| rate          = <math>k/2^k</math>
| distance      = <math>d=2^{k-1}</math>
| alphabet_size  = <math>2</math>
| notation      = <math>[2^{k},k,2^{k-1}]_2</math>-code
}}
{{infobox code
| name          = Punctured Hadamard code
| image          =
| image_caption  =
| namesake      = [[Jacques Hadamard]]
| type          = [[Linear code|Linear]] [[block code]]
| block_length  = <math>n=2^{k-1}</math>
| message_length = <math>k</math>
| rate          = <math>k/2^{k-1}</math>
| distance      = <math>d=2^{k-2}</math>
| alphabet_size  = <math>2</math>
| notation      = <math>[2^{k-1},k,2^{k-2}]_2</math>-code
}}
[[File:Hadamard-Code.svg|thumb|right|250px|Matrix of the [32, 6, 16] Hadamard code for the [[Reed–Muller code]] RM(1, 5), used by the NASA space probe [[Mariner 9]]]]
[[File:Multigrade operator XOR.svg|thumb|250px|[[Exclusive or|XOR]] operations<br>Here the white fields stand for 0<br>and the red fields for 1]]
 
The '''Hadamard code''' is an [[error-correcting code]] that is used for [[error detection and correction]] when transmitting messages over very noisy or unreliable channels.
A famous application of the Hadamard code was the NASA space probe [[Mariner 9]] in 1971, where the code was used to transmit photos of Mars back to Earth.
Because of its unique mathematical properties, the Hadamard code is not only used by engineers, but also intensely studied in [[coding theory]], [[mathematics]], and [[theoretical computer science]].
The Hadamard code is named after the French mathematician [[Jacques Hadamard]] and also known under the names '''Walsh code''' or '''Walsh family'''<ref>See, e.g., {{Harvtxt|Amadei|Manzoli|Merani|2002}}</ref> and '''Walsh–Hadamard code''',<ref>See, e.g., Section 19.2.2 of {{Harvtxt|Arora|Barak|2009}}.</ref> in recognition of the American mathematician [[Joseph Leonard Walsh]].
 
The Hadamard code is an example of a [[linear code]] over a [[binary alphabet]] that maps messages of length <math>k</math> to codewords of length <math>2^k</math>.
It is unique in that each non-zero codeword has a [[Hamming weight]] of ''exactly'' <math>2^k/2</math>, which implies that the [[Block_code#The_distance_d|distance]] of the code is also <math>2^k/2</math>.
In standard [[Block_code#Popular_notation|coding theory notation]] for [[block code]]s, the Hadamard code is a <math>[2^k,k,2^k/2]_2</math>-code, that is, it is a [[linear code]] over a [[binary set|binary alphabet]], has [[Block_code#The_block_length_n|block length]] <math>2^k</math>, [[Block_code#The_message_length_k|message length]] (or dimension) <math>k</math>, and [[Block_code#The_distance_d|minimum distance]] <math>2^k/2</math>.
The block length is very large compared to the message length, but on the other hand, errors can be corrected even in extremely noisy conditions.
The '''punctured Hadamard code''' is a ''slightly'' improved version of the Hadamard code; it is a <math>[2^{k-1},k,2^{k-2}]_2</math>-code and thus has a slightly better [[Block_code#The_rate_R|rate]] while maintaining the relative distance of <math>1/2</math>, and is thus preferred in practical applications.
The Hadamard code is the same as the first order [[Reed–Muller code]] over the binary alphabet.<ref>See, e.g., page 3 of {{harvtxt|Guruswami|2009}}.</ref>
 
Normally, Hadamard codes are based on [[Hadamard matrix#Sylvester's construction|Sylvester's construction of Hadamard matrices]], but the term “Hadamard code” is also used to refer to codes constructed from arbitrary [[Hadamard matrix|Hadamard matrices]], which are not necessarily of Sylvester type.
In general, such a code is not linear.
Such codes were first constructed by [[R. C. Bose]] and [[Sharadchandra Shankar Shrikhande|S. S. Shrikhande]] in 1959.<ref>{{cite journal
| last1 = Bose | first1 = R.C.
| last2 = Shrikhande | first2 = S.S.
| title =A note on a result in the theory of code construction
| journal = Information and Control
| volume = 2
| issue = 2
| year = 1959
| pages = 183–194
| doi = 10.1016/S0019-9958(59)90376-6
| id = {{citeseerx|10.1.1.154.2879}}
}}</ref>
If ''n'' is the size of the Hadamard matrix, the code has parameters <math>(n,2n,n/2)_2</math>, meaning it is a not-necessarily-linear binary code with 2''n'' codewords of block length ''n'' and minimum distance ''n''/2.  The construction and decoding scheme described below apply for general ''n'', but the property of linearity and the identification with Reed–Muller codes require that ''n'' be a power of 2 and that the Hadamard matrix be equivalent to the matrix constructed by Sylvester's method.
 
The Hadamard code is a [[locally decodable]] code, which provides a way to recover parts of the original message with high probability, while only looking at a small fraction of the received word. This gives rise to applications in [[computational complexity theory]] and  particularly in the design of [[probabilistically checkable proofs]].
Since the relative distance of the Hadamard code is 1/2, normally one can only hope to recover from at most a 1/4 fraction of error. Using [[list decoding]], however, it is possible to compute a short list of possible candidate messages as long as less than <math>\frac{1}{2}-\epsilon</math> of the bits in the received word have been corrupted.
 
In [[code division multiple access]] (CDMA) communication, the Hadamard code is referred to as Walsh Code, and is used to define individual [[telecommunications|communication]] [[channel (communications)|channels]].  It is usual in the CDMA literature to refer to codewords as “codes”.  Each user will use a different codeword, or “code”, to modulate their signal.  Because Walsh codewords are mathematically [[orthogonal]], a Walsh-encoded signal appears as [[random noise]] to a CDMA capable mobile [[terminal (telecommunication)|terminal]], unless that terminal uses the same codeword as the one used to encode the incoming [[signal (information theory)|signal]].<ref>{{cite web|url=http://www.complextoreal.com/CDMA.pdf|title=CDMA Tutorial: Intuitive Guide to Principles of Communications|publisher=Complex to Real|accessdate=4 August 2011}}</ref>
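
To illustrate the orthogonality property that CDMA channelization relies on, the following Python sketch (an illustration only; the function name <code>walsh_codewords</code> and the ±1 representation are choices made here, not part of any CDMA standard) generates the Walsh codewords of length 8 and verifies that distinct codewords have zero inner product.

<syntaxhighlight lang="python">
import itertools

def walsh_codewords(k):
    """Return the 2^k Walsh codewords of length 2^k in +/-1 form.

    The codeword for message x has entries (-1)^<x,y> for all y in {0,1}^k,
    i.e. the rows of the 2^k-by-2^k Sylvester-type Hadamard matrix.
    """
    ys = list(itertools.product([0, 1], repeat=k))
    return [[(-1) ** (sum(a * b for a, b in zip(x, y)) % 2) for y in ys]
            for x in itertools.product([0, 1], repeat=k)]

words = walsh_codewords(3)  # 8 codewords of length 8
for a in words:
    for b in words:
        dot = sum(u * v for u, v in zip(a, b))
        # Distinct codewords are orthogonal; a codeword with itself gives the length.
        assert dot == (len(a) if a == b else 0)
print("all pairs of distinct length-8 Walsh codewords are orthogonal")
</syntaxhighlight>

Because the inner product of distinct codewords is zero, a receiver that correlates the incoming signal with its own codeword suppresses the contributions of the other users.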
 
==History==
''Hadamard code'' is the name that is most commonly used for this code in the literature.
[[Jacques Hadamard]] did not invent the code himself, but he defined [[Hadamard matrix|Hadamard matrices]] around 1893, long before the first [[error-correcting code]], the [[Hamming code]], was developed in the 1940s.
The Hadamard code is based on Hadamard matrices, and while there are many different Hadamard matrices that could be used here, normally only [[Hadamard matrix#Sylvester's construction|Sylvester's construction of Hadamard matrices]] is used to obtain the codewords of the Hadamard code.
[[James Joseph Sylvester]] developed his construction of Hadamard matrices in 1867, which actually predates Hadamard's work on Hadamard matrices.
Hence the name ''Hadamard code'' is not universally accepted, and the code is sometimes called ''Walsh code'', honoring the American mathematician [[Joseph Leonard Walsh]].
 
A Hadamard code was used during the 1971 [[Mariner 9]] mission to correct for picture transmission errors. The data words used during this mission were 6 bits long, which represented 64 [[grayscale]] values.
Because of limitations on the quality of the transmitter's alignment, the maximum useful data length was about 30 bits. Instead of using a [[repetition code]], a [32, 6, 16] Hadamard code was used. Errors of up to 7 bits per word could be corrected using this scheme. Compared to a 5-[[repetition code]], the error-correcting properties of this Hadamard code are much better, yet its rate is comparable.
The efficient decoding algorithm was an important factor in the decision to use this code.  The circuitry used was called the "Green Machine". It employed the [[fast Fourier transform]] which can increase the decoding speed by a factor of 3.
Since the 1990s, use of this code by space programs has more or less ceased, and the Deep Space Network does not support this error-correction scheme for its dishes that are greater than 26 m.
 
==Constructions==
While all Hadamard codes are based on Hadamard matrices, the constructions differ in subtle ways for different scientific fields, authors, and uses.
Engineers, who use the codes for data transmission, and [[coding theory|coding theorists]], who analyze extremal properties of codes, typically want the [[Block_code#The_rate_R|rate]] of the code to be as high as possible, even if this means that the construction becomes mathematically slightly less elegant.
On the other hand, for many applications of Hadamard codes in [[theoretical computer science]] it is not so important to achieve the optimal rate, and hence simpler constructions of Hadamard codes are preferred since they can be analyzed more elegantly.
 
=== Construction using inner products ===
 
When given a binary message <math>x\in\{0,1\}^k</math> of length <math>k</math>, the Hadamard code encodes the message into a codeword <math>\text{Had}(x)</math> using an encoding function <math>\text{Had} : \{0,1\}^k\to\{0,1\}^{2^k}</math>.
This function makes use of the [[inner product]] <math>\langle x , y \rangle</math> of two vectors <math>x,y\in\{0,1\}^k</math>, which is defined as follows:
:<math>\langle x , y \rangle = \sum_{i=1}^{k} x_i y_i\ \bmod\ 2\,.</math>
Then the Hadamard encoding of <math>x</math> is defined as the sequence of ''all'' inner products with <math>x</math>:
:<math>\text{Had}(x) = \Big(\langle x , y \rangle\Big)_{y\in\{0,1\}^k}</math>
 
As mentioned above, the ''punctured'' Hadamard code is used in practice since the Hadamard code itself is somewhat wasteful.
This is because, if the first bit of <math>y</math> is zero, <math>y_1=0</math>, then the inner product contains no information whatsoever about <math>x_1</math>, and hence, it is impossible to fully decode <math>x</math> from those positions of the codeword alone.
On the other hand, when the codeword is restricted to the positions where <math>y_1=1</math>, it is still possible to fully decode <math>x</math>.
Hence it makes sense to restrict the Hadamard code to these positions, which gives rise to the ''punctured'' Hadamard encoding of <math>x</math>; that is, <math>\text{pHad}(x) = \Big(\langle x , y \rangle\Big)_{y\in\{1\}\times\{0,1\}^{k-1}}</math>.
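
The definition above translates directly into code. The following Python sketch (for illustration only; the function names <code>Had</code> and <code>pHad</code> simply mirror the notation used here) enumerates all <math>y\in\{0,1\}^k</math> in lexicographic order, or only those with <math>y_1=1</math> for the punctured code, and records the inner products modulo 2.

<syntaxhighlight lang="python">
import itertools

def inner(x, y):
    """Inner product of two bit tuples over GF(2)."""
    return sum(a * b for a, b in zip(x, y)) % 2

def Had(x):
    """Hadamard encoding: the inner products <x, y> for all y in {0,1}^k."""
    k = len(x)
    return [inner(x, y) for y in itertools.product([0, 1], repeat=k)]

def pHad(x):
    """Punctured Hadamard encoding: only the positions y with y_1 = 1."""
    k = len(x)
    return [inner(x, (1,) + y) for y in itertools.product([0, 1], repeat=k - 1)]

print(Had((1, 0, 1)))   # codeword of length 2^3 = 8
print(pHad((1, 0, 1)))  # codeword of length 2^2 = 4
</syntaxhighlight>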
 
=== Construction using a generator matrix ===
 
The Hadamard code is a linear code, and all linear codes can be generated by a generator matrix <math>G</math>.
This is a matrix such that <math>\text{Had}(x)= x\cdot G</math> holds for all <math>x\in\{0,1\}^k</math>, where the message <math>x</math> is viewed as a row vector and the vector-matrix product is understood in the [[vector space]] over the [[finite field]] <math>\mathbb F_2</math>.
In particular, an equivalent way to write the inner product definition for the Hadamard code arises by using the generator matrix whose columns consist of ''all'' strings <math>y</math> of length <math>k</math>, that is,
 
:<math>G =
\begin{pmatrix}
\uparrow & \uparrow & & \uparrow\\
y_1 & y_2 & \dots & y_{2^k} \\
\downarrow & \downarrow & & \downarrow
\end{pmatrix}\,.</math>
 
Here, <math>y_i \in \{0,1\}^k</math> denotes the <math>i</math>-th binary vector in [[lexicographical order]].
For example, the generator matrix for the Hadamard code of dimension <math>k=3</math> is
:<math>
G =
\begin{bmatrix}
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1\\
0 & 0 & 1 & 1 & 0 & 0 & 1 & 1\\
0 & 1 & 0 & 1 & 0 & 1 & 0 & 1
\end{bmatrix}.
</math>
 
The matrix <math>G</math> is a <math>(k\times 2^k)</math>-matrix and gives rise to the [[linear operator]] <math>\text{Had}:\{0,1\}^k\to\{0,1\}^{2^k}</math>.
 
The generator matrix of the ''punctured'' Hadamard code is obtained by restricting the matrix <math>G</math> to the columns whose first entry is one.
For example, the generator matrix for the punctured Hadamard code of dimension <math>k=3</math> is
:<math>
G' =
\begin{bmatrix}
1 & 1 & 1 & 1\\
0 & 0 & 1 & 1\\
0 & 1 & 0 & 1
\end{bmatrix}.
</math>
Then <math>\text{pHad}:\{0,1\}^k\to\{0,1\}^{2^{k-1}}</math> is a linear mapping with <math>\text{pHad}(x)= x \cdot G'</math>.
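
A minimal sketch of this construction in Python (illustrative only, with no external libraries assumed): it builds the generator matrix whose columns are all length-<math>k</math> strings, restricts it to the columns starting with 1 for the punctured code, and encodes a message by a vector-matrix product modulo 2.

<syntaxhighlight lang="python">
import itertools

def generator_matrix(k, punctured=False):
    """k-by-2^k generator matrix whose columns are all y in {0,1}^k in
    lexicographic order; for the punctured code, only columns with y_1 = 1."""
    cols = [y for y in itertools.product([0, 1], repeat=k)
            if not punctured or y[0] == 1]
    # Transpose the list of columns into a list of k rows.
    return [[col[row] for col in cols] for row in range(k)]

def encode(x, G):
    """Encode the row vector x as x * G over GF(2)."""
    return [sum(a * b for a, b in zip(x, col)) % 2 for col in zip(*G)]

G = generator_matrix(3)                   # the 3-by-8 matrix shown above
Gp = generator_matrix(3, punctured=True)  # the 3-by-4 matrix G' shown above
print(encode((1, 1, 0), G))               # Hadamard codeword of length 8
print(encode((1, 1, 0), Gp))              # punctured codeword of length 4
</syntaxhighlight>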
 
For general <math>k</math>, the generator matrix of the punctured Hadamard code is a [[parity-check matrix]] for the [[Hamming code#Hamming codes with additional parity (SECDED)|extended Hamming code]] of length <math>2^{k-1}</math> and dimension <math>2^{k-1}-k</math>, which makes the punctured Hadamard code the [[dual code]] of the extended Hamming code.
Hence an alternative way to define the punctured Hadamard code is in terms of its parity-check matrix: the parity-check matrix of the punctured Hadamard code is equal to the generator matrix of the extended Hamming code.
 
=== Construction using general Hadamard matrices ===
 
Generalized Hadamard codes are obtained from an ''n''-by-''n'' [[Hadamard matrix]] ''H''.
In particular, the 2''n'' codewords of the code are the rows of ''H'' and the rows of −''H''.
To obtain a code over the alphabet {0,1}, the mapping −1&nbsp;↦&nbsp;1, 1&nbsp;↦&nbsp;0, or, equivalently, ''x''&nbsp;↦&nbsp;(1&nbsp;&minus;&nbsp;''x'')/2, is applied to the matrix elements.
That the minimum distance of the code is ''n''/2 follows from the defining property of Hadamard matrices, namely that their rows are mutually orthogonal.
This implies that two distinct rows of a Hadamard matrix differ in exactly ''n''/2 positions, and, since negation of a row does not affect orthogonality, any row of ''H'' also differs from any row of −''H'' in ''n''/2 positions, except that a row of ''H'' differs from its own negation in −''H'' in all ''n'' positions.
 
To get the punctured Hadamard code above with <math>n=2^{k-1}</math>, the chosen Hadamard matrix ''H'' has to be of Sylvester type, which gives rise to a message length of <math>\log_2(2n)=k</math>.
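
The following Python sketch illustrates this construction under the assumption that the Hadamard matrix is of Sylvester type (so <math>n</math> is a power of 2); it builds <math>H</math> by the doubling construction, takes the rows of <math>H</math> and <math>-H</math> as codewords, and applies the mapping <math>x\mapsto(1-x)/2</math>.

<syntaxhighlight lang="python">
def sylvester_hadamard(n):
    """n-by-n Hadamard matrix (entries +1/-1) built by Sylvester's doubling
    construction; n must be a power of 2."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-v for v in row] for row in H]
    return H

def hadamard_codewords(n):
    """The 2n codewords: rows of H and of -H, with +1 -> 0 and -1 -> 1."""
    H = sylvester_hadamard(n)
    rows = H + [[-v for v in row] for row in H]
    return [[(1 - v) // 2 for v in row] for row in rows]

code = hadamard_codewords(8)
# Any two distinct codewords differ in at least n/2 = 4 positions.
dmin = min(sum(a != b for a, b in zip(u, w))
           for i, u in enumerate(code) for w in code[i + 1:])
print(len(code), dmin)  # 16 codewords, minimum distance 4
</syntaxhighlight>

For <math>n=8</math> this yields an <math>(8,16,4)_2</math> code, in line with the parameters <math>(n,2n,n/2)_2</math> given above.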
 
==Distance==
The distance of a code is the minimum [[Hamming distance]] between any two distinct codewords, i.e., the minimum number of positions at which two distinct codewords differ.
Since the Walsh–Hadamard code is a [[linear code]], the distance is equal to the minimum [[Hamming weight]] among all of its non-zero codewords. All non-zero codewords of the Walsh–Hadamard code have a [[Hamming weight]] of exactly <math>2^{k-1}</math> by the following argument.
 
Let <math>x\in\{0,1\}^k</math> be a non-zero message.
Then the following value is exactly equal to the fraction of positions in the codeword that are equal to one:
:<math>\Pr_{y\in\{0,1\}^k} \big[ (\text{Had}(x))_y = 1 \big] = \Pr_{y\in\{0,1\}^k} \big[ \langle x,y\rangle = 1 \big]\,.</math>
The fact that the latter value is exactly <math>1/2</math> is called the ''random subsum principle''.
To see that it is true, assume without loss of generality that <math>x_1=1</math>.
Then, when conditioned on the values of <math>y_2,\dots,y_k</math>, the event is equivalent to <math>y_1 \cdot x_1 = b</math> for some <math>b\in\{0,1\}</math> depending on  <math>x_2,\dots,x_k</math> and <math>y_2,\dots,y_k</math>.
The probability that <math>y_1=b</math> happens is exactly <math>1/2</math>.
Thus, in fact, ''all'' non-zero codewords of the Hadamard code have relative Hamming weight <math>1/2</math>, and thus, its relative distance is <math>1/2</math>.
 
The relative distance of the ''punctured'' Hadamard code is <math>1/2</math> as well, but it no longer has the property that every non-zero codeword has weight exactly <math>1/2</math> since the all <math>1</math>s vector <math>1^{2^{k-1}}</math> is a codeword of the punctured Hadamard code.
This is because the vector <math>x=10^{k-1}</math> encodes to <math>\text{pHad}(10^{k-1}) = 1^{2^{k-1}}</math>.
Furthermore, whenever <math>x</math> is non-zero and not the vector <math>10^{k-1}</math>, the random subsum principle applies again, and the relative weight of <math>\text{pHad}(x)</math> is exactly <math>1/2</math>.
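
For small <math>k</math>, the random subsum principle can also be checked exhaustively. The short Python sketch below (illustration only; the helper <code>had</code> re-implements the inner-product encoder from the construction section) verifies that every non-zero message of length 4 encodes to a codeword of weight exactly <math>2^{k-1}=8</math>.

<syntaxhighlight lang="python">
import itertools

def had(x):
    """Hadamard encoding of the bit tuple x: all inner products <x, y> mod 2."""
    return [sum(a * b for a, b in zip(x, y)) % 2
            for y in itertools.product([0, 1], repeat=len(x))]

k = 4
for x in itertools.product([0, 1], repeat=k):
    if any(x):
        # Every non-zero message gives a codeword of Hamming weight exactly 2^(k-1).
        assert sum(had(x)) == 2 ** (k - 1)
print("every non-zero codeword for k = 4 has weight", 2 ** (k - 1))
</syntaxhighlight>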
 
==Local decodability==
 
A [[locally decodable]] code is a code that allows a single bit of the original message to be recovered with high probability by looking only at a small portion of the received word. A code is <math>q</math>-query [[locally decodable]] if a message bit <math>x_i</math> can be recovered by checking only <math>q</math> bits of the received word. More formally, a code <math>C: \{0,1\}^k \rightarrow \{0,1\}^n</math> is <math>(q, \delta, \epsilon)</math>-locally decodable, with <math>\delta, \epsilon \geq 0</math>, if there exists a probabilistic decoder <math>D:\{0,1\}^n \rightarrow \{0,1\}^k</math> such that the following holds, where <math>\Delta(x,y)</math> denotes the [[Hamming distance]] between <math>x</math> and <math>y</math>:
 
for all <math>x \in \{0,1\}^k</math> and all <math>y \in \{0,1\}^n</math>, <math>\Delta(y, C(x)) \leq \delta n</math> implies that <math>\Pr[D(y)_i = x_i] \geq \frac{1}{2} + \epsilon</math> for all <math>i \in [k]</math>.
 
'''Theorem 1:''' The Walsh–Hadamard code is <math>(2, \delta, \frac{1}{2}-2\delta)</math>-locally decodable for <math>0\leq \delta \leq \frac{1}{4}</math>.
 
'''Lemma 1:''' For all codewords <math>c</math> of a Walsh–Hadamard code <math>C</math>, <math>c_i+c_j=c_{i+j}</math>, where <math>c_i</math> and <math>c_j</math> denote the bits of <math>c</math> at positions <math>i</math> and <math>j</math>, <math>c_{i+j}</math> denotes the bit at position <math>i+j</math>, addition of bits is modulo 2, and <math>i+j</math> is the bitwise ''xor'' of the binary representations of <math>i</math> and <math>j</math>.
 
===Proof of Lemma 1===
----
Let <math>C(x) = c = (c_0,\dots,c_{2^k-1})</math> be the codeword in <math>C</math> corresponding to message <math>x</math>.
 
Let <math>G =
\begin{pmatrix}
\uparrow & \uparrow & & \uparrow\\
g_0 & g_1 & \dots & g_{2^k-1} \\
\downarrow & \downarrow & & \downarrow
\end{pmatrix}</math> be the generator matrix of <math>C</math>.
 
By definition, <math>c_i = x\cdot g_i</math>. From this, <math>c_i+c_j = x\cdot g_i + x\cdot g_j = x\cdot(g_i+g_j)</math>. By the construction of <math>G</math>, the column <math>g_i</math> is the binary representation of <math>i</math>, so adding two columns over GF(2) corresponds to the bitwise ''xor'' of their indices: <math>g_i + g_j = g_{i+j}</math>. Therefore, by substitution, <math>c_i + c_j = x\cdot g_{i+j} = c_{i+j}</math>.
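
Lemma 1 can also be verified by brute force for small parameters. In the Python sketch below (for illustration only; positions are indexed so that position <math>i</math> corresponds to the <math>k</math>-bit binary representation of <math>i</math>, matching the lexicographic column order of <math>G</math>), the position <math>i+j</math> is computed as the bitwise xor <code>i ^ j</code>.

<syntaxhighlight lang="python">
import itertools

def had(x):
    """Hadamard encoding; position i corresponds to the k-bit binary
    representation of i (the lexicographic column order of G)."""
    return [sum(a * b for a, b in zip(x, y)) % 2
            for y in itertools.product([0, 1], repeat=len(x))]

k = 4
for x in itertools.product([0, 1], repeat=k):
    c = had(x)
    for i in range(2 ** k):
        for j in range(2 ** k):
            # The position i + j of Lemma 1 is the bitwise xor of i and j.
            assert (c[i] + c[j]) % 2 == c[i ^ j]
print("Lemma 1 holds for every message of length", k)
</syntaxhighlight>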
 
===Proof of Theorem 1===
----
To prove Theorem 1, we construct a decoding algorithm and prove its correctness.
 
====Algorithm====
'''Input:''' Received word <math>y = (y_0, \dots, y_{2^k-1})</math>
 
For each <math>i \in \{1, \dots, k\}</math>:
# Pick <math>j \in \{0, \dots, 2^k-1\}</math> uniformly at random
# Pick <math>j' \in \{0, \dots, 2^k-1\}</math> such that <math>j+j' = e_i</math>, where <math>j+j'</math> denotes the bitwise ''xor'' of <math>j</math> and <math>j'</math> and <math>e_i</math> is the position whose binary representation is the <math>i</math>-th standard basis vector
# <math>x_i \gets y_j+y_{j'} \pmod 2</math>
 
'''Output:''' Message <math>x = (x_1, \dots, x_k)</math>
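
A possible implementation of this decoder is sketched below in Python (for illustration only; the encoder <code>had</code> and the convention that position <math>i</math> corresponds to the <math>k</math>-bit binary representation of <math>i</math> are assumptions carried over from the construction section, not a fixed specification).

<syntaxhighlight lang="python">
import itertools
import random

def had(x):
    """Illustrative encoder: position i holds <x, y_i>, where y_i is the
    k-bit binary representation of i."""
    return [sum(a * b for a, b in zip(x, y)) % 2
            for y in itertools.product([0, 1], repeat=len(x))]

def local_decode_bit(received, k, i):
    """2-query local decoder for message bit x_i (i is 1-based)."""
    e_i = 1 << (k - i)               # position whose binary representation is e_i
    j = random.randrange(2 ** k)     # first query position, uniformly at random
    jp = j ^ e_i                     # second query position, so that j xor jp = e_i
    return (received[j] + received[jp]) % 2

# Demonstration: flip one of the 16 bits (a 1/16 < 1/4 fraction of errors).
k, x = 4, (1, 0, 1, 1)
word = had(x)
word[3] ^= 1
decoded = tuple(local_decode_bit(word, k, i) for i in range(1, k + 1))
print(decoded)  # each bit equals the corresponding bit of x with probability >= 7/8
</syntaxhighlight>

In the demonstration, only one of the 16 codeword bits is flipped, well within the <math>\delta n</math> error budget, so each message bit is recovered with probability at least <math>1-2\delta</math>.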
 
====Proof of correctness====
For any message <math>x</math> and received word <math>y</math> such that <math>y</math> differs from <math>c = C(x)</math> on at most a <math>\delta</math> fraction of bits, <math>x_i</math> can be decoded with probability at least <math>\frac{1}{2}+(\frac{1}{2}-2\delta) = 1-2\delta</math>.
 
By Lemma 1, <math>c_j+c_{j'} = c_{j+j'} = x\cdot g_{j+j'} = x\cdot e_i = x_i</math>. Since <math>j</math> is chosen uniformly at random, and <math>j' = j + e_i</math> is therefore also uniformly distributed, the probability that <math>y_j \not = c_j</math> is at most <math>\delta</math>, and similarly the probability that <math>y_{j'} \not = c_{j'}</math> is at most <math>\delta</math>. By the [[union bound]], the probability that either <math>y_j</math> or <math>y_{j'}</math> does not match the corresponding bit in <math>c</math> is at most <math>2\delta</math>. If both <math>y_j</math> and <math>y_{j'}</math> agree with the corresponding bits of <math>c</math>, then Lemma 1 applies and the correct value of <math>x_i</math> is computed. Therefore, the probability that <math>x_i</math> is decoded correctly is at least <math>1-2\delta = \frac{1}{2} + (\frac{1}{2} - 2\delta)</math>, so <math>\epsilon = \frac{1}{2} - 2\delta</math>, which is non-negative precisely when <math>0 \leq \delta \leq \frac{1}{4}</math>.
 
Therefore, the Walsh–Hadamard code is <math>(2, \delta, \frac{1}{2}-2\delta)</math>-locally decodable for <math>0\leq \delta \leq \frac{1}{4}</math>.
 
==Optimality==
For ''k''&nbsp;≤&nbsp;7 the linear Hadamard codes have been proven optimal in the sense of minimum distance.<ref>''Optimal binary linear codes of dimension at most seven'', David B. Jaffe, Iliya Bouyukliev. [http://www.math.unl.edu/~djaffe2/papers/sevens.html]</ref>
 
== Notes ==
{{Reflist}}
 
== References ==
 
* {{Citation
| last1=Amadei  | first1=M.
| last2=Manzoli | first2=U.
| last3=Merani  | first3=M.L.
| title=On the assignment of Walsh and quasi-orthogonal codes in a multicarrier DS-CDMA system with multiple classes of users
| publisher=[[Cambridge University Press|Cambridge]]
| year=2002
| pages=841–845
| booktitle=Global Telecommunications Conference, 2002. GLOBECOM'02. IEEE
| volume=1
| ref=harv
}}
* {{Citation
| last1=Arora | first1=Sanjeev | authorlink1=Sanjeev Arora
| last2=Barak | first2=Boaz
| title=Computational Complexity: A Modern Approach
| url = http://www.cs.princeton.edu/theory/complexity/
| publisher=[[Cambridge University Press|Cambridge]]
| year=2009
| isbn=978-0-521-42426-4
| ref=harv
}}
* {{Citation
| last1=Guruswami | first1=Venkatesan
| title=List decoding of binary codes
| year=2009
| url=http://www.cs.cmu.edu/~venkatg/pubs/papers/ld-binary-ms.pdf
| ref=harv
}}
* Lecture notes of Atri Rudra: [http://www.cse.buffalo.edu/faculty/atri/courses/coding-theory/lectures/lect4.pdf Hamming code and Hamming bound]
 
{{CCSDS}}
 
[[Category:Coding theory]]
[[Category:Error detection and correction]]
 
[[de:Walsh-Code]]
[[ja:直交符号]]
