Symmetric polynomial: Difference between revisions

In [[mathematics]], specifically in [[commutative algebra]], the '''elementary symmetric polynomials''' are one type of basic building block for [[symmetric polynomial]]s, in the sense that any symmetric polynomial ''P'' can be expressed as a polynomial in elementary symmetric polynomials: ''P'' can be given by an expression involving only additions and multiplications of constants and elementary symmetric polynomials. There is one elementary symmetric polynomial of degree ''d'' in ''n'' variables for any ''d'' ≤ ''n'', and it is formed by adding together all distinct products of ''d'' distinct variables.
 
==Definition==
 
The elementary symmetric polynomials in <math>n</math> variables ''X''<sub>1</sub>, …, ''X''<sub>''n''</sub>, written ''e''<sub>''k''</sub>(''X''<sub>1</sub>, …, ''X''<sub>''n''</sub>) for ''k'' = 0, 1, ..., ''n'', can be defined as
:<math>\begin{align}
  e_0 (X_1, X_2, \dots,X_n) &= 1,\\
  e_1 (X_1, X_2, \dots,X_n) &= \textstyle\sum_{1 \leq j \leq n} X_j,\\
  e_2 (X_1, X_2, \dots,X_n) &= \textstyle\sum_{1 \leq j < k \leq n} X_j X_k,\\
  e_3 (X_1, X_2, \dots,X_n) &= \textstyle\sum_{1 \leq j < k < l \leq n} X_j X_k X_l,\\
\end{align}</math>
and so forth, ending with
:<math> e_n (X_1, X_2, \dots,X_n) = X_1 X_2 \ldots X_n</math>
(sometimes the notation σ<sub>''k''</sub> is used instead of ''e''<sub>''k''</sub>).
In general, for ''k''&nbsp;≥&nbsp;0 we define
: <math> e_k (X_1 , \ldots , X_n )=\sum_{1\le  j_1 < j_2 < \ldots < j_k \le n} X_{j_1} \dotsm X_{j_k}.</math>
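This definition translates directly into code. The following sketch (in Python; the names <code>e</code> and <code>xs</code> are illustrative) evaluates <math>e_k</math> at numerical values by summing the products over all <math>k</math>-element subsets:

```python
from itertools import combinations
from math import prod

def e(k, xs):
    """k-th elementary symmetric polynomial evaluated at the values xs:
    the sum, over all k-element subsets, of the product of their entries."""
    return sum(prod(c) for c in combinations(xs, k))

xs = [2, 3, 5]
print(e(0, xs))  # 1 (empty product convention)
print(e(1, xs))  # 2 + 3 + 5 = 10
print(e(2, xs))  # 2*3 + 2*5 + 3*5 = 31
print(e(3, xs))  # 2*3*5 = 30
```

Note that <code>combinations</code> yields subsets with strictly increasing indices, matching the condition <math>j_1 < j_2 < \ldots < j_k</math> in the formula.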
 
Thus, for each positive integer <math>k</math> less than or equal to <math>n</math>, there is exactly one elementary symmetric polynomial of degree <math>k</math> in <math>n</math> variables; it is formed by summing all products of <math>k</math>-element subsets of the <math>n</math> variables.
 
The fact that <math>X_1X_2=X_2X_1</math>, and likewise for every pair of variables, reflects that we are working in a commutative setting: the [[polynomial ring]] formed by taking all linear combinations of products of the elementary symmetric polynomials is a [[commutative ring]].
 
==Examples==
The following lists the ''n'' elementary symmetric polynomials for the first four positive values of&nbsp;''n''.  (In every case, ''e''<sub>0</sub>&nbsp;=&nbsp;1 is also one of the polynomials.)
 
For ''n''&nbsp;=&nbsp;1:
:<math>e_1(X_1) = X_1.\,</math>
 
For ''n''&nbsp;=&nbsp;2:
:<math>\begin{align}
e_1(X_1,X_2)  &= X_1 + X_2,\\ 
e_2(X_1,X_2) &= X_1X_2.\,\\
\end{align}</math>
 
For ''n''&nbsp;=&nbsp;3:
:<math>\begin{align}
e_1(X_1,X_2,X_3) &= X_1 + X_2 + X_3,\\
e_2(X_1,X_2,X_3) &= X_1X_2 + X_1X_3 + X_2X_3,\\
e_3(X_1,X_2,X_3) &= X_1X_2X_3.\,\\
\end{align}</math>
 
For ''n''&nbsp;=&nbsp;4:
:<math>\begin{align}
e_1(X_1,X_2,X_3,X_4) &= X_1 + X_2 + X_3 + X_4,\\
e_2(X_1,X_2,X_3,X_4) &= X_1X_2 + X_1X_3 + X_1X_4 + X_2X_3 + X_2X_4 + X_3X_4,\\
e_3(X_1,X_2,X_3,X_4) &= X_1X_2X_3 + X_1X_2X_4 + X_1X_3X_4 + X_2X_3X_4,\\
e_4(X_1,X_2,X_3,X_4) &= X_1X_2X_3X_4.\,\\
\end{align}</math>
 
==Properties==
 
The elementary symmetric polynomials appear when we expand a linear factorization of a monic polynomial: we have the identity
:<math>\prod_{j=1}^n ( \lambda-X_j)=\lambda^n-e_1(X_1,\ldots,X_n)\lambda^{n-1}+e_2(X_1,\ldots,X_n)\lambda^{n-2}-\cdots+(-1)^n e_n(X_1,\ldots,X_n).</math>
That is, when we substitute numerical values for the variables <math>X_1,X_2,\dots,X_n</math>, we obtain the monic [[univariate]] polynomial (with variable λ) whose roots are the values substituted for <math>X_1,X_2,\dots,X_n</math> and whose coefficients are the elementary symmetric polynomials.
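This identity is easy to check numerically: expand <math>\prod_j(\lambda - x_j)</math> by repeated multiplication and compare the coefficients with <math>(-1)^k e_k</math>. A small Python sketch (the helper names are illustrative):

```python
from itertools import combinations
from math import prod

def e(k, xs):
    """k-th elementary symmetric polynomial evaluated at the values xs."""
    return sum(prod(c) for c in combinations(xs, k))

def monic_from_roots(xs):
    """Coefficients (highest degree first) of prod_j (lam - xs[j]),
    built by repeatedly multiplying the current polynomial by (lam - x)."""
    coeffs = [1]
    for x in xs:
        coeffs = [a - x * b for a, b in zip(coeffs + [0], [0] + coeffs)]
    return coeffs

xs = [2, 3, 5]
# the coefficient of lam^(n-k) is (-1)^k * e_k(xs)
assert monic_from_roots(xs) == [(-1) ** k * e(k, xs) for k in range(len(xs) + 1)]
print(monic_from_roots(xs))  # [1, -10, 31, -30]
```

Here <math>(\lambda-2)(\lambda-3)(\lambda-5) = \lambda^3 - 10\lambda^2 + 31\lambda - 30</math>, with <math>e_1 = 10</math>, <math>e_2 = 31</math>, <math>e_3 = 30</math>.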
 
The [[characteristic polynomial]] of a [[linear operator]] is an example of this. The roots are the eigenvalues of the operator.  When we substitute these eigenvalues into the elementary symmetric polynomials, we obtain the coefficients of the characteristic polynomial, which are numerical invariants of the operator.  This fact is useful in [[linear algebra]] and its applications and generalizations, like [[tensor algebra]] and disciplines which extensively employ tensor fields, such as [[differential geometry]].
 
The set of elementary symmetric polynomials in <math>n</math> variables [[generator (mathematics)|generates]] the [[polynomial ring|ring]] of [[symmetric polynomial]]s in <math>n</math> variables.  More specifically, the ring of symmetric polynomials with integer coefficients equals the integral polynomial ring <math>\mathbb Z[e_1(X_1,\ldots,X_n),\ldots,e_n(X_1,\ldots,X_n)].</math>  (See below for a more general statement and proof.)  This fact is one of the foundations of [[invariant theory]].  For other systems of symmetric polynomials with a similar property see [[power sum symmetric polynomial]]s and [[complete homogeneous symmetric polynomial]]s.
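As a concrete instance of this generating property, the power sum <math>X_1^2+\cdots+X_n^2</math> is symmetric and equals <math>e_1^2 - 2e_2</math> (a case of [[Newton's identities]]); a quick numerical check in Python (names illustrative):

```python
from itertools import combinations
from math import prod

def e(k, xs):
    """k-th elementary symmetric polynomial evaluated at the values xs."""
    return sum(prod(c) for c in combinations(xs, k))

xs = [2, 3, 5]
p2 = sum(x ** 2 for x in xs)              # the power sum, a symmetric polynomial
print(p2 == e(1, xs) ** 2 - 2 * e(2, xs))  # True: p2 = e1^2 - 2*e2
```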
 
== The fundamental theorem of symmetric polynomials ==<!-- This section is linked from [[Abstract algebra]] -->
 
For any commutative [[ring (mathematics)|ring]] ''A'' denote the ring of symmetric polynomials in the variables <math> X_1,\ldots,X_n </math> with coefficients in ''A'' by <math> A[X_1,\ldots,X_n]^{S_n} </math>.  
:<math> A[X_1,\ldots,X_n]^{S_n} </math> is a polynomial ring in the ''n'' elementary symmetric polynomials <math> e_k (X_1 , \ldots ,X_n ) </math> for ''k'' = 1, ..., ''n''.
(Note that <math>e_0</math> is not among these polynomials; since <math>e_0=1</math>, it cannot be a member of ''any'' set of algebraically independent elements.)
 
This means that every symmetric polynomial <math> P(X_1,\ldots, X_n) \in
A[X_1,\ldots,X_n]^{S_n}</math> has a unique representation
:<math> P(X_1,\ldots, X_n)=Q(e_1(X_1 , \ldots ,X_n), \ldots, e_n(X_1 , \ldots ,X_n)) </math>
for some polynomial <math> Q \in A[Y_1,\ldots,Y_n] </math>.
Another way of saying the same thing is that <math> A[X_1,\ldots,X_n]^{S_n} </math> is isomorphic to the polynomial ring <math>A[Y_1,\ldots,Y_n]</math> through an isomorphism that sends <math>Y_k</math> to <math>e_k(X_1 , \ldots ,X_n)</math> for <math>k=1,\ldots,n</math>.
 
=== Proof sketch ===
The theorem may be proved for symmetric [[homogeneous polynomial]]s by a double [[mathematical induction]] with respect to the number of variables ''n'' and, for fixed ''n'', with respect to the [[degree of a polynomial|degree]] of the homogeneous polynomial. The general case then follows by splitting an arbitrary symmetric polynomial into its homogeneous components (which are again symmetric).
 
In the case ''n'' = 1 the result is obvious because every polynomial in one variable is automatically symmetric.
 
Assume now that the theorem has been proved for all polynomials in <math> m < n </math> variables and all symmetric polynomials in ''n'' variables of degree &lt; ''d''. Every homogeneous symmetric polynomial ''P'' in <math> A[X_1,\ldots,X_n]^{S_n} </math> can be decomposed as a sum of homogeneous symmetric polynomials
:<math> P(X_1,\ldots,X_n)= P_{\mbox{lacunary}} (X_1,\ldots,X_n)  + X_1 \cdots X_n \cdot Q(X_1,\ldots,X_n). </math>
Here the "lacunary part" <math> P_{\mbox{lacunary}} </math> is defined as the sum of all monomials in ''P'' which contain only a proper subset of the ''n'' variables ''X''<sub>1</sub>, ..., ''X''<sub>''n''</sub>, i.e., where at least one variable ''X''<sub>''j''</sub> is missing.
 
Because ''P'' is symmetric, the lacunary part is determined by its terms containing only the variables ''X''<sub>1</sub>, ..., ''X''<sub>''n''&minus;1</sub>, i.e., which do not contain ''X''<sub>''n''</sub>. These are precisely the terms that survive the operation of setting ''X''<sub>''n''</sub> to&nbsp;0, so their sum equals <math>P(X_1, \ldots,X_{n-1},0) </math>, which is a symmetric polynomial in the variables ''X''<sub>1</sub>, ..., ''X''<sub>''n''&minus;1</sub> that we shall denote by <math> \tilde{P}(X_1, \ldots, X_{n-1})</math>. By the inductive assumption, this polynomial can be written as
:<math> \tilde{P}(X_1, \ldots, X_{n-1})=\tilde{Q}(\sigma_{1,n-1}, \ldots, \sigma_{n-1,n-1})</math>
for some <math> \tilde{Q}</math>. Here the doubly indexed <math> \sigma_{j,n-1} </math> denote the elementary symmetric polynomials in ''n''&minus;1 variables.
 
Consider now the polynomial
:<math> R(X_1, \ldots, X_{n}):= \tilde{Q}(\sigma_{1,n}, \ldots, \sigma_{n-1,n}) \ .</math>
Then <math>R(X_1, \ldots, X_{n})</math> is a symmetric polynomial in ''X''<sub>1</sub>, ..., ''X''<sub>''n''</sub>, of the same degree as <math> P_{\mbox{lacunary}}</math>, which satisfies
:<math>R(X_1, \ldots, X_{n-1},0) = \tilde{Q}(\sigma_{1,n-1}, \ldots, \sigma_{n-1,n-1}) = P(X_1, \ldots,X_{n-1},0)</math>
(the first equality holds because setting ''X''<sub>''n''</sub> to&nbsp;0 in <math>\sigma_{j,n}</math> gives <math>\sigma_{j,n-1}</math>, for all <math>j<n</math>); in other words, the lacunary part of ''R'' coincides with that of the original polynomial ''P''. Hence the difference ''P''&minus;''R'' has no lacunary part, and is therefore divisible by the product <math> X_1 \cdots X_n</math> of all variables, which equals the elementary symmetric polynomial <math>\sigma_{n,n}</math>. Writing <math>P-R=\sigma_{n,n}\,Q</math>, the quotient ''Q'' is a homogeneous symmetric polynomial of degree less than ''d'' (in fact, of degree at most ''d''&nbsp;&minus;&nbsp;''n''), which by the inductive assumption can be expressed as a polynomial in the elementary symmetric polynomials. Combining the representations for ''P''&minus;''R'' and ''R'', one finds a polynomial representation for ''P''.
 
The uniqueness of the representation can be proved inductively in a similar way.  (It is equivalent to the fact that the ''n'' polynomials <math> e_1, \ldots, e_n </math> are [[algebraically independent]] over the ring ''A''.)
The fact that the polynomial representation is unique implies that <math> A[X_1,\ldots,X_n]^{S_n} </math> is isomorphic to <math> A[Y_1,\ldots,Y_n] </math>.
 
=== An alternative proof ===
 
The following proof is also inductive, but involves no polynomials other than those symmetric in {{mvar|''X''<sub>1</sub>}},...,{{mvar|''X''<sub>''n''</sub>}}, and also leads to a fairly direct procedure for effectively writing a symmetric polynomial as a polynomial in the elementary symmetric ones. Assume the symmetric polynomial to be homogeneous of degree {{mvar|''d''}}; different homogeneous components can be decomposed separately. Order the [[monomial]]s in the variables {{mvar|''X''<sub>''i''</sub>}} [[lexicographic order|lexicographically]], where the individual variables are ordered {{math|''X''<sub>1</sub>&gt;…&gt;''X''<sub>''n''</sub>}}; in other words, the dominant term of a polynomial is the one with the highest occurring power of {{math|''X''<sub>1</sub>}}, and among those the one with the highest power of {{math|''X''<sub>2</sub>}}, and so on. Furthermore, parametrize all products of elementary symmetric polynomials that have degree {{math|''d''}} (they are in fact homogeneous) as follows by [[integer partition|partitions]] of {{math|''d''}}. Order the individual elementary symmetric polynomials {{math|''e''<sub>''i''</sub>(''X''<sub>1</sub>,…,''X''<sub>''n''</sub>)}} in the product so that those with larger indices {{mvar|''i''}} come first, then build for each such factor a column of {{mvar|''i''}} boxes, and arrange those columns from left to right to form a [[Young diagram]] containing {{mvar|''d''}} boxes in all. The shape of this diagram is a partition of {{mvar|''d''}}, and each partition {{mvar|''λ''}} of {{math|''d''}} arises for exactly one product of elementary symmetric polynomials, which we shall denote by {{math|''e''<sub>''λ''<sup>t</sup></sub>&nbsp;(''X''<sub>1</sub>,…,''X''<sub>''n''</sub>)}} (the "t" is present only because traditionally this product is associated to the transpose partition of {{mvar|''λ''}}).
The essential ingredient of the proof is the following simple property, which uses [[monomial#Notation|multi-index notation]] for monomials in the variables {{math|''X''<sub>''i''</sub>}}.
 
'''Lemma'''. The leading term of {{math|''e''<sub>''λ''<sup>t</sup></sub>&nbsp;(''X''<sub>1</sub>,…,''X''<sub>''n''</sub>)}} is {{math|''X''<sup>''λ''</sup>}}.
 
:''Proof''. To get the leading term of the product one must select the leading term in each factor {{math|''e''<sub>''i''</sub>(''X''<sub>1</sub>,…,''X''<sub>''n''</sub>)}}{{why|date=March 2013}}, which is clearly {{math|''X''<sub>1</sub>''X''<sub>2</sub>…''X''<sub>''i''</sub>}}, and multiply these together. To count the occurrences of the individual variables in the resulting monomial, fill the column of the Young diagram corresponding to the factor concerned with the numbers 1, …, {{mvar|''i''}} of the variables; then all boxes in the first row contain 1, those in the second row contain 2, and so forth, which means the leading term is {{math|''X''<sup>''λ''</sup>}} (its coefficient is 1 because there is only one choice that leads to this monomial).
 
Now one proves by induction on the leading monomial in lexicographic order that any nonzero homogeneous symmetric polynomial {{mvar|''P''}} of degree {{mvar|''d''}} can be written as a polynomial in the elementary symmetric polynomials. Since {{mvar|''P''}} is symmetric, its leading monomial has weakly decreasing exponents, so it is some {{math|''X''<sup>''λ''</sup>}} with {{mvar|''λ''}} a partition of {{math|''d''}}. Let the coefficient of this term be {{mvar|''c''}}; then {{math|''P'' − ''ce''<sub>''λ''<sup>t</sup></sub> (''X''<sub>1</sub>,…,''X''<sub>''n''</sub>)}} is either zero or a symmetric polynomial with a strictly smaller leading monomial. Writing this difference inductively as a polynomial in the elementary symmetric polynomials, and adding back {{math|''ce''<sub>''λ''<sup>t</sup></sub> (''X''<sub>1</sub>,…,''X''<sub>''n''</sub>)}}, one obtains the sought polynomial expression for {{math|''P''}}.
 
The fact that this expression is unique, or equivalently that all the products (monomials) {{math|''e''<sub>''λ''<sup>t</sup></sub>&nbsp;(''X''<sub>1</sub>,…,''X''<sub>''n''</sub>)}} of elementary symmetric polynomials are linearly independent, is also easily proved. The lemma shows that all these products have different leading monomials, and this suffices: if a nontrivial linear combination of the {{math|''e''<sub>''λ''<sup>t</sup></sub> (''X''<sub>1</sub>,…,''X''<sub>''n''</sub>)}} were zero, one focusses on the contribution in the linear combination with nonzero coefficient and with (as polynomial in the variables {{math|''X''<sub>''i''</sub>}}) the largest leading monomial; the leading term of this contribution cannot be cancelled by any other contribution of the linear combination, which gives a contradiction.
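The lemma can be spot-checked numerically. The Python sketch below (names illustrative) represents monomials by exponent tuples, multiplies elementary symmetric polynomials by adding exponent vectors, and confirms that for {{mvar|''λ''}} = (2, 1) in three variables the corresponding product {{math|''e''<sub>2</sub>·''e''<sub>1</sub>}} has leading monomial {{math|''X''<sup>''λ''</sup>}} under the lexicographic order with {{math|''X''<sub>1</sub> &gt; ''X''<sub>2</sub> &gt; ''X''<sub>3</sub>}}:

```python
from itertools import combinations

def e_monomials(i, n):
    """Exponent tuples of the monomials of e_i(X_1, ..., X_n):
    0/1 vectors with exactly i entries equal to 1."""
    return [tuple(1 if v in s else 0 for v in range(n))
            for s in combinations(range(n), i)]

def product_monomials(factors, n):
    """Exponent tuples occurring in the product of the e_i for i in factors
    (with multiplicity), obtained by adding exponent vectors."""
    mons = [(0,) * n]
    for i in factors:
        mons = [tuple(a + b for a, b in zip(m, f))
                for m in mons for f in e_monomials(i, n)]
    return mons

# lambda = (2, 1): columns of heights 2 and 1, i.e. the product e_2 * e_1.
# Plain max() on tuples is exactly lex order with X1 > X2 > X3.
print(max(product_monomials([2, 1], 3)))  # (2, 1, 0), i.e. X^lambda
```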
 
===A self-contained algorithmic proof===
 
The following proof of the existence (not of the uniqueness) of ''Q'' is the same as the above, but rewritten in elementary terms and with a slightly different choice of lexicographic order.
 
The symmetric polynomial <math>P(x_1,\ldots,x_n)\,\!</math> is a sum of monomials of the form <math>cx_1^{i_1}\ldots x_n^{i_n}\,\!</math>, where the <math>i_j\,\!</math> are nonnegative integers and <math>c\,\!</math> is a scalar (i.e., an element of our ring ''A''). We define a partial order on the monomials by specifying that
:<math>c_1x_1^{i_1}\ldots x_n^{i_n} < c_2x_1^{j_1}\ldots x_n^{j_n}\,\!</math>
if <math>c_2\ne0\,\!</math> and there is some <math>0\leq k \leq n-1\,\!</math> such that
<math>i_{n-l}=j_{n-l}\,\!</math> for
<math>l=0,1,\ldots ,k-1\,\!</math> but <math>i_{n-k}<j_{n-k}\,\!</math>. For instance
<math>10x_1^2x_2^3x_3^4 < 2x_1^6x_2^4x_3^5</math> and
<math>3x_1^2x_2^4x_3^5 < -7x_1^4x_2^5x_3^5</math>. (Note that whether <math>c_1x_1^{i_1}\ldots x_n^{i_n} < c_2x_1^{j_1}\ldots x_n^{j_n}\,\!</math> holds does not depend on the nonzero coefficients <math>c_1</math> and <math>c_2</math>; only the exponents matter.) In words: starting at the ''n''th position in both monomials, move backwards until the two exponents differ; the monomial with the larger exponent in that position is the larger monomial. This is called a [[lexicographic order]] on the monomials.
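This comparison is easy to express as a short routine on exponent tuples (the coefficients play no role); the Python sketch below (names illustrative) reproduces the two examples given above:

```python
def monomial_less(i, j):
    """The order from the text: starting at the last position, move
    backwards until the exponents differ; the larger exponent wins.
    Nonzero coefficients are irrelevant, so only exponent tuples are compared."""
    for a, b in zip(reversed(i), reversed(j)):
        if a != b:
            return a < b
    return False  # equal exponent tuples are not strictly smaller

print(monomial_less((2, 3, 4), (6, 4, 5)))  # True: 10*x1^2*x2^3*x3^4 < 2*x1^6*x2^4*x3^5
print(monomial_less((2, 4, 5), (4, 5, 5)))  # True: 3*x1^2*x2^4*x3^5 < -7*x1^4*x2^5*x3^5
```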
 
We reduce ''P'' to a polynomial in the elementary symmetric polynomials by successively subtracting from ''P'' a product of elementary symmetric polynomials that eliminates its largest monomial under this order without introducing any larger monomials. In each step the largest monomial thus becomes strictly smaller, until nothing remains, and we are done: the sum of the subtracted-off products is the desired expression of ''P'' as a polynomial in the elementary symmetric polynomials.
 
Here is how each step of this algorithm works: Suppose <math>c x_1^{i_1}\ldots x_n^{i_n}\,\!</math> is the largest monomial in ''P''. Then we must have <math>i_1 \leq i_2 \leq \ldots \leq i_n</math>, since otherwise this monomial could not be the largest one of ''P'' (indeed, since ''P'' is symmetric, it must also contain the monomial <math>c x_1^{j_1}\ldots x_n^{j_n}\,\!</math>, where <math>\left(j_1,j_2,\ldots, j_n\right)</math> is the sequence <math>\left(i_1,i_2,\ldots, i_n\right)</math> sorted in increasing order; this monomial is larger than <math>c x_1^{i_1}\ldots x_n^{i_n}\,\!</math> unless <math>i_1 \leq i_2 \leq \ldots \leq i_n</math> already holds). Thus, we can define a symmetric polynomial ''R'' by
:<math>
R = cs_1^{i_n-i_{n-1}}s_2^{i_{n-1}-i_{n-2}}\ldots s_{n-1}^{i_2-i_1}s_n^{i_1}\,\!
</math>
where <math>s_k</math> is the ''k''th elementary symmetric polynomial in the
''n'' variables <math>x_1,\ldots,x_n\,\!</math>. Clearly ''R'' is a polynomial in the elementary symmetric polynomials. Now we claim that the largest monomial of ''R'' is <math>c x_1^{i_1}\ldots x_n^{i_n}\,\!</math>. To prove this, we notice that the largest monomial of <math>
R = cs_1^{i_n-i_{n-1}}s_2^{i_{n-1}-i_{n-2}}\ldots s_{n-1}^{i_2-i_1}s_n^{i_1}\,\!
</math> is clearly equal to
 
:<math> c\left(\text{largest monomial of }s_1\right)^{i_n-i_{n-1}}</math>
:<math> \cdot\left(\text{largest monomial of }s_2\right)^{i_{n-1}-i_{n-2}}</math>
:<math> \cdot \ldots </math>
:<math> \cdot\left(\text{largest monomial of }s_{n-1}\right)^{i_2-i_1}</math>
:<math> \cdot\left(\text{largest monomial of }s_n\right)^{i_1}</math>
:<math> = c\left(x_n\right)^{i_n-i_{n-1}}\left(x_{n-1}x_n\right)^{i_{n-1}-i_{n-2}}\ldots \left(x_2x_3...x_n\right)^{i_2-i_1}\left(x_1x_2...x_n\right)^{i_1} </math>
 
(since the largest monomial of <math>s_i</math> is <math>x_{n-i+1}x_{n-i+2}\ldots x_n</math> for every ''i'').
 
In this monomial, the variable <math>x_n\,\!</math> occurs with exponent <math>i_n</math> (since it occurs with exponent <math>i_n-i_{n-1}\,\!</math> in the first term, <math>i_{n-1}-i_{n-2}\,\!</math> in the second term, and so on, down to <math>i_1\,\!</math> in the final term), the variable <math>x_{n-1}\,\!</math> occurs with exponent <math>i_{n-1}\,\!</math> (since it occurs with exponent <math>i_{n-1}-i_{n-2}\,\!</math> in the second term, <math>i_{n-2}-i_{n-3}\,\!</math> in the third term, and so on, down to <math>i_1\,\!</math> in the final term), and so on for the remaining variables. Hence, this monomial is <math>c x_1^{i_1}\ldots x_n^{i_n}\,\!</math>. Thus we have shown that the largest monomial of ''R'' is <math>c x_1^{i_1}\ldots x_n^{i_n}\,\!</math>. Therefore, subtracting ''R'' from ''P'' eliminates this monomial, and all monomials of ''P''&minus;''R'' are smaller than the one just eliminated. Thus, we have found a polynomial ''R'', which is a polynomial in the elementary symmetric polynomials, such that subtracting ''R'' from ''P'' leaves a new symmetric polynomial ''P''&minus;''R'' whose largest monomial is smaller than that of ''P''. We can now continue the process until nothing remains of ''P''.
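The claim that the largest monomial of <math>s_i</math> is <math>x_{n-i+1}\ldots x_n</math> under this order can be checked by brute force for small ''n''; a Python sketch (names illustrative):

```python
from itertools import combinations

def s_monomials(i, n):
    """Exponent tuples of the monomials of s_i in n variables:
    0/1 vectors with exactly i ones."""
    return [tuple(1 if v in sub else 0 for v in range(n))
            for sub in combinations(range(n), i)]

n = 4
for i in range(1, n + 1):
    # the text's order is lexicographic on the *reversed* exponent tuple
    largest = max(s_monomials(i, n), key=lambda t: t[::-1])
    expected = tuple(0 if v < n - i else 1 for v in range(n))  # x_{n-i+1}...x_n
    assert largest == expected
print("largest monomials check out for n =", n)
```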
 
Here is an example of the above algorithm. Suppose <math>P(x_1,x_2)= (x_1 + 7x_1x_2 + x_2)^2\,\!</math>. Expanding
this into monomials, we get
:<math>
P=x_1^2 + 2x_1x_2 + 14x_1^2x_2 + x_2^2 + 14x_1x_2^2 + 49x_1^2x_2^2.
</math>
The largest monomial is <math>49x_1^2x_2^2</math>, so we subtract off
<math>49s_2^2</math>, getting
:<math>
P-49s_2^2 = x_1^2 + 2x_1x_2 + 14x_1^2x_2 + x_2^2 + 14x_1x_2^2.
</math>
Now the largest monomial is <math>14x_1x_2^2</math>, so we subtract off
<math>14s_1s_2</math>,
getting
:<math>
P-49s_2^2-14s_1s_2 = x_1^2 + 2x_1x_2 + x_2^2.
</math>
Now the largest monomial is <math>x_2^2</math>, so we subtract off
<math>s_1^2</math>, getting
:<math>
P-49s_2^2-14s_1s_2-s_1^2 = 0.
</math>
This gives
:<math>
P(x_1,x_2) = 49s_2(x_1,x_2)^2+14s_1(x_1,x_2)s_2(x_1,x_2)+s_1(x_1,x_2)^2.\,
</math>
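The whole reduction can be automated; the Python sketch below (for two variables; all names are illustrative, and polynomials are stored as dictionaries mapping exponent pairs to coefficients) reproduces the worked example, returning for each subtracted term its coefficient and the exponents of <math>s_1</math> and <math>s_2</math>:

```python
def poly_mul(p, q):
    """Multiply two polynomials given as {exponent tuple: coefficient}."""
    r = {}
    for (a, b), c in p.items():
        for (d, e), f in q.items():
            k = (a + d, b + e)
            r[k] = r.get(k, 0) + c * f
    return {k: v for k, v in r.items() if v != 0}

def poly_sub(p, q):
    r = dict(p)
    for k, v in q.items():
        r[k] = r.get(k, 0) - v
    return {k: v for k, v in r.items() if v != 0}

def poly_pow(p, m):
    r = {(0, 0): 1}
    for _ in range(m):
        r = poly_mul(r, p)
    return r

s = {1: {(1, 0): 1, (0, 1): 1},   # s1 = x1 + x2
     2: {(1, 1): 1}}              # s2 = x1*x2

def reduce_to_elementary(P):
    """Rewrite P as a polynomial in s1, s2 by repeatedly subtracting
    c * s1^(i2-i1) * s2^(i1) for the largest monomial c*x1^i1*x2^i2."""
    terms = []  # list of (coefficient, exponent of s1, exponent of s2)
    P = dict(P)
    while P:
        (i1, i2) = max(P, key=lambda t: t[::-1])  # largest monomial
        c = P[(i1, i2)]
        terms.append((c, i2 - i1, i1))
        R = poly_mul({(0, 0): c}, poly_mul(poly_pow(s[1], i2 - i1),
                                           poly_pow(s[2], i1)))
        P = poly_sub(P, R)
    return terms

# P = (x1 + 7*x1*x2 + x2)^2, expanded into monomials
base = {(1, 0): 1, (1, 1): 7, (0, 1): 1}
P = poly_mul(base, base)
print(reduce_to_elementary(P))  # [(49, 0, 2), (14, 1, 1), (1, 2, 0)]
```

The output corresponds to <math>P = 49s_2^2 + 14s_1s_2 + s_1^2</math>, as derived above.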
 
==See also==
 
*[[Symmetric polynomial]]
*[[Complete homogeneous symmetric polynomial]]
*[[Schur polynomial]]
*[[Newton's identities]]
*[[MacMahon Master theorem]]
*[[Symmetric function]]
*[[Representation theory]]
 
==References==
* [[I. G. Macdonald|Macdonald, I.G.]] (1995),  ''Symmetric Functions and Hall Polynomials'', second ed.  Oxford: Clarendon Press.  ISBN 0-19-850450-0 (paperback, 1998).
* [[Richard P. Stanley]] (1999), ''Enumerative Combinatorics'', Vol. 2.  Cambridge: Cambridge University Press.  ISBN 0-521-56069-1
 
[[Category:Homogeneous polynomials]]
[[Category:Symmetric functions]]
[[Category:Articles containing proofs]]

Revision as of 23:18, 11 February 2014
