# User talk:Maschen/Archive 4

## Disambiguation link notification for January 3

Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that when you edited Hamilton–Jacobi–Einstein equation, you added a link pointing to the disambiguation page Motion (check to confirm | fix with Dab solver). Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ

Join us at the DPL WikiProject.

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 11:21, 3 January 2013 (UTC)

## Disambiguation link notification for January 29

Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that when you edited Four-vector, you added a link pointing to the disambiguation page Differential (check to confirm | fix with Dab solver). Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ

Join us at the DPL WikiProject.

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 11:27, 29 January 2013 (UTC)

## Disambiguation link notification for February 5

Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that when you edited Four-vector, you added a link pointing to the disambiguation page Reference frame (check to confirm | fix with Dab solver). Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ

Join us at the DPL WikiProject.

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 11:33, 5 February 2013 (UTC)

## Disambiguation link notification for February 22

Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that when you edited Bargmann–Wigner equations, you added a link pointing to the disambiguation page Subspace (check to confirm | fix with Dab solver). Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ

Join us at the DPL WikiProject.

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 11:03, 22 February 2013 (UTC)

## Disambiguation link notification for March 1

Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that when you edited Special relativity, you added a link pointing to the disambiguation page Quantum theory (check to confirm | fix with Dab solver). Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ

Join us at the DPL WikiProject.

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 11:37, 1 March 2013 (UTC)

## Bargmann-Wigner equations

Hi!

I am in a speculative mood today:

I was just watching a lecture by Steven Weinberg on YouTube: http://www.youtube.com/watch?v=dALaqjEViPc. That Higgs particle candidate apparently could have very high spin. It's even - that's all they know. (Weinberg mentions 0, 2, 4, 6, ...) EDIT: In the end he says "I'll bet it's spin 0. (Higgs itself has spin 0.)"

I saw you added a link to Minimal coupling. I wonder whether "minimal coupling" really means the smallest possible "dimension" of a term in the Lagrangian that is Lorentz invariant, somehow counting powers and derivatives and perhaps spin. This is probably the same thing as "the usual" operator substitution in many cases, but I am curious whether there is a rigorous definition of these things out there.

Considering Lagrangians of the form L(Ψ, DΨ, DDΨ, ..., Φ, DΦ, DDΦ, ..., A, DA, ...): the more Ds, the higher the power, and the higher the spin of the fields, the higher the "dimension" of the term would be. Any idea? All I know for sure is that problems with renormalization get worse with "dimension" in this sense.

It could be that the smallest dimension gives problems already for spin 3/2. But part of those problems may go away when adding higher terms (while creating new ones). The idea would be to just keep adding terms ad infinitum. These higher terms are usually not included (anywhere) because 1) they create mathematical problems, and 2) they are highly suppressed at ordinary energies. But I can't see any deep reason that the Lagrangian should not contain all terms allowed by symmetry.

Thesis: Nature doesn't really care about physicists getting into more and more trouble cancelling mathematical infinities in the right way.

End of speculative mood. But it is fun to speculate a little. YohanN7 (talk) 19:44, 11 February 2013 (UTC)

Sounds very interesting!
Although are you sure you know what "power/dimension" means in this case? (Of what - the configuration space of the fields solved from the BW equations?) I have never come across this terminology in this context before... No doubt the more complicated the Lagrangian, the harder renormalization gets, but if we're to make progress we need a solid definition of "dimension" in this case. Of course, a Lagrangian should contain terms which imply a symmetry also. But apparently the BW equations in the Lagrangian formalism are very troublesome anyway... The article would benefit more from the group-theoretic approach, considering symmetries in the process.
About minimal coupling (as I understand it) - that's the least you have to include for charged particles, and it doesn't include spin interactions, just additional terms in the kinetic momentum and kinetic energy.
Out of interest, when you mention derivatives, spin, and the usual operators, would that be a covariant derivative operator including a spin connection (or anything close/related)?
Thanks for the speculation, very insightful of you! M∧Ŝc2ħεИτlk 20:17, 11 February 2013 (UTC)
I guess I meant by "dimension" what is called the "Superficial Degree of Divergence" (if that is standard) in renormalization theory. Yep, derivative = covariant derivative, power = the exponent in the usual sense (3 in φ³), and p → p − eA for operator substitution. I meant nothing really goofy. I mean this: Nature (symmetries) may allow a 37-particle vertex in a Feynman diagram, while our Lagrangians don't. YohanN7 (talk) 22:17, 11 February 2013 (UTC)
Ok, thanks for the clarification. A quick search on Google Scholar [1] for "Superficial Degree of Divergence" suggests it's standard terminology, but I'm not familiar with it either (yet)... M∧Ŝc2ħεИτlk 07:21, 12 February 2013 (UTC)
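(An illustrative aside, not part of the original exchange: the textbook power-counting formula for a φⁿ interaction in four dimensions, D = 4 − E + V(n − 4), makes the remark above concrete - for n > 4 every added vertex worsens the superficial degree of divergence.)

```python
def superficial_degree(n, V, E):
    """Superficial degree of divergence D of a phi^n-theory diagram in 4 dimensions,
    with V vertices and E external legs.
    From D = 4L - 2I, with L = I - V + 1 (loops) and n*V = E + 2*I (line counting),
    one gets D = 4 - E + V*(n - 4)."""
    return 4 - E + V * (n - 4)

# phi^4: D = 4 - E, independent of V - the hallmark of a renormalizable theory.
print([superficial_degree(4, V=v, E=4) for v in (1, 5, 10)])  # → [0, 0, 0]
# phi^6: D grows with the number of vertices, so divergences get worse and worse,
# matching the remark that higher-"dimension" terms make renormalization harder.
print([superficial_degree(6, V=v, E=4) for v in (1, 2, 3)])   # → [2, 4, 6]
```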
I have been bold. I made a few minor edits. I hope I didn't screw up too much. Just revert if that is the case.
There were a couple of things I didn't dare touch for now. Under the heading "Uncharged massive particles", the spin 0 case is mentioned. It is not easy to see (in that section) how this case comes about (it seems "empty"). There is apparently a step of "extracting subspaces" with appropriate spin. This problem is not present in the JW formulation, where the spin has been fixed. Also see my changes in the "Lorentz group structure" section. YohanN7 (talk) 14:43, 13 February 2013 (UTC)
Very nice editing!
About spin-0, see Jeffery (1978), p. 145. I'm planning to add, at some point, a section giving more detail on the equations and wavefunctions for some special spin cases, in particular spins 0 (KG eqn), 1/2 (Dirac eqn etc.), 1 (Proca eqn etc.), 3/2 (RS eqn etc.), 2 (graviton?). So that particular point on spin-0 is empty for now, but will be extended into a section with other cases too. See also Wavefunctions for Particles with Arbitrary Spin, Shi-Zhong et al. (it's freely available and already in the article).
When you wrote (in the Lorentz group structure section):
"This representation does not have definite spin unless s = 1/2 (or 0)." - why is that? If it doesn't have definite spin, when it's supposed to correspond to a spin-s particle... Thanks, M∧Ŝc2ħεИτlk 15:45, 13 February 2013 (UTC)
I knew you would spot that. Good! It needs work, I just don't know how yet. In general, when taking tensor products of reps of definite spin, the result is a space with several different spin subspaces. It can be seen this way: The representation defined here has dimension 4^(2s) = 4, 16, 64, ..., while only 2(2s + 1) dimensions are "used up" by a relativistic spin s particle. If the particle wave function does "spread out" over all the 4^(2s) dimensions, then a Lorentz transformation will inevitably mix different spins. In the JW case, the representation itself ensures that only the "right" spin is there. In the BW case, the representation alone is not enough. We need the BW field equations as well to "lock" or "constrain" the particle in an appropriate subspace, otherwise different Lorentz observers will see different spins. This is analogous to how a rotated (ordinary rotation) particle will in general obtain a different spin z-component. By the way, I believe that this is pretty much exactly how free field equations come about; they constrain the particle to behave well under Lorentz transformations. Throw in the demand of causality, possibly prescribed behavior under parity, and the equations are unavoidable. This is how the free-field Dirac equation itself comes about. I don't know how to formulate this in a good way. Obviously it can't just say "This representation does not have definite spin". We need to come up with something better. Cheers! YohanN7 (talk) 18:09, 13 February 2013 (UTC)
Ok! I realize much of what you're saying, except I don't immediately see how the 2(2s + 1) components arise: due to symmetry, many components are superfluous in the group choice.
We need to be careful on the JW eqn btw - the exact form of that "generalized gamma" is subtle and not easy to find in the literature (or when it is, they are vague/confusing about it). Representations, and the general construction of relativistic wave equations (any and all - including the BW and JW eqns), i.e. exactly what we want, are covered in depth in the article Geometry of spacetime propagation of spinning particles, T. Jaroszewicz, P. S. Kurzepa (cited in the WP article, but unfortunately not freely available...), so I'm currently looking to see if any free versions are available for all to see. (Btw, as good as it was to notify Quondum, he is not going to be active for a while)... Thanks, M∧Ŝc2ħεИτlk 21:46, 13 February 2013 (UTC)
Here are the original references Weinberg himself gives in QFT vol 1: H. Joos, Fortschr. Phys. 10, 65 (1962); S. Weinberg, Phys. Rev. 133, B1318 (1964). Regarding the dimensionality: (2s + 1) of the dimensions ought to be explained quite easily - the z-component of the spin can take on (2s + 1) values from -s to s. The extra factor of 2 is (I believe) due to the relativistic particle-antiparticle "duplication". Then whether one treats the "particle" and the "anti-particle" as living in the same home or in different homes is probably a matter of taste (or rather notation). (One cannot, of course, Lorentz transform one into the other either way.) YohanN7 (talk) 01:30, 14 February 2013 (UTC)
I made one more edit. It should clarify the statements "has definite spin" and "does not have definite spin" reasonably well. I found something else (Weinberg QFT vol 1) we could add later. I don't understand it yet, but here it is anyway: Any field (A, B) for a given particle of spin j can be expressed as a differential operator of rank 2B acting on a field φσ(x) of type (j, 0) (or a differential operator of rank 2A acting on a field φσ(x) of type (0, j))...
As I mentioned, I don't understand this in full, but I think it is like a generalization of the fact that the gradient of a scalar field (i.e. a (0, 0) field) is a 4-vector (i.e. a (1/2, 1/2) field). [The 4-vector contains both spin 0 and spin 1.] At least this hints at an explanation of the multitude of descriptions of the BW equations. YohanN7 (talk) 05:15, 14 February 2013 (UTC)
Hi again. I happened to stumble over Weinberg's original paper on the JW. This is definitely the correct one for us from the perspective of group theory. Like all other publications on the subject, it looks insanely difficult, but the difference is that this one begins from the beginning and does not start off by referring to notation and results in 20 out-of-print papers. Can you get at it? [Feynman Rules for Any Spin. Three parts, spread over several years.] YohanN7 (talk) 15:12, 14 February 2013 (UTC)
I have glanced through the first half of the paper, and fortunately I recognize much of it. The JW equations are nothing but a special case - a (j, 0) representation accommodating a spin j particle - of the most general irreducible case of an (A, B) representation housing a spin j particle. The moral is this: a spin j particle is a spin j particle, not only under SO(3), but also under the full Lorentz group, regardless of whether its home is an (A, B) representation or a (C, D) representation of the Lorentz group! The difference appears in that the free-field equations are different. This explains why the difference between the JW and BW Lorentz choices is big only on the surface. As long as both are big enough to contain a spin j object, it can be embedded in either. The reason that this happens is sketched in my old proposal, User:YohanN7/Representation theory of the Lorentz group, from "Transformation of linear operators" and downwards.
The equation in that section is crucial (remember, Heisenberg picture):
$U(\Lambda)\, a^{\dagger}(p, \sigma, n)\, U^{-1}(\Lambda) = \sum_{\sigma'} D_{\sigma'\sigma}(W(\Lambda, p))\, a^{\dagger}(p_{\Lambda}, \sigma', n).$
This is the transformation rule for a creation/annihilation operator. The matrix Λ here is a Lorentz transformation, but W(Λ, p), the Wigner rotation, is an element of SO(3)! The field equations come about later as
$[\mathcal{H}(x), \mathcal{H}(y)] = 0 \Rightarrow [\psi_{l}(x), \psi_{l'}^{\dagger}(y)] = 0,$
the free-field Dirac equation among others. The above equation is general. The above equivalence is illustrated by the equivalence between BW and JW outlined in that Australian paper too. Most of this (e.g. the completely general (A, B) case) is in my book (now that I understand the notation), except for that mysterious generalized gamma matrix, which is specific to (j, 0). But that is described in the paper. Other cool things that jump out just like that: the existence of anti-particles, and the spin-statistics theorem. I don't know how much of this ought to go into the article. Some explanation is needed in "Lorentz group structure", but it would be too much to tell the complete story. I'd personally like to continue building on my old proposal. This approach to QFT (no Lagrangian to canonically quantize, no preexisting field to second quantize) is missing entirely from Wikipedia. YohanN7 (talk) 20:55, 14 February 2013 (UTC)
A lot to absorb! It'll take a bit of time... I'll look for Weinberg's paper and see through it - indeed it would be perfect! (Out of interest, where did you find it?) And I like how you're shaping the Lorentz group section - including the SO(3) group (which I didn't expect). The consequences can eventually be placed in that section, with subheadings (e.g. for the spin-statistics theorem). Did you say you are on email?
I could send all the papers you can't access as pdfs. (Unfortunately I can't seem to find the Jaroszewicz and Kurzepa paper as a free version). M∧Ŝc2ħεИτlk 21:24, 14 February 2013 (UTC)
We can exchange email addresses, sure. How? The above really is a lot, so don't get stressed out trying to absorb everything at once if it's new to you. I was fortunate to recognize much of it from quite recent struggles I've had with it (just for fun). I need to call it a day now, so I won't reply until tomorrow. YohanN7 (talk) 21:53, 14 February 2013 (UTC)
Just found the paper via Google: Template:Cite paper and will add it to the article. Well found! About email - just click here to send one. All you need is an email address you are comfortable using with other WP editors. M∧Ŝc2ħεИτlk 22:16, 14 February 2013 (UTC)
Just an aside on the generalized gamma tensor - you might see: Template:Cite article It has relevance to the (j, 0) ⊕ (0, j) representation, also to SL(2, C). M∧Ŝc2ħεИτlk 11:41, 15 February 2013 (UTC)

## Disambiguation link notification for February 13

Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that when you edited Relativistic wave equations, you added a link pointing to the disambiguation page Matrix (check to confirm | fix with Dab solver). Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ

Join us at the DPL WikiProject.

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 11:17, 13 February 2013 (UTC)

## Bargmann-Wigner equations 2

Thanks for the link! Problem: The article doesn't render well on my machine. It says "Missing tex exe of sort" in red all over the place. Any idea? YohanN7 (talk) 17:22, 15 February 2013 (UTC)
Strange... it is a scanned document of a 1960s paper, not produced in LaTeX, so it has nothing to do with TeX rendering (which is suggestive of "tex exe")... I'll look for more versions. (Or maybe you could replace the link with the one you found, if it was online?) Also, I finally managed to clutch Weinberg's QFT books (volumes 1 (1995) and 3 (2000); 2 is not currently available) from the uni library before some postgrad/postdoc (etc.) could, so discussion should be temporarily easier. Also - you are always welcome on my talk page, but I suggest we start to take any discussions to the article talk page, so everything is in one place. There is already a "to-do" list; feel free to edit it. Thanks, M∧Ŝc2ħεИτlk 21:19, 15 February 2013 (UTC)
Funny, I was thinking about moving the discussion there just a second before I read your suggestion. The TeX problem is gone (it was surely TeX; I meant the BW article). Besides, I used a workaround - I changed from "Render PNG" to MathJax in my preferences. It worked, and now I can switch back to "Render PNG" w/o problems. (It's faster...) You should have little problem using the first edition of QFT, since there were very few errors (I saw a list once). I don't know how much differs in content. So let's move to the talk page. YohanN7 (talk) 02:57, 16 February 2013 (UTC)
Right, as long as viewing works!... M∧Ŝc2ħεИτlk 08:42, 16 February 2013 (UTC)
By the way, I have rewritten most of the finite-dimensional part of Representation theory of the Lorentz group. I'd like to hear what you think. (See that talk page too.) Also, I am planning to include this section: User:YohanN7/Representation theory of the Lorentz group#A nontrivial example (spinors). Do you think it is understandable? YohanN7 (talk) 04:34, 16 February 2013 (UTC)
I have indeed noticed your edits to representation theory of the Lorentz group, and they are an improvement in clarity; see the talk page for feedback. M∧Ŝc2ħεИτlk 08:42, 16 February 2013 (UTC)
I have tried to clarify Representation theory of the Lorentz group#The group. Is it more transparent how group reps are obtained from algebra reps now? YohanN7 (talk) 23:20, 16 February 2013 (UTC)
You've clearly put plenty of effort in, and now there's certainly more background as to how things fit together, but the main problem is just intrinsically my own understanding (of anything abstract)... There is just so much all at once... Much of it will still take time to absorb. Anyway, good work. M∧Ŝc2ħεИτlk 23:51, 16 February 2013 (UTC)

## Commutative Diagram

I am going crazy (crazier) over this. I can't believe it's this complicated to create a diagram with a few arrows and ordinary text in the year 2013. The LaTeX step is partly working. The ps output is fine; in the dvi output the horizontal arrows are missing. If I choose pdf output, all arrows are gone. Next step: in order to remove the surrounding whitespace, one needs a package called pdfcrop. Naturally, this exe invokes a Perl script. Thus find and install Perl. Perl (or something else) naturally depends on Ghostscript, which I didn't have. Anyway, I think this step works. Then convert the stripped pdf file to svg. This requires software with a price tag of $550. Ah, well, there is a free trial. Thus I now have an svg file which at best is a commutative diagram without the arrows. I don't know, because you have to find an svg viewer to know that. I have tried two: Adobe SVG Viewer, which runs for a while (it registers some ActiveX object) and then disappears; the other one I will not run because my antivirus software said "Removed Threat, no action necessary" when I downloaded it.

Question: Is there a svg viewer you can recommend for Windows XP? YohanN7 (talk) 19:18, 19 February 2013 (UTC)

For just viewing svg files: Inkscape should work? (Well, it's tedious and quirky to use, but most of the time it gets the job done; it definitely works on Windows XP and later, and is free to download)... I know - for some reason xy-matrix has also been quirky recently (arrows not visible and/or badly pixelated arrow graphics; I used it a year or more ago and it worked very well then)...
For a temporary solution, just export the diagram from the pdf to a png; svg conversion can be done later. Did you follow the link? Click on the image and use the code exactly as given there (under "LaTeX source"); all the arrows should show up cleanly. (Btw, that original code was produced by someone else and tweaked later by me). M∧Ŝc2ħεИτlk 20:25, 19 February 2013 (UTC)
Yes! Thank you so much - it works insofar as I get arrows. (For an svg viewer I simply use the web browser, smart huh?) Now only the problem of uploading remains. Shouldn't take more than a few days to make that work...
Thanks again! YohanN7 (talk) 22:07, 19 February 2013 (UTC)
So I uploaded. I tried it in my sandbox (Have a look at it if you want to.) It looks like s**t. In Wikipedia commons it looks like this: http://commons.wikimedia.org/wiki/File:Remove.svg. If I click on the image I get here: http://upload.wikimedia.org/wikipedia/commons/c/ca/Remove.svg Not too shabby.
There are obviously 34-40 things I do wrong. YohanN7 (talk) 23:19, 19 February 2013 (UTC)
Don't worry - bad image rendering by the wiki software happens and is sometimes beyond our control, so we just have to upload a new version and hope for the best (again, sometimes...). I had a look at your code in the caption here and used it to upload a temporary png, File:Lorentz group commutative diagram.png (the exponentials are made upright because exp is a function):
(Is this it?)
so we can at least see what we're doing! Feel free to point out modifications. Again, I'll see to svg conversion soon... (still have to do it for the other diagram). Thanks, M∧Ŝc2ħεИτlk 07:24, 20 February 2013 (UTC)
This is it!
Here V is a finite-dimensional vector space, GL(V) is the set of all invertible linear transformations on V, and gl(V) is its Lie algebra. The connected component of the Lorentz group is denoted SO(3;1)+; it has Lie algebra so(3,1). The π and Π are Lie algebra representations and (possibly projective) group representations, respectively. Finally, exp : g → G is the exponential mapping from a Lie algebra g to the group G whose Lie algebra is g.
As you might notice, I play around with the {{mvar}} template and others. I don't know the correct usage yet. It has to do with size, serifs and italics. User Incnis Mrsi seems to have strong views on what is right and wrong, so I use {{mvar}} and the other templates to keep him happy. I don't know, for instance, which of the various renderings of (π, Π) - with {{math}}, with {{mvar}}, or in plain italics - is "correct". One should probably compare to $(\pi, \Pi)$ in LaTeX and find the best fit. Italics for capital Π doesn't look good, but it looks ok for π. There is also the combination of {{mvar}} and italics using apostrophes that Incnis introduced in the article.
In actuality, neither LaTeX/[itex], nor {{math}}, {{mvar}} etc. is "more" correct... you might see (and of course are welcome to contribute to) WP:MOSPHYS, written by Incnis Mrsi (of all editors!), I've also been caught in a slight fight with him over this and have used {{math}} in that article for the same reason! M∧Ŝc2ħεИτlk 16:00, 20 February 2013 (UTC)
I'll spend the day on getting Inkscape. But, a question arises: If png is what is working, why bother with svg? How do I make a png? YohanN7 (talk) 12:40, 20 February 2013 (UTC)

After completing the LaTeX file and compiling to pdf:

1. just load the pdf,
2. use the "snapshot" selection tool (the icon looks like a camera; it should be under the "tools" menu followed by "select and zoom" at the top),
3. select the portion of the page to capture (in this case the commutative diagram),
4. then right-click and save as a png image,
5. which can be uploaded.

M∧Ŝc2ħεИτlk 16:00, 20 February 2013 (UTC)

Suggested modification: Display SO and GL upright too, with \mathrm{}. The figure caption is controlled from where the image is used, right? YohanN7 (talk) 13:09, 20 February 2013 (UTC)
Please wait with any modifications, because we should possibly include a factor (+/-) in front of Π, otherwise the diagram doesn't commute when Π is projective. YohanN7 (talk) 13:19, 20 February 2013 (UTC)

Ok, now I get most things right. The following line,

\thispagestyle{empty} % No page numbers

is essential in the tex code. Otherwise pdfcrop will yield a tall and narrow thing. Then pdf2svg probably does what it should. The svg it produces behaves wonderfully when rendered directly in Google Chrome; it renders exactly as the cropped pdf does in Adobe Reader X. But, alas, in Inkscape the fraktur fonts get lost, as well as the arrowheads (they are replaced by y and G). This is precisely what happens after upload too. YohanN7 (talk) 14:46, 20 February 2013 (UTC)
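(An illustrative aside, not part of the original exchange: a minimal standalone source of the kind discussed here might look as follows. This is a sketch only - the actual file used for the figure differs - but it shows where \thispagestyle{empty} sits and why it prevents pdfcrop from producing the "tall and narrow thing": without it, the page number extends the bounding box.)

```latex
\documentclass{article}
\usepackage[all]{xy}   % xy-pic, for the commutative-diagram arrows
\usepackage{amssymb}   % fraktur fonts for the Lie algebras
\begin{document}
\thispagestyle{empty}  % no page number, so pdfcrop trims to the diagram alone
\[
\xymatrix{
  \mathfrak{so}(3;1) \ar[r]^{\pi} \ar[d]_{\exp} & \mathfrak{gl}(V) \ar[d]^{\exp} \\
  \mathrm{SO}(3;1)^+ \ar[r]_{\Pi}               & \mathrm{GL}(V)
}
\]
\end{document}
```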

Interesting... I did it without that line... (well, the page style doesn't matter if you use the method I indicated above). M∧Ŝc2ħεИτlk 16:00, 20 February 2013 (UTC)
I use Adobe Reader X for pdf. The steps outlined by you above work well, except that all I can get is a copy in the clipboard - I can't save it to a file directly from Adobe. If I then paste into Inkscape, it looks ok. But when I save it to a file, the same old problem reappears (arrowheads replaced by rubbish and fraktur fonts gone). Murphy's law is clearly in operation.
An aside. Here is a pretty damned nice link: http://www.damtp.cam.ac.uk/user/tong/qft.html (There is nothing of immediate relevance to any of the articles we work on.) YohanN7 (talk) 20:58, 20 February 2013 (UTC)
Sorry, I missed out a step: copy and paste the pdf selection into Microsoft Word/PowerPoint/anything that can hold pictures; from there you should definitely be able to right-click and save as a png. Apologies for that... Thanks for the link also - the QFT articles are relevant to an extent. M∧Ŝc2ħεИτlk 07:28, 21 February 2013 (UTC)
And please don't go about deleting the image now;) It's in the article. YohanN7 (talk) 20:03, 7 March 2013 (UTC)
Not at all, but when did I say to delete? Back to an original point - do you want the SU and GL symbols to be upright? M∧Ŝc2ħεИτlk 20:06, 7 March 2013 (UTC)
(I meant just so you don't accidentally delete it.) Actually, I don't know what's "right". But I think we can leave it for now, because they are in italics in the text as it stands. Many thanks. I guess I'll simply have to learn Inkscape some time. Seems daunting, because there is just so much one can do with it.
Btw, don't take me too seriously over at Maxwell. I'm just being a little bit philosophical; it will pass. I suppose you are busy with stuff, but have you had a look in the Weinberg books? They are supposed to be read slowly, and the general recommendation seems to be not to use them as a first or even a second book on QFT. But chapters 2-6 are at least human, and the treatment in chapter 5 of general free quantum fields (mass or no mass, general Lorentz rep, and arbitrary spin) is to be found nowhere else. Missing details in derivations can be found in parts 1-3 of that paper we found. Most other books take a preexisting classical field and "quantize" it according to a recipe whose only motivation is that "it works". YohanN7 (talk) 18:03, 8 March 2013 (UTC)
Maxwell's equations is a B+ article, so it's better to make more bad articles into good ones rather than a few already-good ones into superb A/GA/FA standard (WP:NOTFINISHED).
Yes - busy (and more so over the next few months: end-of-term assessments followed by exams after Easter), so I haven't been reading Weinberg's books as much as hoped, but nevertheless they are absolutely excellent, and all chapters are "human", just complicated. I wish more time were available to read them (they have to be returned in a few days, btw)...
Speaking of books, you may like Quantum Mechanics (3rd edn), Eugen Merzbacher, 1998 [2][3][4] - this book contains everything on QM (from the basic de Broglie relations to all the postulates and pictures of dynamics) and some QFT; definitely worth grabbing if you don't have it already (I really wish I could afford it...). Similarly for Quantum Mechanics, Claude Cohen-Tannoudji, Bernard Diu, Franck Laloë, 1977 (original) [5][6][7], in 2 volumes (more on QM than QFT, still excellent refs though).
M∧Ŝc2ħεИτlk 23:43, 8 March 2013 (UTC)

## Your equation template

Hey, I find Template:Equation box 1 a bit jarring. First of all, why all that colourful fluff in mathematics? I want to read math, not be distracted by what you may think are the appropriate colours. Second of all, embedding colours like that breaks my custom CSS themes, and is very nonsemantic.

Can we completely remove this template? Thanks. JordiGH (talk) 21:14, 6 March 2013 (UTC)

It has been used all over the place by now, as other editors seem to favour it... And the colours are pale, so what is so distracting? WP has a pale grey/blue theme, pale green is occasionally seen too, such as {{collapsetop}}. Moreover - those colours were not chosen by me: see this edit and those which follow, so it's not just about my preferences. Anyway I don't care if it's deleted, although it's on you if you want to delete it from the articles using it. Best regards! ^_^ M∧Ŝc2ħεИτlk 21:40, 6 March 2013 (UTC)
Taken to WikiProject Physics, where it's used the most. Thanks, M∧Ŝc2ħεИτlk 22:17, 6 March 2013 (UTC)

## BHG talkback

My second reply may help too. --BrownHairedGirl (talk) • (contribs) 17:41, 7 March 2013 (UTC)

## Disambiguation link notification for March 8

Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that when you edited Potential gradient, you added a link pointing to the disambiguation page Displacement (check to confirm | fix with Dab solver). Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ

Join us at the DPL WikiProject.

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 11:04, 8 March 2013 (UTC)

## Spacetime

On random thoughts: If spacetime exists, and if it is possible to model it mathematically as a set, then isn't the easiest and most natural way to model spacetime as a set of spacetime points called events? I don't think the uncertainty principle applies, since an event is not an observable as far as I can see. But true, the nature of spacetime is questioned, though the replacements are never simpler in any sense. If one abandons point particles (QFT version), then strings look like the next simplest model. YohanN7 (talk) 11:00, 30 March 2013 (UTC)

I'm probably coming in in the middle of something, and may not respond to everything in this thread as I'm not on WP much at the moment, so please excuse apparent scattiness on my part. As a mathematical model, I see no problem with the model as a set of events, where a local (i.e. differential) metric is defined, and for using this as a domain on which to define fields in QFT. This does not mean that the points (events) exist in any "real" sense, as long as the fields defined on this spacetime behave and interact as the QFT predicts; it certainly should not have a problem with the uncertainty principle. Starting from this, one does of course have to add the whole extra structure of quantum superposition (Hilbert space) not embodied in the set-of-events picture. Spacetime allows one to describe a field by its value at every point in spacetime, which is equivalent to defining it in terms of a basis consisting of Dirac impulses at every point in spacetime. Seen from this perspective, it will be obvious that any other set of basis functions is equivalent, and that other sets (e.g. momentum space) are equivalent in describing a field. I would argue that spacetime is therefore not the most natural, but ranked equally amongst many equivalent domains. The choice of spacetime events as our domain is presumably thus motivated only by our specific way of perceiving the world, extrapolated into a domain that is beyond our perception. This may start coming to pieces when quantum gravity is introduced, though. — Quondum 14:28, 30 March 2013 (UTC)

Don't worry Quondum, anyone is always welcome no matter how random or rarely!! I don't think either that spacetime is an ideal "arena" for dynamics, and neither do some researchers. John Archibald Wheeler is one example who tried to popularize superspace as "dynamic 3-geometries", and even said once "down with points - up with geometries!" or something like that in a paper on the HJEE. Of course there is no problem with spacetime in SR and GR, and the idea of events as points is actually appealing in some way ("there and then"). But for a new way forward possibilities of spacetime singularities should be excluded - shouldn't they? String theory seems to do this by spreading infinities (can't remember of what...) over the length of the string.

Don't get the wrong idea(s)... About the uncertainty principle - I just meant that there is no way to tell the exact position and momentum of particles (etc.) simultaneously, since all "particles" (etc) are always moving. I'm not at all applying the HUP to spacetime itself, nor saying the HUPs are problematic; if anything they must be part of any physical theory incorporating quantum theory.

Although, I still disagree with string theory. Always have and always will, unless it really is shown experimentally to be a correct description of nature (which would probably take a very long time, or never happen). M∧Ŝc2ħεИτlk 18:04, 30 March 2013 (UTC)
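As an aside on the uncertainty-principle remark above: a Gaussian wave packet is the textbook case that saturates the Heisenberg bound ΔxΔp = ħ/2. A quick numerical sketch (my own illustration with ħ = 1, not part of the discussion; the width σ is arbitrary):

```python
import math

# Check numerically that a Gaussian wave packet gives Delta_x * Delta_p = hbar/2.
hbar = 1.0
sigma = 0.7  # arbitrary packet width (an assumption for the demo)

def psi(x):
    # Normalized real Gaussian wavefunction with <x> = 0
    return (2 * math.pi * sigma ** 2) ** -0.25 * math.exp(-x ** 2 / (4 * sigma ** 2))

def dpsi(x):
    # Analytic derivative of psi
    return -x / (2 * sigma ** 2) * psi(x)

# Riemann sums over a grid wide enough that the tails are negligible
step = 1e-3
xs = [i * step for i in range(-10000, 10001)]
norm = sum(psi(x) ** 2 for x in xs) * step            # should be ~1
x2 = sum(x ** 2 * psi(x) ** 2 for x in xs) * step     # <x^2>
p2 = hbar ** 2 * sum(dpsi(x) ** 2 for x in xs) * step # <p^2> = hbar^2 * integral |psi'|^2

dx_unc, dp_unc = math.sqrt(x2), math.sqrt(p2)
print(norm, dx_unc * dp_unc)  # ~1.0 and ~0.5 = hbar/2
```

The product comes out at ħ/2 regardless of the chosen σ, which is the sense in which the Gaussian is the minimum-uncertainty state.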

All that is agreed upon (by the vast majority of researchers) is that QFT is not the final solution, it is only a pretty good approximation to low energy phenomena. String theory is not a QFT, but it is a theory of QM, and of SR, which are both built in at the outset. I have no idea what I believe, but string theory is fun because it is so goofy.
Regarding space/spacetime, are the HJE and the ADM formulation really different theories from anything else (GR)? I can't see from the Wikipedia articles that they change anything fundamental about the nature of spacetime. The ADM formulation assumes that the spacetime manifold admits a foliation, which not all manifolds do, but I'm only vaguely familiar with these ideas. Basically, it would say that spacetime is a disjoint union of "leaves" which all have a spacelike structure. GR is geometrical either way. When I first read the front page I got the impression that you didn't like the idea of points in either space or spacetime, but now I'm not sure.
Then I disagree slightly with Quondum about any other description (Hilbert space basis) being equally "natural". This is true only within mathematics where any basis would be as good as any other. But even there, there is asymmetry, because it wouldn't be easy to make e.g. the momentum formulation without having the coordinate basis at hand. (d/dx -> d/d(what?)) YohanN7 (talk) 21:56, 30 March 2013 (UTC)
I'll nail my colours to the mast on string theory: Although QFT clearly is incomplete ("not the final solution"), it feels like a partial solution. In contrast, string theory feels wrong on so many levels, IMO merely to "blur" the semi-problematic renormalization that may be an artefact of the choice of the event-based image of spacetime, by entrenching the event-based picture in the very definition of the theory to allow definition of the strings. This is definitely a huge step backwards: it breaks an important symmetry of the theoretical underpinning – the very freedom of choice of basis that I mentioned above. Decades of intense work by physics academia has not managed to bring it to the level of respectability/completeness of QFT, so I feel comfortable calling it unduly complicated and a fringe theory despite its mainstream popularity. On YohanN7's final question: Every linear operator over a basis maps to a linear operator over any other basis. −iℏ d/dx → pₓ· is the well-known operator correspondence when changing from an event basis to a momentum–energy basis. Beware of "easiness": you should be aware of the importance of the concept of basis-independence. This is a fundamentally important and powerful concept in geometry, and no less so in the description of fields over spacetime. To break the symmetry by selecting a preferred basis (e.g. the event basis) would be like denying the fundamental postulate of special relativity. Don't break a symmetry of the theory without any reason just because it "seems simpler". Every basis is just as good as any other. — Quondum 02:37, 31 March 2013 (UTC)

I still prefer to think that spacetime has physical existence (whatever that means) while momentum space is a purely mathematical equivalent set of points. Either is as "easy" as the other, but we live and breathe only in the more "natural" spacetime. YohanN7 (talk) 12:17, 31 March 2013 (UTC)

True, but "space and time" would be better. M∧Ŝc2ħεИτlk 12:22, 31 March 2013 (UTC)

Obviously HJEE and ADM are just alternative formalisms of GR, not theories in themselves because they are GR (as far as I can tell), but they do interpret space and time differently. Again - I don't like the idea of points in space or spacetime (there's not that much difference, is there?), for reasons already explained. Yes, QFT is incomplete, and people (Roger Penrose is an example) do think that any form of quantum theory is incomplete, because there are open questions and awkward ideas, compared to SR/GR which appear to be more closed and logical (even though counterintuitive, it's still possible to understand things). I'm not sure what the hype is about Hilbert spaces, because they are additional abstract spaces for handling quantum states - not physical spaces themselves. Also, a nicer formulation of QM is the phase space formulation, in which case you don't need to take sides with position and momentum operators and representations; using phase space ideas doesn't change any physics but offers new insight. The worst part of string theory, right from the very beginning, is the assertion that particles "are" strings etc. How can we ever know that experimentally? We can sit down speculating/guessing/modelling what particles "really are" (only to change again and again anyway in the future) as much as we like: "let's pretend everything, even the "fabric" of space and time, is vibrating strings or springs or trampolines or twirling tops or spinning wheels or pendulums or twiddling knots/links or... then there are a number of fundamental normal modes/rotational frequencies/tensions of these ... and everything in the universe is derived from these fundamental things..." It's weak - making stuff up just for the sake of explaining physical phenomena, and yet the maths is ludicrously, excessively complicated just so it works... CDT is not, at all, like that. Rather it takes a logical approach: that spacetime is quantized at small scales and appears as smooth, curved spacetime at large scales, that timelines must agree and causality is preserved - not just describing, but possibly explaining, the very nature of space and time, and the interesting thing is its automatic fractal nature. Perfectly simple (minimum number of assumptions) and extremely appealing. M∧Ŝc2ħεИτlk 09:09, 31 March 2013 (UTC)

The number of abbreviations unknown to me is climbing – which is highlighting my lack of formal study in this area. But I am in agreement, and do not even bother examining string theory much. Equivalent formalisms are equivalent, the only difference being the difference in ease of use in different contexts, and immediacy of insights. The change of basis that I was referring to is however not even a change of formalism: |ψ⟩ is an abstract vector that can be expressed in components on time–position and energy–momentum bases resp. as |ψ⟩ = ∫V dV δ(r)ψ(r) = ∫V dV exp(p)ψ(p), the vectors being four-dimensional, and constants discarded. Both these bases are shown in event space, but one can reformulate the bases and integrals in momentum space. Or one could use any of an infinite number of other bases – there's nothing special about either position or momentum (or the combination). This suggests that spacetime itself is an emergent phenomenon, possibly inherently anthropocentric. I would argue that QFT is no less complete than GR: each of them contradicts experiment in some domain, but is complete in its own domain. Okay, so it is "obvious" that QFT is missing something: there are arbitrary choices and unexplained physics, and possibly further forces and particles. GR is flawed in a more obvious way: it has inherent singularities, and also has an arbitrary choice of G. SR does not have such flaws but is weakest, and is incorporated into both the others.

An interesting insight from the formulation of a wavefunction as an abstract vector on an arbitrary basis (effectively in any of infinitely many alternate 4-manifolds) is that we can be pretty confident that the superposition principle is exact within a full TOE. The GR picture breaks down under the superposition principle, and QFT breaks down in a GR background. But there must exist a basis for the wavefunction that does not break down, even in the presence of GR singularities within some of the components of a superposition. From this it seems that one should be able to build a QFT TOE in terms of a basis that is not defined over a 4-manifold. The challenge may be how to express the metric tensor mathematically as it applies to such a basis. — Quondum 12:21, 31 March 2013 (UTC)

Forgot to mention above to YohanN7, did you mean this: ${\displaystyle \mathbf {\hat {p}} =-i\hbar {\frac {\partial }{\partial \mathbf {r} }}\,\rightleftharpoons \,\mathbf {\hat {r}} =+i\hbar {\frac {\partial }{\partial \mathbf {p} }}\,?}$

Something like that, but I meant nothing really specific. You'll have a hard time even defining classical momentum (or anything else) without space and time at hand. YohanN7 (talk) 19:18, 31 March 2013 (UTC)

The idea of classical anything in this context only confuses things: you have to think in functions over some (arbitrary) domain. And then defining momentum becomes easy, e.g. as an impulse in the momentum domain ;-). — Quondum 03:12, 1 April 2013 (UTC)

How would you define the momentum domain? (You are not allowed to use the spacetime manifold in the definition.) Classical ideas about spacetime are still around. In QFT spacetime is classical (whether curved or not). String theory modifies spacetime, but it still occupies the central stage, and it is not quantized. I don't think that functions over an arbitrary domain are as "natural" as functions of spacetime.
The momentum formulation is perhaps the second most "natural", but this naturality comes about only after the postulates of QM are in place. Quantum mechanics is not "natural" to most people. It took thousands of brilliant people 100 years to come up with today's version of QM. By contrast, GR was the product of one man (brilliant, but still humanly so). YohanN7 (talk) 15:29, 1 April 2013 (UTC)

Yes, but: starting from your more "natural" position of defining spacetime, we run into a dead end. Hence my more abstract (i.e. as yet undefined) domain, while still retaining superposition. In answer to your question about defining the momentum space without defining spacetime: define it as a 4-d affine space, to use as the domain of a wavefunction. It really only works for flat momentum–energy space, but then using spacetime as the space of definition has the same problem (I don't think you can define a momentum operator over curved spacetime, but I have no real knowledge on this). Be careful of your use of "natural" – what is "natural to people" is pretty irrelevant (meaning useless). What is mathematically "natural" is what is relevant here, which is to say, something that works in the broadest context with the fewest tweaks and conditions. Newton's mechanics are very "natural" (in both senses), but do not apply. Your argument seems to be that we must stick to something that is intuitively natural, even if it has no hope of taking us further, which is to say, throw QFT out of the window. I'd say the correct approach is to find where our intuition has misled us, drop that part and proceed. I've identified the flat momentum-or-spacetime domain picture of QFT as broken, selected a part that evidently isn't broken (superposition over a domain – as yet unidentified), and am saying let's work with that. (And no, I think it took tens of brilliant people a few decades to come up with the basic QFT we have today. Investigating and refining this picture has taken the effort you describe.) — Quondum 22:54, 1 April 2013 (UTC)

YohanN7, I can't tell if you're misunderstanding us or I'm misunderstanding you (probably the latter...).
• "Momentum domain" as in the space of all momentum vectors? Yes, momentum(-energy) spaces or other arbitrary spaces are just mathematical, and we live in space and time, so of course it does seem more natural.
• The difficulty in casting a physical theory without space and time is pretty obvious, since physics really is geometry one way or another; that's not the subject of discussion. All that's being said (mostly by Quondum and researchers) is that unified space and time ("spacetime") may not be the best way; the most obvious point is that spacetime singularities are problematic (excuse pun).
• Notions of space and time can have subtleties no one has thought of, in the process leading to different insights into physics. Only in the past few decades (approx) has fractal geometry been realized to extend well beyond Euclidean geometry, and it has revolutionized the way we look at anything from ferns to galaxies.
OK - so I contradict myself by supporting CDT, which applies spacetime and not just separated space and time, but I also said CDT is a logical approach - not "the best" approach. M∧Ŝc2ħεИτlk 08:16, 2 April 2013 (UTC)

Wavefunctions are not the only game in town - quasiprobability distributions in the phase space formulation offer new insight. Phase space is nice since one slice of it is space(-time), the other is (energy-)momentum. It's easy to see how the "physical space(-time) we live in" compares with motion occurring in space(-time) (even though (energy-)momenta are elements of (energy-)momentum space). Considering Quondum's quote: "This suggests that spacetime itself is an emergent phenomenon, possibly inherently anthropocentric." - have you seen the biocentrism (theory of everything) article? M∧Ŝc2ħεИτlk 12:38, 31 March 2013 (UTC)

No, I hadn't seen that article. Nor do I feel that it relates in any real sense (it feels fringe); the anthropic principle is as close as I was getting to that. Relating to Yohan's comment a bit above: if momentum and space operators can be shown to be entirely symmetric, then there should be an equivalent concept of locality (a metric tensor) in momentum space, and there should be creatures living in that space that perceive our momentum dimension as a physical space dimension and vice versa. Which would make momentum–energy space every bit as physical as spacetime; we are then merely disadvantaged by our perspective. — Quondum 13:06, 31 March 2013 (UTC)

Interesting... I'm sure there is a phase space formulation in GR which is basically what you're saying. In the ADM formalism, the metric g and the momentum π conjugate to the metric are analogues of the generalized coordinates q and momenta p in analytical mechanics. Don't take that biology article too seriously - just thought to point it out. I think the ideas are of interest, but yes, all natural sciences are emergent from physics. M∧Ŝc2ħεИτlk 15:49, 31 March 2013 (UTC)

Nope, GR in any formulation is not what I'm saying. Different formulations may be mathematically equivalent. What I'm proposing is inequivalent to either GR or QFT, even though it shares a lot with QFT. I cannot really comment on the ADM formalism, and do not know whether it is essentially equivalent to GR, and whether it could easily accommodate QFT superposition, though this does not seem to be the objective from the article. — Quondum 03:12, 1 April 2013 (UTC)

OK, I may have been carried away thinking immediately of GR as soon as you said "metric"... M∧Ŝc2ħεИτlk 07:42, 1 April 2013 (UTC)

Yes, that's the challenge of a TOE: finding a way of formulating it so that it maintains QFT's superposition, and at the same time accommodates GR's metric in some sense within each component of a superposition, all the while being mathematically consistent. So it should generate GR (essentially exactly) in the classical limit. And it should produce QFT in the low density limit. But the picture of two black holes in offset locations linearly superposed on each other suggests that spacetime is not a great way to frame the problem: we do not know how to form the superposition. So essentially what I'm doing is to frame the whole thing in a way that does not rely on any definition of spacetime, but relies only on the observation that superposition of fields appears to be exact. And in the process I'm saying: at a glance it looks as though the maths of wavefunctions looks exactly the same in momentum–energy space and in spacetime; is this in fact a true symmetry of the physics? Possibly merely untrained speculations on my part. The spaces do seem to be true duals (both are 4-d affine spaces containing complex fields, wavefunctions identically described over each, and fields on each translate via Fourier transform). But I do not know how to describe multiple particles of different rest masses, and how their equations of motion (the Dirac equation of fermions, for example) look, how the rest masses and charges translate, and how the EM field translates. Interacting particles must have overlapping wavefunctions in spacetime – is locality in momentum space the same? This would in principle be a pretty elementary investigation. — Quondum 14:52, 1 April 2013 (UTC)

"I would argue that QFT is no less complete than GR: each of them contradicts experiment in some domain, but is complete in its own domain." Has any one of them been contradicted in experiment? Surely not; it would have been a sensation. They may contradict expectations, but that's another thing. Then the question about fields and superposition. The physical existence of fields is not certain at all. In QFT, no assumption about the existence of fields is actually needed. What is needed is a set of abstract states. Wave functions of particles are under doubt.
None of them are actually measurable. (You could argue that the EM fields are measurable, but what is actually measured lies closer to the classical EM fields.) The abundance of bases supports the thought that no QFT field has physical existence. The existence of spacetime is under less doubt, as well as that of particles in one way or another. The QFT fields describe point particles, even though the "wave functions" make them appear to be spread out. In a sense, QFT then has built-in "singularities". I'd have to defend string theory a bit here. If particles are not 0-dimensional, then the next simplest thing would be that they are 1-dimensional. It doesn't stop there, since the theory allows for structures of any dimension. One probably needs to be a little bit careful about the superposition principle too. Mathematics is one thing, and physically realizable states are another thing. For instance, it is commonly believed that a superposition of a half-integer spin state and an integer spin state does not exist in nature. Taking thoughts like this further, mathematically "possible" things might not be realized in nature. In short, I'm unsure about the physical existence of any field; if they do exist physically, then they certainly don't exist physically in every possible mathematical way. Equations of motion for multiple interacting particles are difficult and quite useless. The trouble is that there really is no such thing as an interacting n-particle state. The best explored example is probably the QFT treatment (beyond Dirac) of the hydrogen atom, but here too one makes plenty of approximations. (Chapter 13 in Weinberg, if you still have it, Maschen.) YohanN7 (talk) 09:45, 2 April 2013 (UTC)

I'm afraid I take a virtually diametrically opposite position. This sounds like arguing the de Broglie–Bohm theory versus the many-worlds interpretation. How can you say the theories have not been experimentally contradicted? It is well known that they have limited domains of applicability: QFT and GR are mutually incompatible on theoretical grounds. GR's spectacular experimental success on macroscopic scales is a sufficient contradiction of QFT, and vice versa. It is only in the low-density, large-scale limit that they are compatible (where neighbourhoods of spacetime can be treated as a flat background to QFT). How the "states" of QFT that you mention evolve does require (AFAICT) those states to be treated as wavefunctions in some multidimensional "space", so they cannot simply be abstracted to states. In the solution to the Schrödinger equation, the evolution operator is determined by the Hamiltonian operator, which depends directly upon the background "space", whichever it is (by way of illustration only: the Schrödinger equation is non-relativistic and hence this treatment also has a limited domain of applicability). My point remains: without fields, the mathematical guts of QFT is ripped out. Beware of saying that particles are pointlike in QFT: this is presumably an artefact of the maths used (in particular, Feynman's formalism), no different from saying that a square pulse has infinite extent because each of its components under a Fourier transform is infinite in extent. "Wave functions of particles are under doubt"? Just because they do not behave classically does not put them in doubt – be careful of what you mean by "physical existence". I start from the perspective that there seems to be some form of reality out there, and that we can try to characterize it, not that it has to conform beyond the domain of my experience to the concepts of reality that I have built up through personal experience. The superposition of a half-integer and whole-integer spin (of the same particle/field) is not possible in QFT with the permitted fields, so I'm not sure what you're getting at here. — Quondum 10:32, 2 April 2013 (UTC)

Last point first. Superposition of any states is possible in QFT. Some superpositions just aren't physically realizable. There is no formal "rule" prohibiting mixes of integer and half-integer states; it is just thought that it is impossible to prepare such a state in an experiment. Wave functions play a very little role in QFT. The ones that enter are free field wave functions in the approximation of no interactions. (Typically tensor products of exponentials times a coefficient vector with appropriate LT transformation properties.) Particles are pointlike in QFT; this is standard terminology. They are quantized classical point particles. Steven Weinberg (The Quantum Theory of Fields) develops QFT from states. Using Lorentz invariance and causality (here is where spacetime enters), the free field equations (and their solutions) appear only as byproducts. Interactions are handled abstractly as well. The scattering matrix really only involves free trivial wave functions following from prescribed Lorentz transformation properties of the asymptotic fields. There is never even an attempt to follow the detailed course of events (or time evolution of a wave function) unless this is done in a classical approximation (potential function, i.e. classical field). The approach to QFT through wave functions is historical (Dirac had a huge influence), but what the fundamental order of nature is (are fields or particles - of some dimension and extent - the fundamental entities?) is simply not known. The difference to me is that particles feel more certain to exist. It is another matter that fields describe nature very accurately. So do particles. An example: the Dirac wave function of an electron seems to be in no way measurable. How can we assign physical existence to something that is not measurable? My standpoint is that I simply don't know. Can you name one experiment contradicting QFT or GR? They are believed to have limited domains of applicability - I believe that too - but there is no experimental evidence where any one of them breaks down. Moreover, QFT and GR have historically been thought of as incompatible due to the fact that GR cannot be quantized as a renormalizable theory. These lines of thought are being reversed a bit. Nonrenormalizability does not automatically "destroy" a theory any longer. YohanN7 (talk) 12:07, 2 April 2013 (UTC)

I should mention that states and the particle content building up these states are defined as (elements of) representation spaces of the Lorentz group. Also, there are field equations, at least if there is a known Lagrangian density (which isn't a prerequisite for a QFT), but these are pretty much of no use if there are interacting particles. One wouldn't be able to tell how many of them there are, even given initial conditions. The true dynamics lies in the field operators. YohanN7 (talk) 20:10, 2 April 2013 (UTC)

When you speak of states, you are moving outside my area of familiarity (I can only interpret that in which there are clearcut eigenstates - like the electron orbitals in an atom). I think of QFT as being phrased in the Dirac tradition: as fields in spacetime, extended via the Hilbert space concept, whose excitations allow n-particle states. I really only think of the Dirac electron wavefunction, which satisfies a multi-dimensional differential equation that includes a coupling with the electromagnetic field, which simultaneously satisfies a similar equation. Ignoring for the moment the concept of wavefunction collapse, there is nothing in this picture that corresponds to a point particle. This picture is (so I've been led to believe) mathematically equivalent to the Feynman and other formalisms. Ergo, the concept of point particles is a mathematical convenience, not a reality. You can't truly have a point particle: its energy would be infinite. Terminology be damned. That does not mean the particles are pointlike, only (I guess) that you can treat a particle as a mathematical superposition of an infinite number of Dirac impulses. Yet an isolated Dirac impulse cannot occur, not even mathematically as a normalizable wavefunction. I really do not see how you can say particles (interpreted as isolated point particles) describe nature at all. The electron wavefunction has spin ±1/2 per excitation level, and the photon field has spin ±1 per excitation level. Thus, you never can get a single-excitation electron field in a spin-1 state. You can of course get a superposition of 1-excitation (one "particle") and 2-excitation (2-"particle") electron fields – nothing wrong with that.

Think in the spacetime wave paradigm in Hilbert space. Produce a Schrödinger's black hole (in lieu of his cat): a particle reflects off or passes through a semi-silvered mirror, passing either side of a pico black hole, imparting a tiny jolt of momentum through gravitational interaction. The two components of the black hole drift apart over time, creating incompatible spacetime geometries for the superposition. QFT broken, at least inasmuch as it does not describe this superposition. The fact that GR can be shown to produce curved spacetime is sufficient to experimentally invalidate (and hence falsify) QFT.

Sorry, I know most of what you've said is over my head, but the bits about QFT that I've internalized seem to contradict your conclusions. — Quondum 23:30, 2 April 2013 (UTC)
The Dirac equation as originally thought of by Dirac is essentially a relativistic version of the Schrödinger equation, i.e. it's RQM, not QFT, so in this sense it is a classical theory. In addition to n-particle states you need the number of particles to be nonconstant. This forces the introduction of creation and annihilation operators. Out of these the QFT fields are built up. They are thus operators on the set of states in the QM Hilbert space (Fock space). In the old (now deprecated) terminology, one speaks of second quantization because the Schrödinger and Dirac fields, like the EM field, are "quantized once again". In full-blown QFT, the coupling of the electron must be done at the level of operators, not fields.
Quantum mechanical point particles are characterized by not having internal structure. The electron is certainly believed to be a point particle in this sense. A nucleus is composed of constituents (protons, neutrons) in turn composed of structureless particles (quarks). In this sense the elementary particles are believed to be physical point particles.
What you do in your spin calculation is that you are taking tensor products (a⊗b) and using a Clebsch-Gordan decomposition (perhaps without realizing it) to find a new basis in which the spin operator is diagonal. There is no rule against simply adding two state vectors (after all, it is a vector space), and normalizing the result to unity. This would be a state with spin mixes.
I am not sure what you mean by the experiments contradicting QFT or GR. As "experiments" I count experiments in a laboratory, or astronomical observations, I wouldn't count experiments of thought, good as they may be. Of course, QFT assuming Minkowski spacetime should break down in extremely curved spacetime, but this has not been observed experimentally. Then there is QFT in curved spacetime to perhaps take care of this. The same goes for two black holes drifting apart. YohanN7 (talk) 09:09, 3 April 2013 (UTC)
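To illustrate the tensor-product/Clebsch–Gordan point a few comments up: combining two spin-1/2 systems gives total spin s = 0 (singlet) and s = 1 (triplet), with S² eigenvalues s(s+1) = 0 and 2 in units ħ = 1. A small pure-Python toy of my own (not part of the discussion) that builds S² on the product space and applies it to the standard singlet and triplet states:

```python
# Two spin-1/2 systems; basis order |uu>, |ud>, |du>, |dd>; hbar = 1.

def kron(A, B):
    # Kronecker product of two matrices given as lists of lists
    n, m, p, q = len(A), len(A[0]), len(B), len(B[0])
    return [[A[i][j] * B[k][l] for j in range(m) for l in range(q)]
            for i in range(n) for k in range(p)]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mv(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

I2 = [[1, 0], [0, 1]]
Sx = [[0, 0.5], [0.5, 0]]
Sy = [[0, -0.5j], [0.5j, 0]]
Sz = [[0.5, 0], [0, -0.5]]

# Total spin components S_i = S_i (x) I + I (x) S_i, then S^2 = sum of squares
S2 = [[0] * 4 for _ in range(4)]
for S in (Sx, Sy, Sz):
    T = madd(kron(S, I2), kron(I2, S))
    S2 = madd(S2, matmul(T, T))

singlet = [0, 2 ** -0.5, -(2 ** -0.5), 0]   # (|ud> - |du>)/sqrt(2)
triplet = [1, 0, 0, 0]                      # |uu>
print([abs(c) for c in mv(S2, singlet)])    # all ~0  -> s(s+1) = 0
print([abs(c) for c in mv(S2, triplet)])    # (2,0,0,0) -> s(s+1) = 2
```

The symmetric combination (|ud⟩ + |du⟩)/√2 likewise returns eigenvalue 2, completing the triplet; any other vector, such as singlet + triplet, is a perfectly good element of the vector space but not an S² eigenstate, which is the "spin mix" being discussed.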

I'm copying this to User talk:Quondum/sandbox/Spacetime, since I think it is voluminous enough to merit its own subpage, and is cluttering Maschen's talk page unduly. Everyone is welcome to contribute to the discussion there, and I'll leave Maschen to collapse/remove/archive/ignore this section. — Quondum 01:08, 4 April 2013 (UTC)

I've been following this thread with interest - Quondum and YohanN7 are making interesting points, just a lot to follow very quickly (and I really shouldn't be procrastinating on WP due to exams in a month...) ... Thanks, M∧Ŝc2ħεИτlk 06:47, 4 April 2013 (UTC)

## Disambiguation link notification for April 5

Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that when you edited Wave function, you added a link pointing to the disambiguation page Probability density (check to confirm | fix with Dab solver). Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ

Join us at the DPL WikiProject.

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 09:24, 5 April 2013 (UTC)

## Review required

Hi!

How are you? Hope fine... Well, I was reading the article Lists of integrals#Absolute value functions, where I found a formula for ${\displaystyle \int |\sin ax|\,dx}$ which gave wrong results... I added a correct one, but no one has yet bothered to have a look and remove the old one. I did not do it myself for fear that I might be wrong in judging the old one's correctness. Would you please look into it and make the necessary corrections? I would be very grateful.

Sγεd Шαмɪq Aнмεd Hαsнмɪ (тαʟк) 17:41, 7 April 2013 (UTC)

Hi, yes. The formula you added looks pretty complicated; where did you get it from? Ideally it should come from a reliable secondary source and not just be derived out of an editor's head! Wolfram Alpha gives [8]:

${\displaystyle \int |\sin(ax)|dx=-{\frac {1}{a}}\cos(ax)\operatorname {sgn}[\sin(ax)]+C}$

for real x.

For future reference, you can also ask for reviewing edits on the talk page in a new section (this applies to any article).

P.S. I changed your last div tag from <div/> to </div> to correct it. Best, M∧Ŝc2ħεИτlk 18:53, 7 April 2013 (UTC)

Yes, that’s fine, but it doesn’t give the right result... What about that? — Preceding unsigned comment added by Syed Wamiq Ahmed Hashmi (talkcontribs) 19:39, 7 April 2013 (UTC)

Sorry, I had to do something between then and now. Checking only the case a = 1, and integrating over n half-periods [0, nπ], the formula above and the old one in the article are indeed wrong (each resulting in zero), while the one you added gives 2n, which seems correct since ∫₀^π sin x dx = 2. I went ahead and removed the old incorrect formula; nevertheless I expect you have a source for your equation? If so please add one to the reference section. Thanks. M∧Ŝc2ħεИτlk 21:49, 7 April 2013 (UTC)
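The check described above can be scripted. The continuous antiderivative used below (adding 2/a per half-period) is an assumed form for illustration, not necessarily the formula that was added to the article:

```python
import math

def F(x, a=1.0):
    # A continuous antiderivative of |sin(a x)| for a > 0 (assumed form):
    # each completed half-period contributes 2/a to the running total.
    k = math.floor(a * x / math.pi)
    return (2 * k + 1 - math.cos(a * x - math.pi * k)) / a

def numeric_integral(f, lo, hi, n=200000):
    # Midpoint rule; plenty accurate for a sanity check
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

n = 3
exact = F(n * math.pi) - F(0)                                   # expect 2n = 6
approx = numeric_integral(lambda x: abs(math.sin(x)), 0, n * math.pi)
print(exact, approx)
```

Both values come out ≈ 6, matching the 2n result; a discontinuous candidate antiderivative telescopes to 0 over the same interval.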

I’m afraid I did not find the result I gave anywhere, either in books (though I have just basic calculus textbooks with me) or on the internet... I came up with it myself, derived from my own head! Once I derived it and verified it extensively, I thought it suitable to put up in an article on integrals, since the correct result is up nowhere on the internet. I know there must be a reference for everything in an article, but when I found none existing, I took this step... Hope this is O.K.

Nevertheless, thanks a lot for the invaluable help! If you find the formula unsuitable to qualify to get into an article, you might remove that as well.

Sγεd Шαмɪq Aнмεd Hαsнмɪ (тαʟк) 05:50, 8 April 2013 (UTC)

No problem. M∧Ŝc2ħεИτlk 06:26, 8 April 2013 (UTC)

Hi again!

New days bring new problems... On that same old page on integrals, this time I saw a flaw in the formula for ${\displaystyle \int |\cos ax|\,dx}$, and I think this problem exists for all the other absolute-valued trigonometric integrands. I suspect the person who entered them must not have verified them before putting them into that article... What do you think should be done? The correct integrals are nowhere to be found, and it would take a lot of time deriving the actual complicated anti-derivative for each one of them. And till then, would we have to keep the old wrong ones there? Something must be done but I do not know what to do!

Sγεd Шαмɪq Aнмεd Hαsнмɪ (тαʟк) 13:30, 9 April 2013 (UTC)

We should take it to Wikipedia talk:WikiProject Mathematics in a new section and the talk page of that article, as mentioned above.
Please don't insert formulae that you derive yourself; this counts as original research and is not allowed on WP for verifiability reasons. You'll find that if you do this, people will revert your edits with an edit summary like "removed addition of unsourced/dubious content - see talk". (Don't panic, I won't revert your recent correction.) Here is one example from a recent user. A couple of years ago when I started on WP, I learned the hard way by randomly deriving formulae and then inserting them (admittedly some of my results were correct but uninteresting, the rest completely wrong...) M∧Ŝc2ħεИτlk 14:57, 9 April 2013 (UTC)
The above Wolfram Alpha formula and the formula in the article for |cos(ax)| are both wrong: an antiderivative of a continuous function must be continuous, which is not the case for these formulas. D.Lazard (talk) 15:36, 9 April 2013 (UTC)
Yes they are wrong, which is why I notified the WikiProject and talk page. Should we just comment out the dubious section until sources are found? Thanks, M∧Ŝc2ħεИτlk 15:40, 9 April 2013 (UTC)
Thanks for your reassurance... But what next? What should be done? 19:39, 9 April 2013 (UTC)
Simple: look for references, or remove them. For now I tagged it as "{{dubious}}" for people to see. M∧Ŝc2ħεИτlk 19:48, 9 April 2013 (UTC)
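D.Lazard's continuity criterion above can be seen numerically (an editorial sketch, a = 1): the sgn-based candidate antiderivative jumps at each multiple of π, so its increments over whole periods cancel out.

```python
import math

def sgn(t):
    # Sign function: +1, 0, or -1
    return (t > 0) - (t < 0)

def G(x, a=1.0):
    # The Wolfram Alpha candidate antiderivative: -(1/a) cos(ax) sgn(sin(ax))
    return -math.cos(a * x) * sgn(math.sin(a * x)) / a

# Approaching x = pi from below and from above exposes a jump of size 2,
# so G(n*pi) - G(0) telescopes to 0 instead of the correct 2n.
eps = 1e-9
print(G(math.pi - eps), G(math.pi + eps))  # ≈ +1 and -1
```

A function with such jumps cannot be an antiderivative of the (continuous) integrand |sin(ax)|.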

Well, thanks a lot. It has been so good having great seniors like you, who have taught me many new things in my newly started phase as a Wikipedian... Thanks once again for your encouraging, tolerant attitude; and sorry for bothering you so much, almost pulling you into a problem to which you were so unrelated and interrupting your exam preparations :( ... I will try my best not to bother you again for the month till your exams are over. Best of luck for your exams and please pray for mine, too, for which I, too, have just a month to go. Bye!

Regards

21:07, 9 April 2013 (UTC)

No problems at all, and good luck to you too! M∧Ŝc2ħεИτlk 23:19, 9 April 2013 (UTC)

## Templates braket, bra, ket and bra-ket

I like your template {{braket}}. I've a few suggestions, if you're feeling energetic:

• I have changed {{ket}}. You'll see why under the edit summary. Feel free to panelbeat, especially the doc page.
• I have defined {{bra}} similarly. I am not familiar with the terminology (called it a covector). I'd appreciate your corrections.
• Perhaps change the first parameter of {{braket}} from "braket" to "bra-ket" (as in {{braket|bra-ket|φ|ψ}}). This allows the creation of the shorthand template {{bra-ket}} in analogy with {{bra}} and {{ket}}.
• A challenge/request: allow an optional additional parameter in {{braket}} so that {{braket|bra-ket|φ|H|ψ}} displays ⟨φ|H|ψ⟩. This is beyond my current knowledge of templates.

Quondum 02:15, 15 April 2013 (UTC)

In order of the points:
• OK, I anticipated this and was going to create it some time ago, but didn't since...
• I was sure there was a template called "bra" for something completely different and didn't want to overwrite, which is the reason for the braket template storing bras, kets, brakets and ketbras.
• I'll do the renaming.
• Also, I tried at one point to incorporate a parameter in braket for operators: ⟨φ|H|ψ⟩, but is this really needed? It would be much simpler to just use the bra and ket templates separately (you could argue the same for the "ketbra" parameter in the original braket template, I suppose... I would actually delete that parameter). One less template name, less template coding... Just my opinion.
M∧Ŝc2ħεИτlk 05:39, 15 April 2013 (UTC)
Thanks for the fixes; it's very neat now. {{bra}} was deleted a while ago, it seems because it was a deprecated synonym for {{BRA}}. I agree ketbra is perhaps going unnecessarily far, and the only reason for bra|H|ket would be laziness, so ignore my request. The existing bra-ket has value because the notation cannot be built from bra and ket.
On a tangent: is it true that iħ(d/dt)|ψ⟩ = Ĥ|ψ⟩ ↔ −iħ⟨ψ|(d/dt) = ⟨ψ|Ĥ? I'd expect the second time-derivative to have to be explicitly left-acting for it to be true in general, even if Ĥ is defined as left-acting (is it?). — Quondum 11:34, 15 April 2013 (UTC)
No problem. About operators: yes the equations are correct. It's easier to take everything as right-acting. Taking the Hermitian conjugate reverses the "handedness-action" of the operator: i.e. all operators become left-acting (time derivative and Hamiltonian).
In general, for a state |ψ⟩ and an operator Ω̂, representing Ω̂ as a square matrix and |ψ⟩ as a column vector, Ω̂|ψ⟩ is just another state, a column vector, so the Hermitian conjugates are ⟨ψ|, a row vector, and ⟨ψ|Ω̂†, another row vector. So the derivative operator in the SE above is now left-acting, ⟨ψ|(d/dt), and similarly for Ĥ, ⟨ψ|Ĥ. For completeness here are the manipulations: ${\displaystyle {\hat {\Omega }}|\psi \rangle =\left(\sum _{i,j}\Omega _{ij}|i\rangle \langle j|\right)|\psi \rangle =\sum _{i,j}\Omega _{ij}|i\rangle \langle j|\psi \rangle }$ so taking the Hermitian conjugate: ${\displaystyle \sum _{i,j}\Omega _{ij}^{*}\langle j|\psi \rangle ^{*}\langle i|=\sum _{i,j}\Omega _{ij}^{*}\langle \psi |j\rangle \langle i|=\langle \psi |\left(\sum _{i,j}\Omega _{ij}^{*}|j\rangle \langle i|\right)=\langle \psi |{\hat {\Omega }}^{\dagger }}$ The completeness condition for an orthonormal basis was used: ${\displaystyle {\hat {I}}=\sum _{i}|i\rangle \langle i|\quad \Rightarrow \quad {\hat {\Omega }}={\hat {I}}{\hat {\Omega }}{\hat {I}}=\left(\sum _{i}|i\rangle \langle i|\right){\hat {\Omega }}\left(\sum _{j}|j\rangle \langle j|\right)=\sum _{i,j}|i\rangle \langle i|{\hat {\Omega }}|j\rangle \langle j|}$ and so the operator can be represented as a matrix: the Ω_{ij} = ⟨i|Ω̂|j⟩ are matrix elements, and the outer products |i⟩⟨j| are basis matrices. Of course, this is only for discrete bases. For continuous bases the sums are integrals and nothing is written as row/column/square matrices, just as (multiple) integrals. Hope this helps, best, M∧Ŝc2ħεИτlk 14:35, 15 April 2013 (UTC)

I'm not sure I agree with this. In the context of the Schrödinger equation, I see time as an independent parameter selecting the state function over another domain (e.g. space or momentum), hence the time derivative is not expressible as a matrix operator (even infinite-dimensional).
I will accept your argument where the state is regarded as a vector over the Hilbert space of spacetime, as with RQM or QFT, but even then it remains to be shown that ${\displaystyle {\hat {\tfrac {d}{dt}}}^{\dagger }={\hat {\tfrac {d}{dt}}}}$, i.e. that the time derivative is a Hermitian operator. I'll accept that it is if you say so, but that leaves my (somewhat fuzzy) objection that in the QM formulation this does not apply (i.e. the time derivative cannot be represented as a matrix indexed by space or momentum). — Quondum 16:44, 15 April 2013 (UTC)

My response was over the top and didn't need matrices to answer the question; they were just for background/context... The time derivative is just the operator it is, not a matrix, and yes, in all quantum theory time is a parameter and not an operator (aside from the time-ordering symbol, which is probably as close as it gets). But the idea of quantum states as row/column vectors still applies: all it means is that when you project another state |φ⟩ onto the equation, the time derivative acts on that state, resulting in the scalar equation iħ⟨φ|(d/dt)|ψ⟩ = ⟨φ|Ĥ|ψ⟩. Back to the original point, the operators are right-acting.
Initially, in iħ(d/dt)|ψ⟩ = Ĥ|ψ⟩, the operators are right-acting; then, taking the conjugate, in −iħ⟨ψ|(d/dt) = ⟨ψ|Ĥ all operators are left-acting. My usual absent-minded stupidity...
Aside: In the phase space formulation there are left-handed and right-handed derivatives (although wrt x and p, not t). M∧Ŝc2ħεИτlk 17:13, 15 April 2013 (UTC)
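The matrix manipulations above are easy to verify numerically in a finite-dimensional sketch (an editorial illustration; the operator and state below are arbitrary random complex arrays):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# A generic (not necessarily Hermitian) operator and a ket as a column vector
Omega = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
psi = rng.normal(size=(n, 1)) + 1j * rng.normal(size=(n, 1))

ket = Omega @ psi                      # Ω|ψ⟩, another column vector (a ket)
bra = psi.conj().T @ Omega.conj().T    # ⟨ψ|Ω†, a row vector (a bra)

# Verify (Ω|ψ⟩)† = ⟨ψ|Ω†
print(np.allclose(ket.conj().T, bra))  # True
```

The identity holds for any square matrix, Hermitian or not, since it only uses (AB)† = B†A†.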

And about the terminology: kets are vectors and bras are dual vectors (you knew that). Sometimes the term "ket vector" is used synonymously with "ket", but I have never seen "bra covector", even though (déjà vu!) covector = dual vector = one-form, and in that terminology bras are indeed covectors. Just stick with "bra" and "ket", and think of the connection to Euclidean vectors via row and column vectors. M∧Ŝc2ħεИτlk 14:51, 15 April 2013 (UTC)

## A barnstar for you

The Original Barnstar — For taking the idea of a relativistic quantum mechanics article on a talk page and developing it in short order into a very nice article. Well done! Mark viking (talk) 16:14, 16 April 2013 (UTC)

I really appreciate your generous feedback... Although, it took a week of intense editing just to get it to where it is now (link for future reference), as well as other editors fixing my bad prose/linking/formatting (looking at it now I can see formatting inconsistencies already), and the article still has loose ends (Lagrangian densities, helicity...) and has yet to actually expose any substance to the reader (a list is prepared on the talk page). Best regards! M∧Ŝc2ħεИτlk 17:32, 16 April 2013 (UTC)

## Disambiguation link notification for April 17

Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that you've added some links pointing to disambiguation pages. Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ

Join us at the DPL WikiProject.
Relativistic quantum mechanics (check to confirm | fix with Dab solver)
added links pointing to Heisenberg, Spectrometry and Davisson
Matrix representation of Maxwell's equations (check to confirm | fix with Dab solver)
Spin magnetic moment (check to confirm | fix with Dab solver)
added a link pointing to Wave mechanics

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 01:48, 17 April 2013 (UTC)

## Template:Math

Hi,

I was lately trying to use the wiki Template:Math. Even opening the template page ends up with a "template loop detected" error (since there are examples within the Math page). Is there a way to copy the relevant templates for the Math part alone? Any help would be great.

K.Venkataraman — Preceding unsigned comment added by Venkak2 (talkcontribs) 00:04, 18 April 2013 (UTC)

It shouldn't. It never opens up with loop errors on my computer. Do you see one now: 2s + 1 ? I guess loop errors may be due to not closing one template with the end brackets }}, with other templates following after, like this:
{{math|... {{math|...}} .... {{math|...}} {{math|...
There's not much I can help with otherwise, sorry. You might ask at Wikipedia talk:WikiProject Mathematics or the "Wikipedia:Village pump". Best, M∧Ŝc2ħεИτlk 06:14, 18 April 2013 (UTC)

## Disambiguation link notification for April 25

Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that when you edited Relativistic mechanics, you added links pointing to the disambiguation pages Dynamics and Observer (check to confirm | fix with Dab solver). Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ

Join us at the DPL WikiProject.

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 00:42, 25 April 2013 (UTC)

## Making diagrams

Hi! How have you been? Recently I wished I could make nice vector diagrams. My knowledge of utilities to produce such things (and computer algebra systems in general) is woefully inadequate. I've always just sketched things freehand, or cobbled them together in something like paint :( Any helpful suggestions? Thanks! Rschwieb (talk) 18:36, 30 April 2013 (UTC)

Hey! Recently busy with exams, which continue until 31/05/2013... For SVG graphics:
• The program I use is Serif DrawPlus X4 (website); although it's quite expensive (I bought it a couple of years ago for about £50.00...), it's extremely reliable, robust, easy to use, and very versatile; you can even produce basic animations like a cartoon (not that I've done this much, but it's possible)...
• A free program, but incredibly tedious and quirky to use, is Inkscape; here's the website.
As for others not strictly SVG, but still of some interest, there are the following (I haven't gotten around to using them much, but they're very powerful).
Enjoy looking into these! Regards, M∧Ŝc2ħεИτlk 19:41, 30 April 2013 (UTC)
Thanks for the leads... I'll try some out. Good luck on exams! Rschwieb (talk) 10:26, 1 May 2013 (UTC)

## class=C for a redirect

Could you explain, what for did you place this {{math rating}} onto the talk page of a redirect? Incnis Mrsi (talk) 17:22, 7 May 2013 (UTC)

Absent mindedness... I reverted. M∧Ŝc2ħεИτlk 17:25, 7 May 2013 (UTC)

## Premature closing of MathSci's RfE against D.Lazard by Future Perfect at Sunrise?

I wish to notify you of a discussion that you were involved in.[9] Thanks. A Quest For Knowledge (talk) 15:42, 22 May 2013 (UTC)

Thanks for letting me know. Best, M∧Ŝc2ħεИτlk 15:11, 23 May 2013 (UTC)

## Disambiguation link notification for May 28

Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that you've added some links pointing to disambiguation pages. Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ

Join us at the DPL WikiProject.
Anyon (check to confirm | fix with Dab solver)
Relativistic mechanics (check to confirm | fix with Dab solver)

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 15:05, 28 May 2013 (UTC)

## Hope you don't mind

... this. Nice work. Cheers - DVdm (talk) 09:48, 4 June 2013 (UTC)

No worries, feel free, but I don't like the image much myself, and really drew it for someone else (ultimately with the incorrect interpretation from here). Not sure and don't care if it will be used much. Thanks, M∧Ŝc2ħεИτlk 23:33, 4 June 2013 (UTC)

## Symmetry in quantum mechanics

A worthwhile article all around. Use the singular name, not the plural. Ostensibly link to it from symmetry (physics). It's better to start an incomplete article in the main space so that everyone can view and edit it than to hide it away for a long time in user space where only a few people will see it. After all, Wikipedia is a work in progress. Teply (talk) 23:24, 4 June 2013 (UTC)

Thanks, I intend to tighten bits up now before moving it to mainspace, currently there are obvious errors and more bits I need to add first. M∧Ŝc2ħεИτlk 23:33, 4 June 2013 (UTC)

## Disambiguation link notification for June 5

Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that you've added some links pointing to disambiguation pages. Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ

Join us at the DPL WikiProject.
Symmetry in quantum mechanics (check to confirm | fix with Dab solver)
added links pointing to Parity, Invariance and Time reversal
Relativistic angular momentum (check to confirm | fix with Dab solver)

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 11:33, 5 June 2013 (UTC)

## question about the diagram of circular and hyperbolic angle

There might be a mistake in the diagram of circular and hyperbolic angle: File:Circular and hyperbolic angle.svg.


Would you like to join the discussion on Talk:Hyperbolic_angle? Thanks.

Armeria wiki (talk) 03:22, 9 June 2013 (UTC)

Thanks for letting me know, best. M∧Ŝc2ħεИτlk 07:16, 9 June 2013 (UTC)

## WP:MOSPHYS

Hi, nice to see you started working on it again. I'm busy with another project atm, but I plan to come back and look at it with fresh eyes too at some point, some time... — HHHIPPO 06:47, 12 June 2013 (UTC)

Just updating on a few things. Thanks for your message. M∧Ŝc2ħεИτlk 06:49, 12 June 2013 (UTC)

## Disambiguation link notification for June 12

Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that when you edited Relativistic angular momentum, you added a link pointing to the disambiguation page Stress tensor (check to confirm | fix with Dab solver). Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ

Join us at the DPL WikiProject.

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 12:08, 12 June 2013 (UTC)

## Covariant formulation of classical electromagnetism

Hey, I saw you changed the tensor expression for the inhomogeneous Maxwell equation earlier today, claiming the indices were wrong. However, it was originally correct, as the article clearly states that it uses a +--- signature. On page 557 of Jackson, 3rd edition, equation (11.141) states that

${\displaystyle \partial _{\alpha }F^{\alpha \beta }={\frac {4\pi }{c}}J^{\beta }}$

where Jackson is using Gaussian units and a +--- signature. It's very obvious that this is the way the equation is supposed to be written, given that alpha comes before beta in the Greek alphabet, so don't change it again unless you wanna start some problems... Fastman99 (talk) 02:17, 17 June 2013 (UTC)
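One convention-independent check on the equation above (an editorial aside, not part of the original exchange): because F^{αβ} is antisymmetric, ∂_β∂_α F^{αβ} vanishes identically, which is what forces charge conservation ∂_β J^β = 0. A symbolic sketch, where the component functions are arbitrary placeholders:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = (t, x, y, z)

# Arbitrary smooth components, antisymmetrized so that F^{ab} = -F^{ba}
f = [[sp.Function(f'F{a}{b}')(*X) for b in range(4)] for a in range(4)]
F = sp.Matrix(4, 4, lambda a, b: f[a][b] - f[b][a])

# If d_a F^{ab} = (4*pi/c) J^b, then d_b J^b is proportional to d_b d_a F^{ab},
# which must vanish: symmetric second derivatives contract an antisymmetric tensor
div_div = sum(sp.diff(F[a, b], X[a], X[b]) for a in range(4) for b in range(4))
print(sp.simplify(div_div))  # 0
```

The cancellation is pairwise: the (a, b) and (b, a) terms are equal and opposite because mixed partial derivatives commute.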

## Disambiguation link notification for June 19

Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that when you edited Angular momentum diagrams (quantum mechanics), you added a link pointing to the disambiguation page Time reversal (check to confirm | fix with Dab solver). Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ

Join us at the DPL WikiProject.

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 11:05, 19 June 2013 (UTC)

## Disambiguation link notification for June 26

Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that when you edited Symmetry in quantum mechanics, you added a link pointing to the disambiguation page Vector (check to confirm | fix with Dab solver). Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ

Join us at the DPL WikiProject.

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 11:21, 26 June 2013 (UTC)

## Disambiguation link notification for July 3

Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that when you edited Function of several real variables, you added links pointing to the disambiguation pages Unit and Scalar quantity (check to confirm | fix with Dab solver). Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ

Join us at the DPL WikiProject.

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 10:50, 3 July 2013 (UTC)

## Energy–momentum_relation - LaTeX for lead equation

hi Maschen, please check out my talk entry Talk:Energy–momentum_relation#LaTeX for lead equation on this. --cheers, DavRosen (talk) 03:06, 4 July 2013 (UTC)

## Function of several real variables: Sections structure

I think that the section "Limits and continuity" should be merged into the section "Continuity". Could you do that?

On my side, my next project is to write a section "Differentiability". This section is very important, being the foundation of calculus. It could be based on Differentiable function#Differentiability in higher dimension. For the moment I'll simply copy that section here, but expansion and comments should be added to explain that differentiability means that the function may be well approximated by a linear function, that the linear form which appears in that definition is called the differential and is denoted df, and that the variables in it are traditionally denoted dx_i. The interpretation (in particular in physics) of d as meaning a small variation also deserves to be explained. Also, the coefficients of this linear form are the partial derivatives, and they define a vector-valued function (also called a vector field), which is named the gradient of the function.

By the way, I see several issues in the present structure of the article:

• Multivariate calculus is the study of functions of several real variables. This makes it irrelevant, or at least very strange, to have sections called "Calculus with several real variables" and "Multivariable calculus".
• The sections "Examples" and "Symmetric functions" provide examples that appear to be less illustrative than the other examples that appear before them in the article. I incline to suppress them.
• I do not understand the aim of the section "Composite function". On the other hand, an accurate definition of the operation of composition for functions of several real variables would be useful.
• IMO, the section "Implicit function" should be rewritten to expand the following summary: the implicit function theorem allows one to define locally a differentiable function of n variables from the equation f = 0, where f is a differentiable function of n + 1 variables.

I'll leave for vacation at the beginning of next week until the end of July, and will be almost away from the net during this period.

I agree with blending the limits into the continuity section (I didn't merge yet in case you wanted to do something else with it). I will have a look at the other edits/rewriting also. But the examples in the "Examples" section are supposed to relate geometry and analysis; I don't see why they're "less illustrative"... M∧Ŝc2ħεИτlk 08:39, 6 July 2013 (UTC)

## Statement in Dot product

This edit adds the statement "Properties 3 and 4 follow from 1 and 2." This, as I understand it, is a false statement when the scalars are the real numbers (but true if they are rationals). 2 follows from 3, and 4 follows from 1 and 3, but that is probably as far as one can take it. — Quondum 02:22, 6 July 2013 (UTC)

Prior to that edit it said "these last two properties follow from the first two", which was translated to "3 and 4 (last two) follow from 1 and 2 (first two)". So the statement is correct but vague:

1. Commutativity in 1 follows from the definition.
2. Distributivity in 2
${\displaystyle \mathbf {a} \cdot (\mathbf {b} +\mathbf {c} )=\mathbf {a} \cdot \mathbf {b} +\mathbf {a} \cdot \mathbf {c} }$
is easiest to derive geometrically.
3. 3 follows from 2 by replacing b by rb, as well as using 1 to switch
${\displaystyle \mathbf {a} \cdot (r\mathbf {b} +\mathbf {c} )=r(\mathbf {a} \cdot \mathbf {b} )+\mathbf {a} \cdot \mathbf {c} }$
into
${\displaystyle r(\mathbf {b} \cdot \mathbf {a} )+\mathbf {c} \cdot \mathbf {a} =(r\mathbf {b} +\mathbf {c} )\cdot \mathbf {a} }$
to complete bilinearity (linear in both arguments).
4. 4 follows from 1 by replacing a by c1a and b by c2b. You don't need 3 for 4.

Of course, I could have been clearer at the time. M∧Ŝc2ħεИτlk 08:39, 6 July 2013 (UTC)
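The distributivity and scalar-multiplication properties discussed above are easy to spot-check numerically (an editorial sketch with arbitrary random vectors; floating-point equality is tested up to rounding tolerance):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c = rng.normal(size=(3, 5))  # three arbitrary vectors in R^5
r = 2.7                            # an arbitrary scalar

# Property 2: distributivity over vector addition
ok_distrib = np.allclose(a @ (b + c), a @ b + a @ c)

# Property 3: linearity in the first argument (pulling the scalar out)
ok_scalar = np.allclose(a @ (r * b + c), r * (a @ b) + a @ c)

print(ok_distrib, ok_scalar)  # True True
```

A numerical check of course cannot replace the algebraic argument about which properties imply which, but it catches sign and index slips quickly.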

I apologize: I missed the prior version of the statement, so you merely reworded what was already incorrect. It is actually quite an interesting point that an additive map is not necessarily also linear, which your reasoning above assumes when moving the r from inside the dot product to outside the dot product. The first two properties do not even use scalar multiplication, so neither 3 nor 4 can possibly follow from only the first two. Simple counterexamples to the implication of linearity from the distributive property exist. I was quite surprised when I first learned this. — Quondum 11:27, 6 July 2013 (UTC)
The fact that scalars can be moved out of the dot product is simply a trivial fact from the definition:
${\displaystyle (c_{1}\mathbf {a} )\cdot (c_{2}\mathbf {b} )=|c_{1}\mathbf {a} ||c_{2}\mathbf {b} |\cos \theta =c_{1}c_{2}|\mathbf {a} ||\mathbf {b} |\cos \theta =c_{1}c_{2}(\mathbf {a} \cdot \mathbf {b} )}$
(you knew that, but I'm going to include it in the article). 1 and 2 don't need to explicitly use scalar multiplication since applying the definition where needed gives the results.
If this is to be taken further (including the counterexamples), it should be on talk:dot product also, thanks. M∧Ŝc2ħεИτlk 13:29, 6 July 2013 (UTC)
No need to take it further. All the properties follow from the definition(s) ;-) — Quondum 14:15, 6 July 2013 (UTC)

## July 2013

I noticed the message you recently left for 90.244.53.168. Please remember not to bite the newcomers. If you see someone make a common mistake, try to politely point out what they did wrong and how to correct it. Thank you.

This applies to edit/revert comments just as well as it does on talk pages. Please be sure to WP:AGF until or unless you have a demonstrable reason to do otherwise, and remember that content on Talk Pages is held to a somewhat different standard than edits to actual articles, where commentary and opinion are used in the course of discussion to achieve new insight and consensus.

I cannot claim I grok the discussion re: dipole antennas clearly enough to be certain of the relevance of the user's comments and whether or not they definitely should have been rem'd (as I removed a ranting digression of his/hers on Talk:Heron's formula), but the instant example appears to be both relevant and in good faith to me. Please refer to WP:TPO for guidelines on when it is and is not appropriate to delete other's comments from Talk Pages and try to keep WP:NEWBIES in mind. besiegedtalk 20:10, 9 July 2013 (UTC)

AAAHHH!!! Sorry, yes, the post on talk:dipole antenna was a mistake and I reverted it with Twinkle; then the talk page of the IP automatically showed up and I posted the wrong message before realizing the mistake, and must have clicked save, not cancel... and consequently didn't revert. I thought the IP had posted material on the article, not the talk page. I'll remove the notice and apologize to the IP properly; it all went horribly wrong... M∧Ŝc2ħεИτlk 20:28, 9 July 2013 (UTC)
Wait a second, apart from now, when did I post a message to the IP?
Furthermore, yes, this was all my stupid mistake, but I certainly do assume good faith, and am aware of the guideline pages you just pointed out... M∧Ŝc2ħεИτlk 20:42, 9 July 2013 (UTC)
Eh! No worries! I came awfully close to biting this user myself but caught myself at the last minute and tried to make sure I was civil... the digression they posted to Heron's formula was almost impressive, and the post to dipole antenna certainly seemed similar in its rambling, but they do appear as if they might actually have some potential value as an editor if they can be acclimated to the wiki and its policies :)
Thanks for the good faith reply here, if nothing else, no harm done, no foul, and please, carry on the good work! besiegedtalk 21:06, 9 July 2013 (UTC)

## Disambiguation link notification for July 10

Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that when you edited Siegfried Adolf Wouthuysen, you added a link pointing to the disambiguation page Dutch (check to confirm | fix with Dab solver). Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ

Join us at the DPL WikiProject.

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 11:04, 10 July 2013 (UTC)

## Talk:Energy–momentum_relation

Hi Maschen, would you be willing to comment on my last post at the bottom of Talk:Energy–momentum_relation? Cheers. DavRosen (talk) 18:47, 12 July 2013 (UTC)

Thanks, replied. Regards, M∧Ŝc2ħεИτlk 21:48, 12 July 2013 (UTC)

## … that "l"/"I"/etc glyphs are unclear in what?

Isn't this diff a mistake? If it is, then feel free to delete this notice upon correction. I post here to avoid cluttering talk: Energy–momentum relation. Incnis Mrsi (talk) 11:05, 15 July 2013 (UTC)

I meant that "l"/"I"/etc-like glyphs are unclear in sans serif. Corrected. M∧Ŝc2ħεИτlk 11:20, 15 July 2013 (UTC)

## Disambiguation link notification for July 17

Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that you've added some links pointing to disambiguation pages. Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ

Join us at the DPL WikiProject.
Four-vector (check to confirm | fix with Dab solver)
Matrix multiplication (check to confirm | fix with Dab solver)

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 11:05, 17 July 2013 (UTC)

## Disambiguation link notification for July 24

Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that when you edited Four-vector, you added a link pointing to the disambiguation page Four dimensional space (check to confirm | fix with Dab solver). Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ

Join us at the DPL WikiProject.

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 11:15, 24 July 2013 (UTC)

## How stuff rotates

What's nice about geometric algebra is that you can add scalars, vectors, ..., pseudoscalars all together as one giant multivector. Then if you act with a linear operator on the multivector, you can just have it act on all of the individual pieces and add up the results because linear algebra is nice that way. If you wish, you could simply define rotation by rotor R to be ${\displaystyle {\mathsf {f}}(A)=RAR^{\dagger }}$.

Let's think in just 3 dimensions, all positive signature, and let's keep it simple with normalized basis vectors ${\displaystyle \{e_{x},e_{y},e_{z}\}}$. An easy rotation to consider is that by rotor ${\displaystyle R=\cos(\phi /2)+e_{x}e_{y}\sin(\phi /2)}$, which rotates vectors in the x-y plane. It's useful to do these calculations once by hand the slow way:

${\displaystyle {\mathsf {f}}(1)=1}$
${\displaystyle {\mathsf {f}}(e_{x})=\cos \phi e_{x}-\sin \phi e_{y}}$
${\displaystyle {\mathsf {f}}(e_{y})=\sin \phi e_{x}+\cos \phi e_{y}}$
${\displaystyle {\mathsf {f}}(e_{z})=e_{z}}$
${\displaystyle {\mathsf {f}}(e_{x}e_{y})=e_{x}e_{y}}$
${\displaystyle {\mathsf {f}}(e_{y}e_{z})=\cos \phi e_{y}e_{z}-\sin \phi e_{z}e_{x}}$
${\displaystyle {\mathsf {f}}(e_{z}e_{x})=\sin \phi e_{y}e_{z}+\cos \phi e_{z}e_{x}}$
${\displaystyle {\mathsf {f}}(e_{x}e_{y}e_{z})=e_{x}e_{y}e_{z}}$

All this is correct... You can represent the operation by an 8x8 block diagonal matrix, with each block unitary. The scalar and pseudoscalar are unaffected by rotation. (Spin 0?) The basis vectors in the plane rotate whereas the basis vector perpendicular to it does not. If you rotate the vectors in the plane by integer multiples of 2π, then they return to themselves, but a rotation by only π nets you a minus sign. (Spin 1?)
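(The table above is easy to spot-check numerically. Below is a minimal sketch of my own, not part of the original exchange: a tiny Cl(3,0) implementation with blades encoded as bitmasks. All function names here are made up for illustration.)

```python
import math

# A blade of Cl(3,0) is a bitmask over {e_x=1, e_y=2, e_z=4};
# a multivector is a list of 8 blade coefficients.

def reorder_sign(a, b):
    # sign from reordering the product of blades a and b into canonical order
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps & 1 else 1

def gp(u, v):
    # geometric product; with e_i^2 = +1 the product blade is a XOR b
    out = [0.0] * 8
    for a, ca in enumerate(u):
        for b, cb in enumerate(v):
            if ca and cb:
                out[a ^ b] += reorder_sign(a, b) * ca * cb
    return out

def rev(u):
    # reversion: a grade-k blade picks up (-1)^(k(k-1)/2)
    out = list(u)
    for blade in range(8):
        k = bin(blade).count("1")
        if (k * (k - 1) // 2) % 2:
            out[blade] = -out[blade]
    return out

def rotate(R, A):
    # f(A) = R A R†
    return gp(gp(R, A), rev(R))

phi = 0.3
R = [0.0] * 8
R[0b000] = math.cos(phi / 2)   # scalar part
R[0b011] = math.sin(phi / 2)   # e_x e_y part

e_x = [0.0] * 8; e_x[0b001] = 1.0
e_xy = [0.0] * 8; e_xy[0b011] = 1.0

f_ex = rotate(R, e_x)    # expect cos(phi) e_x - sin(phi) e_y
f_exy = rotate(R, e_xy)  # expect e_x e_y unchanged
```

The asserts one would run against this reproduce the f(e_x) and f(e_x e_y) rows of the table above.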

And yet, if you'll allow me to mislead you for a moment... You see that term ${\displaystyle {\mathsf {f}}(e_{x}e_{y})=e_{x}e_{y}}$ doesn't really correspond to the intuitive idea of a rotation. We usually picture ${\displaystyle e_{x}e_{y}}$ as a square in the first quadrant. Attached to two sides of the square are an "arrow a" ${\displaystyle e_{x}}$ pointing right and an "arrow b" ${\displaystyle e_{y}}$ pointing up. If you rotate this square by π, then arrow a points right and arrow b points down. But wait! We can reflect the square back to its original position by multiplying arrow b by -1 and arrow a by -1. Since multiplying by -1 twice gives the identity, we find that if you rotate this directed square by integer multiples of π, then it returns to itself. (Spin 2?) To check that you can't go any smaller, you can also try rotating this square by π/2. You can reflect it once to move it back to its original position, but it will be "upside down." What went "wrong" with the first definition of rotation, ${\displaystyle {\mathsf {f}}(A)=RAR^{\dagger }}$ is that, in the restricted 2 dimensional space given by the x-y plane, ${\displaystyle e_{x}e_{y}}$ acts like a pseudoscalar, which we should not expect to rotate.

If this is the idea we have for how multivectors ought to rotate, then we can define a new linear operator ${\displaystyle {\mathsf {g}}}$, which has all the same major properties in this basis as above except with ${\displaystyle {\mathsf {g}}(e_{x}e_{y})=u(2\phi )e_{x}e_{y}}$, where ${\displaystyle u(t)}$ is some function with period 2π. The only way for this matrix to remain unitary is if ${\displaystyle |u(t)|^{2}=1}$. There are only two functions with this property, namely ${\displaystyle u(t)=\exp(it)}$ and ${\displaystyle u(t)=\exp(-it)}$. We are free to choose, so we define rotation to be

${\displaystyle {\mathsf {g}}(1)=1}$
${\displaystyle {\mathsf {g}}(e_{x})=\cos \phi e_{x}-\sin \phi e_{y}}$
${\displaystyle {\mathsf {g}}(e_{y})=\sin \phi e_{x}+\cos \phi e_{y}}$
${\displaystyle {\mathsf {g}}(e_{z})=e_{z}}$
${\displaystyle {\mathsf {g}}(e_{x}e_{y})=\exp(i2\phi )e_{x}e_{y}}$
${\displaystyle {\mathsf {g}}(e_{y}e_{z})=\cos \phi e_{y}e_{z}-\sin \phi e_{z}e_{x}}$
${\displaystyle {\mathsf {g}}(e_{z}e_{x})=\sin \phi e_{y}e_{z}+\cos \phi e_{z}e_{x}}$
${\displaystyle {\mathsf {g}}(e_{x}e_{y}e_{z})=e_{x}e_{y}e_{z}}$

Was this progress? No, first of all because now we are dealing with complex numbers. We're also trying to keep everything independent of coordinates. Try transforming to a rotated basis ${\displaystyle \{e_{x}',e_{y}',e_{z}'\}}$. Don't be surprised if the bivectors mix in a highly nontrivial way. Last, with this approach, there's no good way to define objects of spin 3 or higher even though they should be possible anywhere that rotation makes sense (i.e. dimension 2 or higher).

Compare how in the familiar, non-GA quantum mechanics, you often define a "tensor" of rank k as an object that transforms with k applications of the rotation matrix, as in [10]. In 3-dimensional space, you can define tensors of arbitrarily high rank. This suggests that the concept of spin above isn't something you should assign to multivectors but rather to multi-linear functions of multivectors. I'm getting sleepy, so I won't elaborate for now, but I think you can ponder this for a while on your own. There are a lot of further subtleties with this theory. For example, about halfway down the page I linked, you see that a rank 2 tensor (similar to the spin 2 bivector above) can be expressed as a sum of a rank 0 tensor, a rank 1 tensor, and a rank 2 tensor, so some effort goes into introducing the spherical basis such that the tensor is irreducible.
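(For reference, the transformation rule and the rank-2 decomposition mentioned here, written in standard Cartesian-tensor notation; this is my own gloss, not from the linked page:)

```latex
T'_{ij} = R_{ia} R_{jb} T_{ab}, \qquad
T_{ij} = \underbrace{\tfrac{1}{3}\,T_{kk}\,\delta_{ij}}_{\text{rank 0 part}}
       + \underbrace{\tfrac{1}{2}\left(T_{ij}-T_{ji}\right)}_{\text{rank 1 part}}
       + \underbrace{\left[\tfrac{1}{2}\left(T_{ij}+T_{ji}\right)-\tfrac{1}{3}\,T_{kk}\,\delta_{ij}\right]}_{\text{rank 2 part}}
```

The three pieces have 1, 3 and 5 independent components respectively and do not mix under rotation, which is exactly the reducibility being described.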

By the way, that article spherical basis or spherical tensor is red-linked in many places in Wikipedia. Try writing it yourself to help you learn. Teply (talk) 08:11, 22 May 2013 (UTC)

"...the concept of spin above isn't something you should assign to multivectors but rather to multi-linear functions of multivectors" – I think that this is a nugget worth keeping in mind (my own non-authoritative view). — Quondum 11:25, 22 May 2013 (UTC)

Thanks for the clear explanation. Much of the above looks familiar but I will re-read and absorb to be sure, and consider the structure for the spherical basis. Looking forward to trying something soon (hopefully by tomorrow morning). M∧Ŝc2ħεИτlk 15:11, 23 May 2013 (UTC)

I just started the outermorphism article, which is the first step toward doing the GA version of tensor analysis. If nothing else, you could help by adding some references to the page. Another suggestion would be an article on the multiparticle spacetime algebra (or section in spacetime algebra). Teply (talk) 01:37, 26 May 2013 (UTC)
I'll look for some refs and add them, good start. M∧Ŝc2ħεИτlk 05:24, 26 May 2013 (UTC)

Interesting little piece on rotations. How does all this jibe with how rotors are naturally spin-1/2? Rotors are not in themselves linear operators on multivectors. Muphrid15 (talk) 05:10, 27 May 2013 (UTC)

They're not? Last time I looked, they seemed to be linear. — Quondum 03:47, 28 May 2013 (UTC)
The rotation that a rotor describes is a linear map, but the rotor itself is an object, not a map. That's what I'm trying to get at. Muphrid15 (talk) 03:55, 28 May 2013 (UTC)
I found a copy of Hestenes & Sobczyk, and am trying to figure out for myself all these "multiforms" and "extensors" in section 3-10, Tensors. At first, I was tickled to read this paragraph:

To anyone with much sophistication in linear algebra, most of this section will appear quite trivial. But these trivial things must be mentioned, because they are useful, indeed, they are essential in many applications, and we are afraid that even some of our sophisticated friends will overlook them.

So you can imagine my dismay when I turned the page:

Our concept of tensor is somewhat more general than the usual one, because we allow the value of a tensor to be a multivector of mixed grade. This enables us to handle concepts beyond the competence of conventional tensor analysis. Since our tensors can have spinor values, our formalism can reproduce the results of conventional spinor analysis. Spinors play a crucial role in modern theoretical physics, and to deal with them several 'spinor analysis' formalisms have been developed. These formalisms are similar to tensor analysis but not integrated with it, and they rely on 'spinor coordinates' which are difficult to interpret geometrically. In contrast, the present formalism fully integrates tensors with spinors and provides coordinate-free methods for dealing with spinor tensors. However, as this subject is of rather esoteric interest, we will not develop details here. We are more interested just now seeing how to handle conventional tensor theory.

I guess I'll have to stare at this chapter a while and try to figure out the esoterica on my own or find another reference. Teply (talk) 06:27, 28 May 2013 (UTC)
Don't feel daunted by such an opaque statement. My own impulse is to dismiss it rather than to try to make sense of it. Besides, I think it is not entirely true: tensor algebra does in fact allow the addition of tensors of different "grades" (different tensor powers). I fondly suspect that the claim about tensor algebra being unable to accommodate the double-cover nature of rotors is also more a reflection on the practitioners than on the algebra – after all, every GA is a reduction of a tensor algebra, suggesting that every statement in a GA has an equivalent in a TA. — Quondum 15:57, 28 May 2013 (UTC)

Sorry to cut in bluntly without much reference to this topic,

• I can't touch WP until late Friday evening (exams finally finish then), so before anyone decides to pack up and take the discussion elsewhere, which is definitely fine, everyone is still more than welcome to stay.
• I have been reading the posts here, it's just that I'm less familiar with GA than recent topics of revision (hence biased editing and apparent ignorance).
• Come Friday, I'll look for more refs on spherical tensors and draft the article.

P.S:

• Lately I've been meaning to ask someone what an "extensor (mathematics)" is but never came around to it...
• Agreed that Hestenes & Sobczyk is not a brilliant ref for GA, there is not even one diagram in it, and the text is extremely wordy... M∧Ŝc2ħεИτlk 17:08, 28 May 2013 (UTC)
No worries. I regard Hestenes and Sobczyk as intensely interesting, but also a bit dense. Their chapters on vector manifolds and linear algebra are a vast wealth of information, but compared to Doran and Lasenby or Dorst, Fontijne, and Mann, it seems clear to me that the book is a product of its time: the field has progressed, and other authors have endeavored to present information more simply and clearly. Hestenes' papers since publication of the book reflect this too, in my opinion.
I've also never seen the word extensor used in any other context, and as an example of how the field has moved forward, the proof of the Cayley-Hamilton theorem--which I regard as extremely important for finding eigenblades--is a bit difficult to read because no one else uses simplicial derivatives. Muphrid15 (talk) 02:47, 29 May 2013 (UTC)
I've spent some more time thinking about this question. I thought one way to help us understand is to consider the first spin 2 object people typically encounter in classical physics, the electromagnetic tensor. In the spacetime algebra, as you know, we usually represent this as a bivector:
${\displaystyle F=E^{k}\gamma _{k}\gamma _{0}-(B^{1}\gamma _{2}\gamma _{3}+B^{2}\gamma _{3}\gamma _{1}+B^{3}\gamma _{1}\gamma _{2})}$
If instead we want to view this as a tensor, that is to say (if I understand Hestenes & Sobczyk) a multilinear function mapping multivectors to scalars, there are two ways of doing so. First, we can think of it as a function that maps two vectors into a scalar as
${\displaystyle {\mathsf {F}}(\gamma _{0},\gamma _{0})=0,}$
${\displaystyle {\mathsf {F}}(\gamma _{0},\gamma _{1})=-E^{1},}$
${\displaystyle {\mathsf {F}}(\gamma _{1},\gamma _{0})=E^{1},}$
${\displaystyle {\mathsf {F}}(\gamma _{1},\gamma _{2})=B^{3},}$
and so on. Second, we can think of it as a function that maps a bivector to a scalar as
${\displaystyle {\mathsf {F}}(\gamma _{0}\wedge \gamma _{0})={\mathsf {F}}(0)=0,}$
${\displaystyle {\mathsf {F}}(\gamma _{0}\wedge \gamma _{1})=-E^{1},}$
${\displaystyle {\mathsf {F}}(\gamma _{1}\wedge \gamma _{0})={\mathsf {F}}(-\gamma _{0}\wedge \gamma _{1})=-{\mathsf {F}}(\gamma _{0}\wedge \gamma _{1})=E^{1},}$
${\displaystyle {\mathsf {F}}(\gamma _{1}\wedge \gamma _{2})=B^{3},}$
and so on. There's also a third way to view this as an "extensor" or as a function that maps a vector to a vector:
${\displaystyle {\mathsf {F}}(\gamma _{0})=E^{1}\gamma _{1}+E^{2}\gamma _{2}+E^{3}\gamma _{3},}$
${\displaystyle {\mathsf {F}}(\gamma _{1})=-E^{1}\gamma _{0}-B^{2}\gamma _{3}+B^{3}\gamma _{2},}$
${\displaystyle {\mathsf {F}}(\gamma _{2})=-E^{2}\gamma _{0}+B^{1}\gamma _{3}-B^{3}\gamma _{1}}$
${\displaystyle {\mathsf {F}}(\gamma _{3})=-E^{3}\gamma _{0}-B^{1}\gamma _{2}+B^{2}\gamma _{1}}$.
I think the first of these is in some sense the most general, and exhibits it as an order 2 tensor. We can generally think of order k tensors as multilinear functions that map k vectors into a scalar. The extensor is the same idea, but allows multivector-valued output rather than scalar output. The second formulation above works only because the electromagnetic tensor has special symmetries. It's that idea of symmetries that allows the Riemann (ex)tensor to be written as a function that maps bivectors to bivectors. We could instead think of it as an order 4 tensor mapping 4 vectors into a scalar that just so happens to have extra symmetries. The third formulation only seems nice because the input and output are of the same grade. This breaks down for odd-integer rank, where the grade of the input and the grade of the output can't be the same.
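(A quick consistency check of the third formulation, on my own assumption that it means ${\mathsf{F}}(a)=F\cdot a$ in the $(+,-,-,-)$ signature; this is my gloss, not part of the original post:)

```latex
{\mathsf {F}}(\gamma _{0}) = F\cdot \gamma _{0}
  = E^{k}\,(\gamma _{k}\wedge \gamma _{0})\cdot \gamma _{0}
  = E^{k}\left[\gamma _{k}\,(\gamma _{0}\cdot \gamma _{0})
               -\gamma _{0}\,(\gamma _{k}\cdot \gamma _{0})\right]
  = E^{k}\gamma _{k}
```

since $\gamma_0\cdot\gamma_0=1$, $\gamma_k\cdot\gamma_0=0$, and the purely spatial bivectors in $F$ contract to zero with $\gamma_0$; this reproduces the first entry of the table above.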
As far as rotation/spin properties go, something still seems mysterious to me. If I rotate the original bivector above, with terms like ${\displaystyle \gamma _{0}\wedge \gamma _{1}}$, then by the usual way that rotors act, it seems that I would only need one set of them. So why spin 2? ... Hmm... Maybe some subtlety about active vs. passive rotation...? Still thinking... Teply (talk) 08:16, 4 June 2013 (UTC)
Interesting, so the third case treats F as an extensor? But the idea of a tensor mapping two vectors or a bivector to a scalar seems to be already part of the original nature of tensors. To quote MTW p.74:

"Should one make a distinction between tensors whose outputs are scalars, and tensors whose outputs are vectors? No! A tensor whose output is a vector can be reinterpreted trivially as one whose output is a scalar. Take, for example, Faraday = F [aka EM field tensor]. Add a new slot for the insertion of an arbitrary one-form σ, and gears and wheels that guarantee the output

${\displaystyle {\boldsymbol {\mathsf {F}}}({\boldsymbol {\sigma }},{\boldsymbol {\mathsf {u}}})=\langle {\boldsymbol {\sigma }},{\boldsymbol {\mathsf {u}}}\rangle ={\text{real number}}\,.}$

Then permit the user to choose whether he inserts only a vector, and gets out the vector F(..., u), or whether he inserts a form and a vector, and gets out the number F(σ, u). The same machine will do both jobs. Moreover in terms of components in a given Lorentz frame, both jobs are achieved very simply:

${\displaystyle {\boldsymbol {\mathsf {F}}}(\cdots ,{\boldsymbol {\mathsf {u}}})=F^{\alpha }{}_{\beta }u^{\beta }\,,}$
${\displaystyle {\boldsymbol {\mathsf {F}}}({\boldsymbol {\sigma }},{\boldsymbol {\mathsf {u}}})=\sigma _{\alpha }F^{\alpha }{}_{\beta }u^{\beta }\,.}$

..."

which is all pretty simple and well.
The Lorentz force would be a real well-known example:
${\displaystyle {\frac {d}{d\tau }}{\boldsymbol {\mathsf {p}}}=q{\boldsymbol {\mathsf {F}}}({\boldsymbol {\mathsf {u}}})}$
So the extensor extends this idea to multivectors in GA?
Going back to your description of spin 2 and bivectors above:
"... We usually picture ${\displaystyle e_{x}e_{y}}$ as a square in the first quadrant. Attached to two sides of the square are an "arrow a" ${\displaystyle e_{x}}$ pointing right and an "arrow b" ${\displaystyle e_{y}}$ pointing up. If you rotate this square by π, then arrow a points right and arrow b points down. ..."
it's trivial to visualize bivectors especially in the standard Cartesian basis, but does the rotor work when introducing a timelike basis vector γ0 into the spacetime bivectors of the form γ0γj? The rotor seems to work on spacelike multivectors. I'll have to check this myself.
Also a possible typo I didn't notice: the ex part of the bivector should point "left" (−x direction) with ey pointing "down" (−y direction) after a π rotation? No matter.
M∧Ŝc2ħεИτlk 14:18, 4 June 2013 (UTC)
Yeah, typo, whatever. The passage you quote from MTW is actually one of the reasons I dislike it. It takes what could be a great setup for GA and instead destroys it, a textbook example - literally - of what Hestenes et al. are trying to move past. Anyway, now that I have H&S in front of me again, I see I messed up my definition of "extensor" above, basically swapping the input/output. A function linear in r arguments is a tensor if the inputs are vectors or an extensor if the inputs are multivectors. If a tensor of r arguments spits out an s-vector, then it is of grade s and rank (order in WP vocabulary) r+s. A tensor of grade 0, that is, a function that takes r vectors and returns a scalar, is a "multilinear form." You can convert a grade s tensor into a degree r+s multilinear form as follows:
${\displaystyle \tau (a_{1},a_{2},...,a_{r+s})=(a_{r+1}\wedge a_{r+2}\wedge \cdots \wedge a_{r+s})\cdot {\mathsf {T}}(a_{1},a_{2},...,a_{r})}$
There's also a lot here on multiforms. I had been skipping some of the sections on contraction, protraction, and traction because the derivatives made it seem like it had more to do with GC. Perhaps that was unwise of me. I see here among other theorems a unique decomposition of a biform into a sum of invariants in a way that looks very similar to the decomposition of that example Cartesian tensor. There's also the theorem (with proof in a reference) that every blade-preserving multiform of degree r is an outermorphism except when n=2r, in which case it is a composite of an outermorphism and a duality. I'll have to spend more time absorbing this. Teply (talk) 16:22, 4 June 2013 (UTC)
I think I'm getting with it... M∧Ŝc2ħεИτlk 23:33, 4 June 2013 (UTC)
Yikes... I'm making a silly mistake above by conflating tensor order with spin. Electric fields are vectors. If you change to a rotated basis, then the "two rotations," one rotation for each input vector, gives the correct result for transforming a bivector, ${\displaystyle (RaR^{\dagger })\wedge (RbR^{\dagger })=R(a\wedge b)R^{\dagger }}$. If you only did one rotation on an order 2 tensor, you'd get an incorrect result for rotation ${\displaystyle (RaR^{\dagger })\wedge b\neq R(a\wedge b)R^{\dagger }}$. The first spin 2 object people tend to encounter is, very roughly speaking, the square of the spin-1 electric field, given by the Stokes parameters. I'll check the references for anything that might give some insight into Stokes parameters in GA to see if that gets me anywhere. Teply (talk) 19:54, 4 June 2013 (UTC)
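(The identity $(RaR^{\dagger })\wedge (RbR^{\dagger })=R(a\wedge b)R^{\dagger }$, and the failure of the one-sided version, can be spot-checked numerically. A self-contained sketch of my own in Euclidean Cl(3,0), not from the discussion; all names are hypothetical:)

```python
import math
import random

# blades of Cl(3,0) as bitmasks over {e_x=1, e_y=2, e_z=4};
# a multivector is a list of 8 blade coefficients

def reorder_sign(a, b):
    # sign from reordering the product of blades a and b into canonical order
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps & 1 else 1

def gp(u, v):
    # geometric product; with e_i^2 = +1 the product blade is a XOR b
    out = [0.0] * 8
    for a, ca in enumerate(u):
        for b, cb in enumerate(v):
            if ca and cb:
                out[a ^ b] += reorder_sign(a, b) * ca * cb
    return out

def rev(u):
    # reversion: a grade-k blade picks up (-1)^(k(k-1)/2)
    return [-c if bin(i).count("1") * (bin(i).count("1") - 1) // 2 % 2 else c
            for i, c in enumerate(u)]

def wedge(x, y):
    # outer product of two VECTORS: (xy - yx)/2
    xy, yx = gp(x, y), gp(y, x)
    return [(p - q) / 2 for p, q in zip(xy, yx)]

def rot(R, A):
    # f(A) = R A R†
    return gp(gp(R, A), rev(R))

random.seed(42)
a = [0.0] * 8
b = [0.0] * 8
for i in (1, 2, 4):
    a[i] = random.uniform(-1, 1)
    b[i] = random.uniform(-1, 1)

phi = 0.7
R = [0.0] * 8
R[0b000] = math.cos(phi / 2)
R[0b011] = math.sin(phi / 2)  # rotor in the e_x e_y plane

two_rotations = wedge(rot(R, a), rot(R, b))  # rotate each input, then wedge
one_rotation = wedge(rot(R, a), b)           # rotate only one input
rotated_bivector = rot(R, wedge(a, b))       # rotate the bivector directly
```

The "two rotations" result matches the directly rotated bivector, while the one-sided version generically does not.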
Ick... Of course the electromagnetic bivector is spin 1, because even though it's an order 2 tensor, it's the vector part. In your User:Maschen/Spherical_basis notation, it's ${\displaystyle T_{ij}^{(2)}}$. So for an object of integer spin q, you need a tensor of at least order q to represent it. My almost embarrassing confusion between spin and tensor order stems from my having learned this the old-fashioned way and the paucity of modern-style GA references to translate it to the GA language for me. Pretty much all of the references deal with spin 0, 1/2, or 1, which in my view are the trivial cases. I can't really find anything that deals with the lowest order non-trivial cases, spin 3/2 and spin 2. It would be very nice to see a paper that deals with these to let the reader see the generalization or better yet to see a paper that deals with the general case.
I did see a section in Doran & Lasenby on Stokes parameters, but it translates everything into left/right circular. On a personal level, I am mostly interested in linear-only Q/U terms. There's a not-so-modern treatment in the Am. J. Phys. article [11]. The modern treatments are as disturbingly recent as 2012, [12] and [13].
You're referring to F in the context of the EM 4-potential
${\displaystyle F_{\alpha \beta }=\partial _{\alpha }A_{\beta }-\partial _{\beta }A_{\alpha }}$
where A is the spin-1 photon field? An interesting connection.
Also, perhaps "tensor order ≥ spin" would summarize the relation between the spin number and tensor order number? Here are some papers (mainly spin-1/2), [14] and a thesis which caught my attention, and may or may not help.
About Spherical basis, still reading up the material, but I found that we don't even have an article on Cartesian tensors (a redirect), which would feed content into spherical tensor. It may be worth creating Cartesian tensor, provided we avoid too much overlap with dyadic tensor. M∧Ŝc2ħεИτlk 15:38, 7 June 2013 (UTC)

## Symmetry and QM / Irreps

Nice work on the Symmetry in quantum mechanics article.

A closely related topic, that I think could really use some work, is Irreducible representation. At the moment irrep just links to Simple module, which really gives no idea of the physical significance of irreps -- why they are so important in physics and theoretical chemistry.

The key idea that needs to be put over here is Wigner's theorem:

If G is the symmetry group of a Hamiltonian H, then every degenerate eigensubspace of H is globally invariant under G i.e. constitutes a representation of the group G.

...

All IR’s of the symmetry group G of a Hamiltonian H correspond to degenerate eigensubspaces of H.

I started gathering some thoughts on this at User:Jheald/Wigner's theorem and User:Jheald/Irreducible representation, but didn't get very far, as too many other priorities intervened.

But it is something that is a real missing gap, I think, in WP's coverage for physicists and chemists. It's also a fundamental principle to put over, for the material you're currently writing up for Symmetry in quantum mechanics.

And it's something also that I think is fundamental for understanding what spinors are (in the classic sense of the term, rather than the revisionist redefinition by Hestenes etc) -- a long term ambition of mine. I think a spinor (in the classic sense) corresponds to an element of an irreducible representation of a rotation group -- where irreducibility has quite a physical significance.

From what I remember, the book by Volker Heine is particularly good on this (and has been republished not so long ago by Dover),

• Volker Heine (1960), Group theory in quantum mechanics: an introduction to its present usage‎, pp. 41 et seq. Repub. Dover [2007] ISBN 0486458784

I also gathered some other page cites at User:Jheald/Wigner's theorem.

Since you seem to be on an admirable charge in this area at the moment, do you think this is a topic you could fit on to your to-do list?

All best, Jheald (talk) 14:05, 28 June 2013 (UTC)

Thanks for the kind message, but symmetries in quantum mechanics is still far from complete (a few bits have actually gone backwards).
Yes - I'm on a crusade to obliterate the opaqueness of group theory in RQM/QFT where possible.
Definitely - irreducible representation should have its own article especially with the physics/chemistry applications - when possible we can create it.
I'd be happy to help with User:Jheald/Wigner's theorem and User:Jheald/Irreducible representation where possible, but you'll find I'm still relatively new to group theory, and would probably be asking you more questions than contributing content and not be of much help... (no clue on "double covers", "covering group", "little group", still struggling with isomorphisms between groups and the SL and GL groups... a huge setback compared to your level).
Some aspects of Lorentz group theory are still really confusing and I'm still reading up the material and adding to WP as concepts are learned, and will look for V. Heine's book.
Spinors would be nice to understand, to an extent they are in representation theory of the Lorentz group and symmetries in quantum mechanics, as elements of rotation groups. Although, I'm not entirely sure what you mean by "corresponds to an element of an irreducible representation of a rotation group", is the "irreducible representation" a subgroup of the rotation group in question? I thought "representations" are linear functions of group elements and generators, which preserve the group product.
Thanks again and regards, M∧Ŝc2ħεИτlk 17:28, 28 June 2013 (UTC)
Would these be helpful?
(for my own reference also). These came from a Google search of "irreps notation" and "irreducible representation of the lorentz group". M∧Ŝc2ħεИτlk 18:43, 28 June 2013 (UTC)
I also encourage improving the treatment of irreps as discussed above. By the way, you should look at this related DYK proposal. Teply (talk) 03:31, 29 June 2013 (UTC)
I think you misunderstand the purpose of DYK. It's mostly about featuring articles that are new rather than those that are perfected. Please reconsider. Teply (talk) 07:29, 29 June 2013 (UTC)
I'm not really sure what else to say there... M∧Ŝc2ħεИτlk 08:24, 29 June 2013 (UTC)
It turns out there's actually a copy of Volker Heine's book at the Internet Archive. [15]. What it's doing there I'm not quite sure, because as far as I can see it absolutely must still be in copyright. But anyway, it's there if you want to have a look; but it's still in print and available from online retailers, so it should be straightforward to buy a copy if you like it.
For a chemist's view, a book I picked up at university was P.W. Atkins Molecular Quantum Mechanics, which I see is now up to its 5th edition; that has a chapter on group theory, which inevitably gets into the relevance of irreps.
It does seem to me that, before jumping into relativity and field theory, it probably does make sense to get a feel for group theory in the context of the non-relativistic Schrödinger equation first (albeit perhaps with spin), so I do think what I've mentioned above (or similar) are probably a good first stop, to get clear first, before some of the materials you've linked.
A "representation" of a group is simply a way of mapping each element of a group to a matrix, with the group combination operation mapping to matrix multiplication.
Representations can be one-to-one or many-to-one (including everything mapping to the unit matrix as one particular (but important) trivial representation).
A representation is reducible if you can find a similarity transformation P to send A to P A P⁻¹ which breaks every matrix in the representation into the same pattern of diagonal blocks -- each of the blocks in itself is then a representation of the group, independent of the others.
If this is not possible, then the representation is irreducible -- an "irrep".
The patterns of quantum states with the same energy are closely related to such irreps: this is how their symmetry relates to the symmetry of the overall Hamiltonian. So if you know the symmetry group of the Hamiltonian, and you know its irreps, then you know what symmetries of quantum states to expect. Also, if you make a perturbation and break a symmetry of the Hamiltonian, then you can expect particular breaks in the symmetries of the quantum states.
That is what made quantum theorists so particularly interested in group theory (leading to the "gruppenpest", or "groups-plague" as it was called), and so is arguably the fundamental motivator for the article you've just written, from Wigner as early as 1926, right through to the interest in broken symmetries in the '60s, '70s and '80s onwards. So it's definitely something you should aim to get your head around. Jheald (talk) 15:42, 29 June 2013 (UTC)
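(Jheald's reducibility criterion in miniature; a toy sketch of my own, not from the discussion above. The 2x2 rotation-matrix representation of the cyclic group C3 is irreducible over the reals, but one fixed similarity transformation P diagonalizes every group element over the complex numbers, splitting it into two 1x1 irreps e^{±iθ}.)

```python
import cmath
import math

def mm(A, B):
    # 2x2 complex matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def rot(theta):
    # representation of a rotation by theta as a real 2x2 matrix
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

# columns of P are the complex eigenvectors (1, -i) and (1, i),
# which are shared by ALL the rotation matrices
P = [[1, 1], [-1j, 1j]]
P_inv = [[0.5, 0.5j], [0.5, -0.5j]]

# the same P block-diagonalizes every element g_k = rot(2*pi*k/3)
diags = []
for k in range(3):
    theta = 2 * math.pi * k / 3
    D = mm(mm(P_inv, rot(theta)), P)
    diags.append(D)  # off-diagonals vanish; diagonal is e^{i theta}, e^{-i theta}
```

The point is that one transformation works for the whole group at once, which is what "reducible" means here; over the reals no such P exists for this representation.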
Thanks for nailing the clarification of representations, but I have gradually become aware of such things (i.e. in terms of matrices) and the similarity transformation, but when you mentioned spinors I became sidetracked and thought you may have meant something else subtly different. No matter.
Thanks also for the link to Volker Heine's book, a good time saver.
Also I have access to and come across Atkins' book on Molecular QM, the 1st and 2nd editions, not the 5th. I'll look in there.
Considering your mention of Wigner in the 20's and the later decades, that should form a history section for the Symm in QM article, but that's not the top priority for that article. M∧Ŝc2ħεИτlk 16:48, 29 June 2013 (UTC)
If it's OK, I thought I'd start user:Maschen/irrep, using a bit of Jheald's crystal-clear phrasing above. If we agree, we can merge it with User:Jheald/Irreducible representation to produce the real mainspace article. M∧Ŝc2ħεИτlk 21:03, 29 June 2013 (UTC)
Feel free to take anything you want -- it's all there for reuse -- though be aware that the stuff in my sandbox pages never really got to the stage of anything beyond some very preliminary gatherings-up of snippets for future use; so it's probably enough just to mine out anything you think is useful and leave the rest.
It's good to see your sandbox irrep article starting to build up material. If I can comment (and I know it's very early days so far), one thing I'd say is that it does seem a bit of an algebra-fest so far. The most important question for the article to address (IMO) is why irreps? Why are they something so important to physicists and chemists (even if mathematicians seem to fast-track straight through to simple modules)? This why question is, to me, much more important to address than trying to list what the irreps turn out to be for particular key groups -- though a simple example, eg irreps of in-line quantum vibrations of a linear molecule A--B--A, or classical normal-mode irreps of a triangular arrangement of equivalent springs Δ, may be revealing.
It's the key correspondence between irreps and degenerate eigensubspaces of the Hamiltonian H, that IMO is the key anchor here to bring out. Interestingly Template:User-multi seems to have made much the same point at Talk:Symmetry in quantum mechanics#Section ordering and general introduction, the connection between degeneracies and symmetries. Unfortunately our article Degenerate energy levels is still quite weak, but it could also do with usefully making the same point -- plus a discussion of "accidental degeneracies", and how such accidents can sometimes be understood as reflections of deeper symmetries (eg here's a discussion I noted on the apparently accidental degeneracy between Hydrogen s and p orbitals, though it may need to be simplified and glossed and focussed a bit for a wiki audience).
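The correspondence between symmetry and degeneracy is easy to see numerically in a toy model (a hypothetical example of my own, not taken from any of the articles): a Hamiltonian for three identical sites on a ring commutes with the cyclic shift, and its spectrum splits into a non-degenerate level for the trivial irrep plus a doubly degenerate level for the pair of conjugate irreps (which combine into one 2d real irrep).

```python
import numpy as np

# Toy Hamiltonian for 3 identical sites on a ring (hypothetical example):
# H is circulant, so it commutes with the cyclic permutation of the sites.
H = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  2.0, -1.0],
              [-1.0, -1.0,  2.0]])
P = np.array([[0., 0., 1.],
              [1., 0., 0.],
              [0., 1., 0.]])      # cyclic shift 1 -> 2 -> 3 -> 1

assert np.allclose(H @ P, P @ H)  # the symmetry of the Hamiltonian

evals = np.sort(np.linalg.eigvalsh(H))
# One non-degenerate level (trivial irrep of the cyclic group) ...
assert np.isclose(evals[0], 0.0)
# ... and one doubly degenerate level (the conjugate pair of irreps)
assert np.isclose(evals[1], evals[2])
```

The degeneracy is forced by the symmetry alone: any circulant H would show the same pattern, whatever the coupling strengths.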
From there (in the Symmetry in quantum mechanics article) one should probably go to symmetry breaking -- which energy levels perturbations can and can't split; also perhaps spatial symmetries of time-dependent perturbations and how these connect with superselection rules. One might also present the group theory of how spin states get split by a magnetic field, and spontaneous symmetry breaking, eg in the form of the Jahn-Teller effect.
My understanding is that it was Wigner's papers of 1926 and 1927 that really spelt this out and put it on the map for the early quantum theorists; even though arguably it may all be implicit in Noether's theorem. Google found what looks like an interesting conference paper by Arianna Borrelli and Bretislav Friedrich on the historical development, presented in July 2008 (Abstract, Conference programme), but sadly I am not sure whether the full text is available -- the nearest I've found so far is this. But something like that might be useful to anchor a few lines on the development of the historical appreciation and understanding. Jheald (talk) 17:06, 30 June 2013 (UTC)

Yes - it only scratches the surface, a hell of a lot more to do yet (more replies will follow). M∧Ŝc2ħεИτlk 08:47, 1 July 2013 (UTC)

### Spinors

Let me additionally attempt to clarify what I was trying to allude to, regarding spinors. Here's my present understanding of how everything fits together, to the extent that I can currently connect the dots:

It's a basic well-known fact that elements of a Clifford algebra (including rotors) can be faithfully represented by sufficiently large matrices (even if this isn't usually the best way to do Clifford algebra calculations in a computer, it is one way that such calculations can be done), with matrix multiplication of the matrices corresponding to Clifford multiplication of the Clifford algebra elements.
So the rotations can be represented by rotors, which can in turn be represented by spin matrices. And so the spin matrices form a representation (in the group representation sense) of the group of the rotations.
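To make the representation property concrete, here is a minimal numerical sketch (the Pauli-matrix image of a z-axis rotor is my illustrative choice, not something fixed by the discussion): multiplying the spin matrices composes the rotations.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)   # Pauli matrix sigma_3

def spin_matrix(theta):
    """Matrix image of the rotor cos(theta/2) - sin(theta/2) e1 e2,
    i.e. a rotation by theta about the z axis."""
    return np.cos(theta/2) * I2 - 1j * np.sin(theta/2) * s3

# Matrix multiplication of spin matrices = composition of the rotations:
a, b = 0.7, 1.1
assert np.allclose(spin_matrix(a) @ spin_matrix(b), spin_matrix(a + b))
```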
Now it's often useful to think of matrices as encoding transformations of a set of unit basis vectors, each basis vector mapping to one of the columns of the matrix in the usual way. So we're familiar with Euler matrices mapping unit vectors ${\displaystyle \scriptstyle {{\hat {x}},{\hat {y}},{\hat {z}}}}$ to a triad of new orthogonal vectors on the surface of a sphere. This gives us a space that we can think of the matrix as transforming.
The question is what is the equivalent of this picture when instead of Euler rotation matrices the matrices in question are spin matrices corresponding to Clifford algebra rotors. Again, we can think of those matrices as mapping a set of unit basis vectors to a new set of orthogonal column vectors. But these column vectors are no longer the familiar spatial vectors ${\displaystyle \scriptstyle {{\hat {x}},{\hat {y}},{\hat {z}}}}$. Instead they represent a basis (and, under transformation, a space) of some other kind of object. It's a new kind of object, with properties to be determined, so we give it a new name: spinor.
What do these spinors represent? It's not immediately clear. Geometric algebra, however, gives an interpretation of the spin matrices, mapping them to elements of the Clifford algebra. Furthermore, column vectors are isomorphic to matrices with all but a single column set to zero, which in turn can be mapped to a subspace of the Clifford algebra -- with an idempotent projecting matrix like φ = ${\displaystyle \left({\begin{smallmatrix}1&0&...&0\\0&0&&\\\vdots &&\ddots &0\\0&...&0&0\end{smallmatrix}}\right)}$, that zeroes all columns but one, corresponding to an idempotent projector in the Clifford algebra, something like ${\displaystyle \scriptstyle {(1/{\sqrt {2}})\,\left(1+\mathbf {e} _{1}\right)}}$, where e12 = +1.
So the space of spinors corresponds to a subspace of the elements of the Clifford algebra.
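The projector picture is easy to check directly in the matrix representation; a small sketch (the 4×4 size is an arbitrary illustrative choice):

```python
import numpy as np

n = 4
phi = np.zeros((n, n))       # the projecting matrix phi described above:
phi[0, 0] = 1.0              # zeroes every column but the first

assert np.allclose(phi @ phi, phi)   # idempotent: phi^2 = phi

M = np.arange(float(n * n)).reshape(n, n)
col = M @ phi                # right-multiplication keeps only M's first column
assert np.allclose(col[:, 0], M[:, 0])
assert np.allclose(col[:, 1:], 0.0)
```

So "full matrices times phi" is exactly the space of single-column matrices, i.e. the column vectors.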
We can take this one step further, by noting that, as a group representation, the representation given by the spin matrices may be reducible -- we may be able to find matrices P to map the set of spin matrices to P D P-1, where D is block diagonal. This can be related to breaking the Clifford subspace into smaller subspaces A ⊕ B ⊕ C ..., each of which transform independently under (one-sided) multiplication by a rotor. It's the elements of these spaces, spaces that map to themselves under irreducible representations of the spin-rotation group, that are what correspond to what are traditionally called spinors.
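As an illustration of what "reducible" means here (a standard textbook example, not specific to spin matrices): the 2×2 real rotation matrices form a representation of the plane rotation group that is reducible over the complex numbers, and a single change-of-basis matrix P block-diagonalises every one of them at once.

```python
import numpy as np

def R(t):
    """2d rotation matrix: a representation, reducible over C."""
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

# One P (columns = common eigenvectors) works for every angle at once:
P = np.array([[1, 1], [-1j, 1j]]) / np.sqrt(2)
Pinv = np.linalg.inv(P)

for t in (0.3, 1.2, 2.5):
    D = Pinv @ R(t) @ P
    # D is diagonal: the rep splits into the 1d irreps e^{it} and e^{-it}
    assert np.allclose(D, np.diag([np.exp(1j*t), np.exp(-1j*t)]))
```

The key point is that the same P works simultaneously for the whole set of matrices, which is what "finding matrices P to map the set to P D P⁻¹" means above.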
Previously above, we've already started to see (and been trying to effectively express in words) how linear equations that reflect certain symmetries have sets of eigen-solutions that each closely relate to one or other of the irreps of the group of that symmetry.
The Dirac equation (considered in the abstract, rather than broken into components) is such a linear equation, the form of which transforms in an invariant way under rotations+boosts corresponding to particular symmetries (here, operators representing one-sided spin-rotations -- eg left multiplication by rotors).
We therefore expect its solutions similarly to relate closely to the irreps of the group of that symmetry -- i.e. to relate closely to the relevant Clifford algebra sub-spaces we have just identified, even without having had to pre-define any column structure or matrix representation "by hand".
And similarly for other equations that reflect other symmetries that can be represented by Clifford algebra rotors.
So this is why the study of the irreducible representations is important, and the corresponding spinors.
...

There are a few bits I'm still rather fuzzy on:

• I'm not nearly as sharp as I should be on the precise connection between irreducible representations and the eigensolutions of symmetry-compliant linear equations. (Which is why a nice article on irreps to make it crystal clear again is so high on my list of personal Christmas wiki-wishes. IIRC, I got this distilled down to a few lines of bulletproof boilerplate text for my undergraduate Physics finals. Unfortunately I can't now quite reconstruct what those lines were).
• I don't feel I completely grok the meaning of the Clifford algebra subspaces that correspond to irrep spinors, and in particular the idempotent factors like ${\displaystyle \scriptstyle {(1/{\sqrt {2}})\,\left(1+\mathbf {e} _{1}\right)}}$ that give every element a rather curious superposition of even grade and odd grade (or similar, for other projectors). Does one just factor out these factors and parametrise the factor spaces that are left (cf eg Penrose's 2-component spinors)? Or is there some important significance coded in that superposition that I can't quite stretch far enough to touch with my fingertips?
• How does it all work out in Euclidean low dimensions -- eg Euclidean 2D and Euclidean 3D? There is an article Spinors in three dimensions, but it would be nice to create a version that was much more Geometric Algebra driven (even if it had to remain a substantially private fork). The Spinors#Examples section of the main Spinors article is embarrassingly wrong, and also doesn't really convey what's going on and why; which is my fault, because I contributed it at a time when I now know I didn't understand what spinors were. The main spinors article should review 3d and either 2d or Minkowski 4d spinors, but after the "explicit constructions" section; and from a position of actually knowing what spinors are. But I don't (yet) (quite) feel I know what spinors are myself...

Anyhow, that's the block of mental stuff I had in mind, that I was trying to allude to when I dropped a mention of the word 'spinor' in a few sections above.

I hope there's a pathway there that starts to make at least some sort of sense. Of course, I may have substantial things wrong. Any and all corrections, clarifications, sortings-out, feedback, or general thoughts would be very welcome. All best, Jheald (talk) 19:59, 30 June 2013 (UTC)

I've been following the ongoing discussion about irreducible representations and CA/GA. I admit, knowing much more about the latter than the former, I may be way out of my depth trying to make the connection, but I feel like it must be very important for translating the extant knowledge about irreps to information about CA/GA rotors, or the transformations they underlie. I'm afraid I'm not sure I can get at answers to your main questions, but I do have some comments regarding some of your remarks.
First, you say that rotations can be represented by rotors, which in turn can be represented by spin matrices. Do you mean that a rotor ${\displaystyle \psi }$ could be equal to a linear combination like ${\displaystyle \psi =(1+\sigma _{1}\sigma _{2})/{\sqrt {2}}}$? (Where if we interpret this in terms of matrices, then 1 means the identity matrix.) I ask because the usual GA viewpoint is that the objects like ${\displaystyle \sigma _{1}}$ are vectors -- or perhaps, they could be taken to represent a reflection the same way a rotor represents a rotation. We assume some basic form of the linear map, and by doing so, we reduce the amount of information we need to describe the transformation. This is what's done with rotations: we know a rotation has the basic form ${\displaystyle {\underline {R}}(a)=\psi a\psi ^{-1}}$. We could do the same with reflections, knowing that any reflection is described as ${\displaystyle {\underline {N}}(a)=-nan^{-1}}$. Perhaps rotations and reflections could be unified by taking all linear maps of the form ${\displaystyle {\underline {T}}(a)=\ldots (-n_{2}(-n_{1}an_{1}^{-1})n_{2}^{-1})\ldots }$.
At any rate, I have this sneaking suspicion that irreps are somehow tied to this intrinsic information that a rotation always has this form.
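Those sandwich forms are easy to verify numerically; a sketch using the Pauli matrices as the matrix images of the basis vectors (my illustrative choice of representation):

```python
import numpy as np

# Pauli matrices as matrix images of the GA basis vectors sigma_1..3
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def vec(a):
    """Matrix image of the spatial vector a1 s1 + a2 s2 + a3 s3."""
    return sum(ai * si for ai, si in zip(a, s))

# Reflection N(a) = -n a n^{-1}, with unit vector n = sigma_1:
n = s[0]
a = vec([2.0, 3.0, 4.0])
refl = -n @ a @ np.linalg.inv(n)

# It flips exactly the component of a along n:
assert np.allclose(refl, vec([-2.0, 3.0, 4.0]))
```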
You ask what the equivalent picture to unit vectors being mapped to new vectors is for matrices representing CA elements instead. To be honest, I'm lost as you go through this argument of what it is we call a spinor. To me, we already have a spinor like I described in ${\displaystyle \psi }$ and if we want to know the action of the rotation on a basis vector, then we have something like ${\displaystyle \psi \sigma _{1}\psi ^{-1}}$. These can be written in terms of actual Pauli matrices, no problem. I think, for this reason, I'm very lost as you go on to talk about idempotents.
I agree with your characterization of how to break a reducible rep into subspaces, one subspace of which corresponds to that of spinors.
Since, again, I am very very unfamiliar with irreducible reps, I do admit I have no idea what you mean by "Clifford algebra subspaces that correspond to irreducible reps". Does such a statement even have meaning? CA/GA is a way of talking about certain objects and things without a representation at all, is it not? If so, then can anything be said to correspond to a certain representation, reducible or not?
Hopefully I've not babbled incoherently in front of people who know way more about all this representation theory stuff than I do. I apologize if I have and, in doing so, have wasted anybody's time. Muphrid15 (talk) 23:09, 30 June 2013 (UTC)
Let me try and pick some of that up.
Yes: in GA ${\displaystyle \scriptstyle {\sigma _{1}}}$ and ${\displaystyle \scriptstyle {\sigma _{2}}}$ correspond to spatial vectors -- unit directions in 3d space. But if you want to represent them in a computer, well there are various paths you can go down, but one of the most straightforward is to use the Pauli matrices to represent ${\displaystyle \scriptstyle {\sigma _{1}}}$, ${\displaystyle \scriptstyle {\sigma _{2}}}$, and ${\displaystyle \scriptstyle {\sigma _{3}}}$. It's a standard result that every group has a matrix representation, with the group composition operation achieved by matrix multiplication; this happens to be a very common matrix representation for Cl3.
So the composed rotation R2R1 can be represented using the geometric product of rotors ψ2ψ1 which in turn can be faithfully represented by the matrices M2M1, the composed rotation mapping a spatial vector a to ${\displaystyle \scriptstyle {\psi _{2}\psi _{1}a\psi _{1}^{-1}\psi _{2}^{-1}}}$, which in turn can be represented in a computer with the matrix product ${\displaystyle \scriptstyle {A^{\prime }=M_{2}M_{1}AM_{1}^{-1}M_{2}^{-1}}}$.
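A concrete check of that chain (rotor → spin matrix → sandwich product), again with the Pauli matrices as the representation (an illustrative choice):

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

# Clifford relations: s_i s_j + s_j s_i = 2 delta_ij I
for i in range(3):
    for j in range(3):
        assert np.allclose(s[i] @ s[j] + s[j] @ s[i],
                           2 * (i == j) * np.eye(2))

def vec(a):
    """Matrix image of the spatial vector a1 s1 + a2 s2 + a3 s3."""
    return sum(ai * si for ai, si in zip(a, s))

def rotor_z(t):
    """Matrix image of the rotor for a rotation by t about the z axis."""
    return np.cos(t/2) * np.eye(2) - 1j * np.sin(t/2) * s[2]

# psi a psi^-1 is the rotated vector:
t = 0.8
A = rotor_z(t) @ vec([1.0, 0.0, 0.0]) @ np.linalg.inv(rotor_z(t))
assert np.allclose(A, vec([np.cos(t), np.sin(t), 0.0]))
```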
A really un-needed difficulty is that Hestenes and followers have taken to calling things like ψ1 "spinors". They may have good reasons for this, but it is important to realise that this is not what the word "spinor" has traditionally referred to. Traditionally, a "spinor" has meant a column of complex numbers operated on by a spin matrix, ie by something like M1 or M2 above. This fits with the classic view of a matrix as mapping a column of numbers into another column of numbers, with the columns of the matrix being the images of the columnar basis vectors. So starting from that perspective, there is interest in knowing what objects such columns of complex numbers represent. It's not something people find easy to answer.
So these objects which can look like columns of complex numbers are what I'm interested in understanding. They do not correspond to the ordinary spatial vectors of 3d space. But we can treat them as a linear basis defining some new, different linear space, which is traditionally what is called spinor space.
GA finds a way to answer what these things might be, by identifying the columns of complex numbers as parts of spin matrices (with the other columns zero), which can then be identified with objects in the Geometric Algebra. But note that since a typical arbitrary GA object maps in the GA matrix representation to a full matrix, rather than just a single column, that implies that what these objects correspond to is not any old arbitrary GA object, or even any old arbitrary GA rotor, but something rather more specific. In fact they map to objects in a closed sub-algebra of the original GA, e.g. the sub-algebra you get if you apply something like the idempotent projector ${\displaystyle \scriptstyle {(1/{\sqrt {2}})\,\left(1+\mathbf {e} _{1}\right)}}$.
Furthermore, if you insist that the spinor space corresponds to the space of possible images of an original GA element under the action of an irreducible representation of the rotation group, the matrix block-diagonalisation transformation corresponds to breaking up that sub-algebra into the direct sum of even smaller sub-sub-algebras, obtained by multiplying all the elements of the subalgebra by further idempotents.
Typically these idempotents sum to one, so if we write them eg as 1 = E1 + E2, and the elements of the original sub-algebra as {SA}, then in GA terms what we're doing is a further decomposition of the sub-algebra,
${\displaystyle \scriptstyle {SA\,=\,(E_{1}+E_{2})\{SA\}(E_{1}^{T}+E_{2}^{T})\,=\,E_{1}\{SA\}E_{1}^{T}+E_{2}\{SA\}E_{2}^{T}}}$
so the previous sub-algebra has been decomposed into two independent sub-sub-algebras; and any rotation rotor can be similarly split into separate parts which act on each of the sub-sub-algebras independently.
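In the matrix picture, the decomposition above is just the familiar splitting by complementary projectors; a sketch, with the caveat that the cross terms E1{SA}E2T vanish only because the sub-algebra elements are block-diagonal in this basis:

```python
import numpy as np

# Complementary idempotents summing to 1 (matrix picture of E1, E2)
E1 = np.diag([1.0, 1.0, 0.0, 0.0])
E2 = np.diag([0.0, 0.0, 1.0, 1.0])
assert np.allclose(E1 + E2, np.eye(4))
assert np.allclose(E1 @ E1, E1) and np.allclose(E2 @ E2, E2)

rng = np.random.default_rng(0)
def block_diag_elem():
    """A block-diagonal element of the sub-algebra {SA}."""
    M = np.zeros((4, 4))
    M[:2, :2] = rng.standard_normal((2, 2))
    M[2:, 2:] = rng.standard_normal((2, 2))
    return M

A, B = block_diag_elem(), block_diag_elem()

# The decomposition: A = E1 A E1^T + E2 A E2^T, cross terms vanishing
assert np.allclose(A, E1 @ A @ E1.T + E2 @ A @ E2.T)

# ...and the two pieces multiply independently of one another:
assert np.allclose(E1 @ (A @ B) @ E1.T,
                   (E1 @ A @ E1.T) @ (E1 @ B @ E1.T))
```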
The question then is how far can you take this? And what do those sub-sub-algebras you end up with represent?
So that's how (I think) some of this translates into GA terms, without necessarily any particular co-ordinate representation.
Hope that may clarify a little bit. Jheald (talk) 00:44, 1 July 2013 (UTC)
Sure, I've pondered implementing a GA on a computer using Pauli or Dirac matrices in the past (and I've been curious how one can, in general, construct a set of such matrices for arbitrary signatures and dimensions). Let me ask a dumb question, though. You say it's a standard result that every group has a matrix representation, and so on and so forth. What is the group that we're representing when we choose to use actual matrices to talk about our basis vectors? Are we talking about the orthogonal group, or perhaps the special orthogonal group?
You say that the spinors (as column vectors) must correspond not just to rotors but to something more specific than rotors. I don't follow this point. I know rotors themselves are closed in 3d under the geometric product; in spacetime I suspect they're not, but I'm not as familiar. You say that the kind of object that corresponds to a spinor must be more specific than just a rotor; when you talk about idempotent projectors, this makes me think we're in fact talking about something more general.
It'll probably take me a little while more to parse all this, but basically, I see your goal as trying to take the closed sub-algebra of rotors (as nested within the higher GA) and break it down into invariant sub-sub-algebras until it can't be broken down any further? And then, you wish to identify the significance of these sub-sub-algebras and what transformations they may underlie? And you think that by identifying these fundamental subalgebras (which can be said to have no smaller closed subalgebra inside of them), we can construct a representation for any group that is guaranteed to be irreducible? Is that your ultimate intention? Muphrid15 (talk) 03:26, 1 July 2013 (UTC)
Oh dear, I seem to be colonising Maschen's talk page. I hope he doesn't mind too much.
Groups: the group of all rotations in n Euclidean dimensions is SO(n). This is a subgroup of the group of reflections and rotations, which is O(n). The group of corresponding GA rotors, or equivalently of their matching spin matrices, is Spin(n). There is a 2-to-1 relationship between the members of Spin(n) and SO(n), because both the rotor R and the rotor -R give the same rotation. This 2-to-1 relationship is called a double covering (the other requirement for a covering being that a small change in the rotation is matched by a small change in the rotor(s), so the mapping is continuous). If we allow reflectors as well as rotors, we get a group which stands in the same relation to Spin(n) as O(n) is to SO(n), and therefore by analogy is called Pin(n).
So: the group of rotors, that we're representing with the spin matrices is Spin(n). If we also allow reflectors, then the group is Pin(n).
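That 2-to-1 relationship can be exhibited concretely in the Pauli-matrix picture of Spin(3) (an illustrative choice of representation):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

t = 1.3
U = np.cos(t/2) * np.eye(2) - 1j * np.sin(t/2) * s3   # a rotor's spin matrix

# R and -R give the same rotation on a vector's matrix image...
rot = lambda W: W @ s1 @ np.linalg.inv(W)
assert np.allclose(rot(U), rot(-U))

# ...but are distinct elements of Spin(3): the 2-to-1 double cover
assert not np.allclose(U, -U)
```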
Matrix representations of GAs: Ian Bell has a section on matrix representations on this page, about a third of the way down. We also have a page giving an explicit recipe for GAs of the form Cl1,n: Higher-dimensional gamma matrices.
Closure: the rotors are closed. With boosts I suppose you have the issue of the limit of a sequence of boosts potentially being a boost to the lightcone, which can't be inverted. So maybe with boosts that means we've got a semigroup rather than a group; but it's not something I've really thought about.
More specific/more general. I think if you apply a projector to a set of rotors, you end up with a set of rotors in the projected space. But possibly what you actually get (if I was thinking more carefully) is a set of GA elements that are isomorphic (1-to-1) to the set of rotors in the projected space. But what I was principally trying to say, was that (as I understand it) what traditionally are called spinors correspond to a sub-subset of the elements of a GA; a smaller set than the Hestenes definition "any even multivector that maps a GA vector to a GA vector" (Some more quotes).
Overall big picture: What we're doing is taking the overall Geometric Algebra, and breaking it into bits which stay distinct, and so transform independently (do not mix together again), when we apply rotors that correspond to specific symmetries that we're interested in. This is the GA analogue of the transformation that maps the matrix representations into diagonal blocks (irreps).
The reason that we're interested in these is that eigenfunctions of a linear equation that has such a physical symmetry should transform in a 1-to-1 analogue to the way that an element of one of these sub-algebras transforms. So by identifying the different possible sub-algebras, one identifies the different possible types of eigensolutions. So for example, if we're looking at the Dirac equation, and we have a spin 1/2 eigensolution, we should be able to relate the way that transforms under spatial transformations to one of these particular sub-algebras. Similarly, if we have a different equation, with a different symmetry, we should be able in a similar way to identify the sub-algebras that can correspond to the states of solutions, and use the states of the sub-algebras to label in a 1:1 way the possible states of the equation solution, as they get transformed by the symmetry operator. This is something Hestenes famously did for the Dirac equation, but the principle is general. Jheald (talk) 12:26, 1 July 2013 (UTC)

These discussions are always interesting but given the length I will have to read the posts in detail and respond later - right now I'm a bit torn between several articles on WP, and clearing up mistakes/inline citations is important. More replies to follow. Thanks, M∧Ŝc2ħεИτlk 08:47, 1 July 2013 (UTC)

Sorry for flooding your talk page. Hope you don't object too much! Lots on your plate, I quite understand. Jheald (talk) 13:19, 1 July 2013 (UTC)
No need for anyone to apologize - although shorter posts would be easier for all of us to read, thanks! ^_^
A lot of the above is familiar, especially the idea of using Pauli/Dirac matrices as basis vectors, etc. If there is anything everyone can agree on about (classical) spinors, it's certainly that they cannot be visualized directly; instead, indirect "topological analogies", such as the infamous "book and belt" (cf. Penrose in The Road to Reality) or "rotating palm", offer some insight.
As for spinors in GA - one good read is Hestenes's New Foundations for Classical Mechanics (p.51), where he defines a "spinor" as something that looks like a complex number:
${\displaystyle z=\sigma _{1}\mathbf {x} =x_{1}+\mathbf {i} x_{2}}$
where
${\displaystyle \mathbf {x} =(\mathbf {x} \cdot \sigma _{1})\sigma _{1}+(\mathbf {x} \cdot \sigma _{2})\sigma _{2}=x_{1}\sigma _{1}+x_{2}\sigma _{2}}$ is a 2d vector in the ${\displaystyle \sigma _{1},\sigma _{2}}$ basis,
${\displaystyle \mathbf {i} =\sigma _{1}\sigma _{2}=-\sigma _{2}\sigma _{1}}$ is a unit bivector.
This seems to be what you two are getting at. Apparently it is supposed to coincide with the notion of a spinor in QM but I have yet to properly understand why... M∧Ŝc2ħεИτlk 21:34, 1 July 2013 (UTC)
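Hestenes's 2d construction is easy to check in the Pauli-matrix image (my choice of representation, just for illustration): the objects σ1x really do multiply exactly like the complex numbers x1 + i x2.

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)

def z(x1, x2):
    """Matrix image of sigma_1 x = x1 + x2 (sigma_1 sigma_2),
    where x = x1 sigma_1 + x2 sigma_2."""
    return s1 @ (x1 * s1 + x2 * s2)

# The unit bivector i = sigma_1 sigma_2 squares to -1...
assert np.allclose((s1 @ s2) @ (s1 @ s2), -np.eye(2))

# ...and these objects multiply like the complex numbers x1 + i x2:
w = (1 + 2j) * (3 - 1j)                   # = 5 + 5j
assert np.allclose(z(1.0, 2.0) @ z(3.0, -1.0), z(w.real, w.imag))
```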
Well, spinors as rotors, I should have said. I didn't mean to say topological analogies would help either (they don't really). M∧Ŝc2ħεИτlk 19:08, 2 July 2013 (UTC)
I suspect they possibly might be relevant, if I understood things at a deep enough level -- related to SO(n) not being simply connected while Spin(n) is. But I've always wanted to see a bit more concretely exactly what crucial property of the system can be preserved in a model using conjugation with rotors/spin-matrices for the rotations, that isn't modelled (or isn't preserved) in a model just using Euler matrices. Jheald (talk) 19:33, 2 July 2013 (UTC)
No -- there Hestenes is just using spinor as a synonym for rotor, in line with his more general re-definition of the term as "an even multivector that maps a GA vector to a GA vector" on conjugation. This is not the traditional meaning of spinor in QM, which is what I was getting at.
I'm very wary of analogies like the book and belt. If these really are well-described by spinors, then give an explicit mechanical mapping. Moreover, spinors describe solutions of equations that are invariant under symmetry transformations. But there seems to be no invariance in the topological party games. I'm not saying there isn't a deep connection; but IMO if an example like that is going to be useful then the deep connection needs to be made mathematically explicit, and explained.
A properly successful interpretation of spinors, IMO, is the isomorphism Hestenes sets up between traditional Dirac spinors and a subset of elements of the Geometric Algebra, so the different spinor states can be directly interpreted in terms of different physical rotation-boosts of a reference object. The machinery underpinning that is what I was trying to sketch out above. With an identification to appropriate elements of the GA, arguably then you can visualise spinors directly.
But for my money, Hestenes then makes things very confusing by redefining the word "spinor" away from something corresponding to the traditional mathematical object. Jheald (talk) 22:12, 1 July 2013 (UTC)
That's not to say there may not be a rationale for Hestenes's redefinition. As I understand it, a spinor is essentially an element of a space that consists of all the possible images of the spinor under the action of the group -- so a spinor is something the group can act on. (The group being a spin group). Now if we think of the elements of the group as complex matrices, something for them to act on is a column of complex numbers. But if we think of the elements of the group as rotors, something they can act on is another rotor. And so Hestenes's generalisation does sort of make sense. But traditionally, the thing acting on the spinor is either irreducible or close to it; because really what we're interested in is the sort of spaces that express the scope of an irreducible representation. But the action on Hestenes's "spinors" is much further from being irreducible -- because Hestenes's spinors correspond to whole matrices, rather than column-vectors, in the matrix representation picture; and the action of the group on such a matrix can immediately be reduced into the action on each column, because the result of applying any element of the group to each column doesn't depend on the state of the other columns.
So: I can sort of see a possible justification for Hestenes's generalisation/redefinition; but (at least based on my current level of understanding) I still wish he wouldn't do it, and I would urge others not to follow him in it. Jheald (talk) 22:32, 1 July 2013 (UTC)
I think one reason for using the word "spinor" is that rotors are taken to be normalized, while spinors are not--it's the difference between using mere reversion in the definition of a linear transformation and an actual inverse.
I think something we need here is an idea of what "irreducible" means in the context of GA. I don't think we should get hung up over what is or is not reducible in terms of matrices, only what is intrinsic to the algebra itself. Something I was thinking about in context of this discussion, then, was ring theory and ideals. It strikes me that what we want out of something broken down into sub-sub-algebras is similar to some kind of minimal ideal. The analogy isn't complete, however: they usually talk about ideals in the context of a subring that can be acted upon by any element of the ring and stay in the subring, which isn't what we want here. We want a subring that, if I understand correctly, is acted upon by some member of another subring and stays in its own subring. So, while similar to an ideal, what we're talking about is nevertheless quite distinct. Inconveniently, I haven't yet found a name in the literature for such a concept, but I would bet money that it's out there. Muphrid15 (talk) 23:04, 1 July 2013 (UTC)
"I think something we need here is an idea of what "irreducible" means in the context of GA." That would help and be a good section for the irreps article when created. What operations in GA can be used for decomposing reps into irreps? M∧Ŝc2ħεИτlk 19:08, 2 July 2013 (UTC)
That's fairly easy, I think. The operation is left-multiplication by a pair of idempotents, like ${\displaystyle \scriptstyle {(1+\mathbf {e} _{1})/{\sqrt {2}}+(1-\mathbf {e} _{1})/{\sqrt {2}}}}$, if those idempotents commute (or anti-commute) with the set of rotors in question -- eg those two commute with ${\displaystyle \scriptstyle {\cos(\theta )+\mathbf {e} _{2}\mathbf {e} _{3}\sin(\theta )}}$, but not with ${\displaystyle \scriptstyle {\cos(\theta )+\mathbf {e} _{1}\mathbf {e} _{2}\sin(\theta )}}$. Correction: actually they do!
So the split into two projected sub-spaces is preserved under either of the rotations -- so for general rotations, Cl3 is reducible in this way. Jheald (talk) 19:44, 2 July 2013 (UTC)
Shall have to think about this. M∧Ŝc2ħεИτlk 20:23, 2 July 2013 (UTC)
Lounesto's book may be the best starting point in this area, and very much is full of rings and ideals etc. Page-refs I have from a previous discussion are 60 on Pauli spinors and how they can be mapped into the GA, 143 for Dirac spinors, but you may want to rewind back a few pages in both cases. Very recommended anyway as a book to have, and probably the place to go to turn my handwaving above into proper Mathematics. Jheald (talk) 23:33, 1 July 2013 (UTC)
Rings and ideals... No clue (and I'm too ignorant of abstract algebra; anything beyond vector spaces and group theory loses me). If you guys know what you're doing then great. M∧Ŝc2ħεИτlk 19:08, 2 July 2013 (UTC)
Just to chip in an OR observation that may not be of relevance: a spinor's action (at least in physics) presumably corresponds to use of the (complex conjugate) reverse in the sandwich product, never the inverse, whereas a rotor's action I've often seen defined as using the inverse rather than the reverse (thereby obviating the need to include normalization in the definition of the rotor). An interesting point is that a rotor cannot be normalized (such that the reverse is equal to the inverse) if the product of the rotor and its reverse is negative, and hence there is a nontrivial formal difference between the two sandwich products. This suggests that either you have to disqualify a whole pile of products-of-an-even-number-of-vectors as "rotors", or you have a non-trivial algebraic distinction between the two sandwich products. This seems to make spinors and rotors fundamentally different at an algebraic level, with either definition of the rotor action, unless you apply the restriction to both rotors and spinors. — Quondum 12:50, 2 July 2013 (UTC)
By "spinor" are you meaning a spinor per Hestenes's use of the word, or a spinor per the traditional understanding of the word? Jheald (talk) 13:39, 2 July 2013 (UTC)
Hestenes's use? M∧Ŝc2ħεИτlk 19:08, 2 July 2013 (UTC)
Maschen's probably correct here, but I can't answer for sure. I wasn't trying to clarify anything about spinors, but rather that rotors seem to differ from spinors, however you define the latter, by virtue of the operations they can partake in. — Quondum 02:52, 3 July 2013 (UTC)
That's what I thought. The thing is that what Hestenes uses the word "spinor" to mean is something very straightforward, easy to understand, pretty trivial to grasp, no unexploded bombs. But it's a different concept to what the word "spinor" has traditionally meant. (Though there may (sometimes) be relationships and isomorphisms between the two). It's the latter, traditional concept that is the challenge to understand, and that I'd like to understand better; I think GA can significantly help; but what IMO doesn't help is taking the name always used for that concept away, and using the name for something else. Jheald (talk) 19:53, 2 July 2013 (UTC)
Yes, they are different things and it would help if people didn't change the terminology.
The classical definition somehow does seem more interesting because it is difficult to interpret... M∧Ŝc2ħεИτlk 20:23, 2 July 2013 (UTC)

Unless I've missed someone mentioning it above, one thing which hasn't been raised about traditional spinors is their extension to "multi-index spinors" = "multi-spinors (?)" (not spin tensors according to this WP article, unfortunately a misleading name), i.e. spinors with an array of components, just like tensors, but which do not transform in the same way. Physical examples include the solutions to relativistic wave equations.

Does anyone know the equivalent in GA? Irreps related to tensorial operators and multi-index spinors would be a nice extension in the future (not necessarily now)... M∧Ŝc2ħεИτlk 20:47, 2 July 2013 (UTC)

## Svgs and function examples

Hiya. Thanks for the offer on the svgs on this page. I don't think it is going to be needed :), but I appreciated the thought very much. With regards to Function_of_several_real_variables#Examples, I thought they were good examples and well explained within the notation, context and level of this and other similar articles and helped to provide some insight into the definition. Lfahlberg (talk) 19:06, 28 June 2013 (UTC)

No problems, and thanks for your kind feedback also. ^_^ M∧Ŝc2ħεИτlk 21:04, 28 June 2013 (UTC)
Hiya again. I would like to fix two svgs, but I don't know how to download svg (I keep getting the pngs). Can you tell me how to do this or direct me to a page - I cannot find anything. Thanks so much. Lfahlberg (talk) 06:52, 8 August 2013 (UTC)
Sure. If you try downloading directly at the file page, for some reason only PNGs are possible.
To download SVGs from WP: click on the image in the article to get to the WP file page, then at the file page click on the image again to reach its original form (the picture should be on a blank background). There you should be able to right-click and save the SVG to a directory. With an SVG editor, you can then open the file and edit the picture.
For completeness (I know it's obvious): you can download SVG images, but you can't modify them unless you already have an SVG drawing program. The standard, more or less default free program for SVG is Inkscape, available for download from their website.
Be warned of a subtle irritation: the Wikimedia software can tweak images when they are uploaded (especially fonts, unless the text is converted to shapes). When downloading an already-uploaded picture, some elements (like font faces, line thicknesses, arrowheads...) may change or be reduced to what the program can handle.
Hope this helps, feel free to ask for anything. ^_^ M∧Ŝc2ħεИτlk 08:01, 8 August 2013 (UTC)
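Since the goal above is only to change text inside an SVG, here is a minimal sketch (my own illustration, with a hypothetical inline SVG; a downloaded file would be read with `ET.parse("figure.svg")` instead) of doing that programmatically with Python's standard library, which handles the SVG XML namespace correctly where a naive text edit might not:

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)  # keep the output free of ns0: prefixes

# Hypothetical minimal SVG with one text element.
svg = """<svg xmlns="http://www.w3.org/2000/svg" width="100" height="40">
  <text x="10" y="25">old label</text>
</svg>"""

root = ET.fromstring(svg)
for text_el in root.iter("{%s}text" % SVG_NS):
    if text_el.text == "old label":
        text_el.text = "new label"

result = ET.tostring(root, encoding="unicode")
print(result)
```

Note that real SVGs exported from drawing programs often nest text inside `<tspan>` elements, so the loop may need to iterate over those instead.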
Thanks so much for the complete and accurate directions, the quick reply and the offer of help! I thought that since I only wanted to change text on these images, I should be okay with a text editor, but I see this is not the case, so now I am downloading Inkscape. I will let you know how it goes :) Lfahlberg (