# Talk:Error function

## FUBAR!!! ERROR in the ERROR FUNCTION!

Please have someone competent recreate this page. Your error function table of numerical values is WRONG, which is both shockingly inexcusable and could wreak havoc if people actually use it. You can easily verify it is wrong by checking any standard handbook, e.g. the CRC Handbook of Chemistry and Physics, the CRC math handbook, or Lange's Handbook of Chemistry.

Please fix it, and please permanently bar whoever posted it from contributing to Wikipedia. I realize from what I read here that quality control is anathema, but PLEASE, people really might use this to make important decisions!

Andy Cutler 184.78.143.36 (talk) 05:40, 14 June 2010 (UTC)

- The above is possibly a moron's joke; anyway the values reported in the article are correct, as anybody can check. --pma 18:27, 14 June 2010 (UTC)

I concur that anyone can check this: any fool with Mathematica (such as myself) can check the table in a matter of seconds. The table is correct. -B. Yencho —Preceding unsigned comment added by 72.33.79.184 (talk) 19:13, 13 August 2010 (UTC)

I believe that in practice there are multiple definitions for erf. For example, I have seen it defined with a 1/sqrt(2) out front, instead of a 2. Which way is 'right' probably depends on what field you work in or what book/software you are using. That should probably be mentioned in the article, just so people don't naively try to plug things into the table. 128.119.91.13 (talk) 18:52, 28 October 2010 (UTC)
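The 1/sqrt(2) convention alluded to above is the one that relates erf to the standard normal CDF; the correspondence is easy to verify numerically. A sketch using Python's standard-library `math.erf` (the helper name `phi` is ours, purely illustrative):

```python
import math

def phi(x):
    """Standard normal CDF, written via erf with the 1/sqrt(2) scaling
    that some texts fold into their own 'error function' definition."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Relation between the two conventions: erf(x) = 2*phi(x*sqrt(2)) - 1
for x in [0.5, 1.0, 2.0]:
    lhs = math.erf(x)
    rhs = 2.0 * phi(x * math.sqrt(2.0)) - 1.0
    assert abs(lhs - rhs) < 1e-12
```

So a table of erf values is easily translated to the other convention, but the reader does have to know which one a given book uses.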

- I was surprised to see a table like that here. Those used to be valuable pieces of information in the past, but now they only take up space. And if you do so, you will need to be consistent and do the same for https://en.wikipedia.org/wiki/Logarithm, https://en.wikipedia.org/wiki/Gamma_function, and https://en.wikipedia.org/wiki/Logistic_function Anne van Rossum (talk) 11:51, 19 December 2013 (UTC)

- I agree that tabulated values are a waste of space these days. — Steven G. Johnson (talk) 15:09, 19 December 2013 (UTC)

## Bounded function/s?

Sorry, there is no Spanish page for this article, so I'm forced to ask here =) Are erf and erfc "bounded functions", in the sense that, for instance, -1 < erf < 1 and 0 < erfc < 2? Is this true? Why not mention it?

I'm reading a text here (Haykin's Communication Systems) that says that erfc is upper bounded by erfc(u) < exp(-u^2)/sqrt(pi*u) for large positive values of u. I don't see how this relates to the graphic, in which the maximum value is just 2 for big negative arguments and 0 for big positive ones.

Thanks very much, you all rock n' roll big time! Ugo O.

- Yes, they are bounded (for real values of x, at least), as are a lot of other things that could also be mentioned but weren't.

- For large values of u, erfc goes to zero, and the formula you refer to seems (I didn't check) to give an upper bound that shows how fast it goes to zero (which is very fast, as the second table in the article also shows). AlexFekken (talk) 10:51, 28 June 2012 (UTC)
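The bound Ugo quotes can be checked directly; note that in its usual form the denominator is u·sqrt(pi) rather than sqrt(pi·u). A sketch (Python; the sample points are arbitrary):

```python
import math

def erfc_bound(u):
    """Upper bound erfc(u) <= exp(-u**2) / (u * sqrt(pi)), valid for u > 0."""
    return math.exp(-u * u) / (u * math.sqrt(math.pi))

for u in [1.0, 2.0, 4.0, 8.0]:
    assert math.erfc(u) <= erfc_bound(u)
    # The bound becomes tight for large u: the ratio tends to 1.
    print(u, math.erfc(u) / erfc_bound(u))
```

This is exactly the "how fast does it go to zero" statement: erfc decays like a Gaussian divided by its argument, which is consistent with the graph going to 0 (and to 2 on the negative side).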

## Non-elementary?

Hello. I see the error function is said to be "non-elementary". What does this mean, exactly? I was under the impression that the division of functions into elementary and special functions, and other categories, was pretty arbitrary. Maybe someone can clarify this point. Is there a more precise category that erf falls into? I'm grasping at straws here -- maybe some group or other algebraic structure? Happy editing, Wile E. Heresiarch 04:09, 17 Feb 2004 (UTC)

- Maybe it's somewhat arbitrary, but it means you can't express it in terms of the usual functions studied in first-year calculus by using the usual arithmetic operations and composition and inversion of functions. Michael Hardy 22:31, 17 Feb 2004 (UTC)

- Why is this? Perhaps somebody could add a proof. 21:44, 30 Oct 2004 (UTC)

- I agree: it would be a good article topic. I'm not up on the details. Maybe there's already such an article here; I'll see if I can find it. Michael Hardy 21:06, 30 Oct 2004 (UTC)
- It's really not that elusive. Just see the Wikipedia article Elementary function.


It means it cannot be expressed as an elementary function. Think of an elementary function whose derivative is exp(-x^2): (hint) there isn't one, just as there isn't for sin(x^2). That is why we use series to approximate these functions. — Preceding unsigned comment added by 75.110.96.120 (talk) 01:06, 20 January 2014 (UTC)

## Asymptotic expansion

A useful asymptotic expansion of the complementary error function (and therefore also of the error function) for large *x* is

$$\operatorname{erfc}(x) \sim \frac{e^{-x^2}}{x\sqrt{\pi}} \sum_{n=0}^{\infty} (-1)^n \frac{(2n-1)!!}{(2x^2)^n}$$

where $(2n-1)!! = 1 \cdot 3 \cdot 5 \cdots (2n-1)$ denotes the double factorial. This series diverges for every finite *x*. However, in practice only the first few terms of this expansion are needed to obtain a good approximation of erfc(*x*), whereas the Taylor series given above converges very slowly.
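The "divergent but useful" behaviour can be seen numerically: truncating the standard expansion erfc(x) ~ e^(-x^2)/(x·sqrt(pi)) · Σ (-1)^n (2n-1)!!/(2x^2)^n after a few terms already gives several correct digits for moderate x. A sketch (Python; the term count is an arbitrary choice):

```python
import math

def erfc_asymptotic(x, terms=5):
    """Truncated asymptotic expansion of erfc(x) for large x:
    exp(-x^2)/(x*sqrt(pi)) * sum over n of (-1)^n (2n-1)!!/(2x^2)^n."""
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        term *= -(2 * n + 1) / (2 * x * x)  # next (-1)^n (2n-1)!!/(2x^2)^n factor
    return math.exp(-x * x) / (x * math.sqrt(math.pi)) * s

x = 3.0
approx = erfc_asymptotic(x)
exact = math.erfc(x)
print(abs(approx - exact) / exact)  # relative error, roughly 3e-4 with five terms
```

Adding more terms helps only up to about n ≈ x², after which the divergence takes over; that is the usual trade-off with asymptotic series.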

- How can it be useful, if it diverges ? Eregli bob (talk) 04:48, 18 June 2012 (UTC)

- I can't make sense of that at the moment, maybe I'm too sleepy... As far as I can tell, . Κσυπ *Cyp* 00:08, 4 Sep 2004 (UTC)

- What you're not making sense of is the word *where*. The notation *n*!! does not mean the factorial of the factorial of *n*, but rather it means what it says it means after the word "where". That's what "where" means in this kind of mathematical jargon. The notation is obnoxious because *n*!! *ought* to mean the factorial of the factorial of *n*, but this and similar notation seem to have some currency. Michael Hardy 02:07, 5 Sep 2004 (UTC)

- I'd say obnoxious is an understatement... I'm tempted just to declare everyone who uses the notation as "wrong", even if it's the rest of the planet... And the "notation" is completely redundant in this article, at least... Maybe 2+2=5, since "+2" could be a special case which stands for "*2+1"... Thanks for the clarification... Κσυπ *Cyp* 06:45, 6 Sep 2004 (UTC)

- I don't think that the fact that you find the notation confusing is a good reason to remove it from the article. Why? The notation is fairly standard, and Wikipedia should be embracing and educating on mathematical standards, not hiding them. Wouldn't it be so much better to just add a sentence showing what the notation means? Add a wikilink to the factorial article (which has a section on double factorials)? And by the way, the thing that you replaced the double factorial with is even less common. People who don't like the double factorial notation usually write it out as , so I think your edit is pretty biased. The idea that a notation can be "wrong" if everyone on the planet uses it?? -Lethe | Talk

- A problem with the notation is that it is meaningless for n=0. On the other hand, the standard (2n-1)!! notation is very well understood to be equal to one when n=0 (just as 0! is well understood to equal one). Using the former notation results in having to make the sum go from 1 to infinity rather than 0 to infinity. It then becomes necessary to add 1 and enclose the sum in square brackets. The entire equation is thus much simplified with !! notation. The current version of the equation that uses (2n)! could also be similarly simplified to use a sum from 0 to infinity. (A purist might object that for x=0 and n=0 you get zero to the zero power in the denominator, but note that Wolfram does not worry about that. Also note that the existing text above the equation qualifies this as being "for large x.")--RichardMathews 20:36, 17 October 2006 (UTC)
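For what it's worth, the two ways of writing these coefficients are easy to check against each other numerically, via the identity (2n-1)!! = (2n)!/(2^n n!). A sketch (Python; the helper name is ours):

```python
import math

def double_factorial_odd(n):
    """(2n-1)!! = 1*3*5*...*(2n-1), with the empty product (n=0) equal to 1."""
    result = 1
    for k in range(1, 2 * n, 2):
        result *= k
    return result

# Identity relating the double-factorial form to the (2n)! form:
for n in range(8):
    assert double_factorial_odd(n) == math.factorial(2 * n) // (2**n * math.factorial(n))
```

Note the n=0 case falls out naturally here: the empty product is 1, matching the convention (2·0-1)!! = (-1)!! = 1 that the comment above relies on.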

- Is it possible that the sign of this expression is incorrect? If I take only the *n*=0 term of the sum, I get - which has the wrong sign. Wilke 02:02, 2 Nov 2004 (UTC)

Here is a derivation of the asymptotic expansion of the error function (PDF, Proposition 2.10). 136.142.141.195 (talk) 00:09, 9 April 2008 (UTC)

## Complementary versus inverse

Question: What is the relationship between the "complementary error function" and the "inverse error function"?

Ohanian 06:22, 2005 Apr 5 (UTC)

Answer: I'm not aware of any relationship between the two. The complementary error function is simply a scaled version of the error function to find the area under the tail of the gaussian pdf above the value *x*, rather than integrate between 0 and *x*. The inverse error function is what most people would expect an inverse function to be: erf^{-1}( erf( *x* ) ) = *x*. Bencope 18:15, 21 June 2006 (UTC)
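To illustrate the distinction numerically: erfc is just 1 - erf, while the inverse is a true functional inverse and has to be computed, e.g. by bisection, since Python's standard library has `math.erf`/`math.erfc` but no erfinv. A sketch (the helper name `erfinv` is ours):

```python
import math

def erfinv(y, tol=1e-12):
    """Invert erf on (-1, 1) by bisection; erf is strictly increasing,
    so the bracket [-10, 10] always contains exactly one solution."""
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if math.erf(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = 0.7
assert abs(math.erfc(x) - (1.0 - math.erf(x))) < 1e-12  # complementary: erfc = 1 - erf
assert abs(erfinv(math.erf(x)) - x) < 1e-9              # inverse: erfinv(erf(x)) = x
```

So the two are unrelated in the sense asked about: one is an affine transformation of erf, the other undoes it.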

## Erf is "evidently" odd

Erf *is odd*. Why use the word "evidently"?

We sometimes say something is "evidently" true when we make this assertion by observation instead of through some proof. I don't believe it matters whether it is included or not. jgoldfar (talk) 17:53, 4 May 2011 (UTC)

- You probably want to say "self-evident", then. Eregli bob (talk) 04:39, 18 June 2012 (UTC)

## If the limits of the error function change?

What happens if the limits of the error function change from -α to x?

- -lethe ^{talk} 16:11, 24 December 2005 (UTC)

## Subscript?

Shouldn’t the lower limit of the integral be negative infinity instead of 0? The picture implies it.

Fvanris 14:06, 30 January 2006 (UTC)

- I don't think so. The picture implies that the value at zero is zero, so then the limit of integration has to be 0, not -infinity, no? Oleg Alexandrov (talk) 05:05, 31 January 2006 (UTC)

## Why?

Does anyone know why it is called the error function? Is there something about it that I'm missing? —The preceding unsigned comment was added by 70.113.95.143 (talk • contribs) 06:24, 4 December 2006 (UTC).

- I'd be very interested to learn the answer to this.

- I'm not sure and I can't reference it, but I think the reason is that it is often useful to represent the probability of error in communication systems. The most common way to model noise is by a Gaussian distribution so, in order to calculate the probability of an error, you have to integrate the Gaussian function, thus getting expressions in terms of the error function. Some examples here in Wikipedia are Phase-shift keying, Amplitude-shift keying and Quadrature amplitude modulation. Alessio Damato 10:53, 22 February 2007 (UTC)

- Well, I don't have a reference for you, but the article mentions that erf(\frac{a}{\sigma\sqrt{2}}) is the probability of a Gaussian-generated value being within the range of ±a of the mean, doesn't it? So for a given error a, it gives you the likelihood of that error. —Preceding unsigned comment added by 87.174.73.108 (talk) 23:47, 11 March 2008 (UTC)
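That probabilistic reading is easy to sanity-check by simulation (a Python sketch; the sample size, seed and parameter values are arbitrary choices):

```python
import math
import random

def prob_within(a, sigma, n=200_000, seed=1):
    """Monte Carlo estimate of P(|X - mu| < a) for X ~ N(mu, sigma^2);
    the mean drops out, so we sample around 0."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if abs(rng.gauss(0.0, sigma)) < a)
    return hits / n

a, sigma = 1.5, 1.0
estimate = prob_within(a, sigma)
exact = math.erf(a / (sigma * math.sqrt(2.0)))
print(estimate, exact)  # the two should agree to a couple of decimal places
```

This matches the "theory of errors" story: erf(a/(σ√2)) is the chance that a normally distributed measurement error stays within ±a.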

## Approximation?

Should approximations for the error function be mentioned? for example, one I saw on the web is

By the way, how do I use LaTeX on these pages? Hiiiiiiiiiiiiiiiiiiiii 02:16, 9 July 2007 (UTC)

- Just the way you see it done. Click on "edit this page" and you'll see it. Michael Hardy 03:50, 9 July 2007 (UTC)

- I should add that although "displayed" TeX looks very good on Wikipedia, inline TeX often gets misaligned or looks much bigger than the surrounding text, and is therefore often avoided. Michael Hardy 03:51, 9 July 2007 (UTC)

It seems to me such things could appropriately be included in the article. I'd write

instead of trying to fit that big fraction into a superscript. Michael Hardy 19:14, 9 July 2007 (UTC)

- The formula looks cool, but how good is it? Can we add an error term ( or so)? Obviously, it is valid only for large positive , but this is not said in the text. Wouldn't it be good to add a reference? —Preceding unsigned comment added by 80.121.27.224 (talk) 17:39, 3 November 2007 (UTC)

I dislike the fit mentioned until Hiii specifies the range of approximation and precision and/or indicates the source. (If you walk down the street and see a sandwich on the pavement, do not hurry to eat it. Bring it first to the lab for biochemical analysis.) dima (talk) 06:37, 14 July 2008 (UTC)

P.S. The approximation Hiiii wrote is poor, only 2 correct decimal digits. If you want a smooth approximation of erf for all positive values of the argument, I suggest . Copyleft 2008 by dima (talk) 12:40, 14 July 2008 (UTC)

I think the sign of the approximation currently given is wrong, as in it should be negative when x is negative (currently it is always non-negative). I'm not positive of this (otherwise I'd add an x/|x| to the approximation myself), so maybe someone who can verify it would like to make the change. Austin Parker (talk) 19:56, 14 January 2010 (UTC)

I am myself no mathematician, but the error function reminds me very much of the logistic function. Is there any way to approximate it with this? I mean something like erf(x) = 1/(1+exp(-x*const)). 178.82.219.114 (talk) 07:58, 17 June 2010 (UTC)

In the article we read: "Such a fit gives at least one correct decimal digit of function erf in vicinity of the real axis. Using a ≈ 0.140012, the largest error of the approximation is about 0.00012.[2]" But I ask... ONLY ONE CORRECT DECIMAL DIGIT and maximum error 0.00012? Strange! And in the quoted reference we learn that the formula provides an approximation correct to better than 4*10^-4 in relative precision. by Alexor —Preceding unsigned comment added by 151.76.71.189 (talk) 19:39, 29 March 2011 (UTC)

- The second sentence used to say 0.147, but someone incorrectly changed it (not realizing that it was intentionally different from the expression whose value is near 0.140012). I've rewritten both sentences to reflect what the reference says. Joule36e5 (talk) 09:55, 31 March 2011 (UTC)

There are well known approximations for erf that are better than any of these. I've added them to the article, with the reference to Abramowitz and Stegun. (They cite an even earlier source, but A&S has the advantage of being easily available online, as well as being a widely used reference.) I've left the other approximation for the moment, but I suggest removing it. The source for it no longer seems to be available (it was just a pdf on someone's home page), and it really isn't a very good approximation. As far as I can tell, the author simply didn't realize that better approximations (faster, more accurate) had been known for half a century. (Pkeastman (talk) 19:36, 6 September 2011 (UTC))
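For reference, one of the Abramowitz & Stegun approximations alluded to here is formula 7.1.26, with maximum absolute error about 1.5×10^-7. In Python it looks like the sketch below (the coefficients are the published ones; the function name is ours):

```python
import math

def erf_as7126(x):
    """Abramowitz & Stegun 7.1.26 rational approximation to erf(x) for x >= 0,
    max absolute error about 1.5e-7; extended to x < 0 by oddness."""
    sign = -1.0 if x < 0 else 1.0
    x = abs(x)
    t = 1.0 / (1.0 + 0.3275911 * x)
    # Horner evaluation of a1*t + a2*t^2 + a3*t^3 + a4*t^4 + a5*t^5
    poly = t * (0.254829592 + t * (-0.284496736
               + t * (1.421413741 + t * (-1.453152027 + t * 1.061405429))))
    return sign * (1.0 - poly * math.exp(-x * x))

worst = max(abs(erf_as7126(i / 100) - math.erf(i / 100)) for i in range(-400, 401))
print(worst)  # on the order of 1e-7
```

This supports the point above: a half-century-old rational fit is already orders of magnitude better than the one-parameter exponential fit being discussed.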

## On alternate forms of error function for improving article

I'm a student and I really had a problem with this function. In a lecture for the subject "Basics of telecommunications and data transfers" we used this function in some analysis. In my notes there is a similar function, also called the error function, but differing by a factor of 1/sqrt(pi), with integration bounds from minus infinity to x. I tried to carry out the analysis with that function and failed. When I turned to the Internet I found the form given on Wikipedia (I used that form and completed the analysis successfully), but nowhere my professor's form. I went to him and told him, but he refuses to change it. I realized that there are diverse forms of this function. Then I looked at some printed literature on the subject and did find my professor's definition and similar ones. So if others also find these variants, we could add them to the article in a separate section, to avoid confusing readers who may be in a similar situation.

Čikić Dragan 19:25, 2 October 2007 (UTC)

Revised by --Čikić Dragan (talk) 17:38, 9 June 2008 (UTC)

## ierfc: Integral of the error function complement

I've recently come across some references to a function ierfc in Crank (1975, The Mathematics of Diffusion). I couldn't find anything on ierfc in Wolfram/Mathematica, but I found a few odd references, including in Abramowitz and Stegun. Apparently ierfc is the integral of the erfc.

The easy formula for ierfc is
ierfc(x) = [exp(-x^{2})/sqrt(pi)] - x erfc(x)

(sorry, I don't know LaTeX).

I don't think there should be an additional article, but I suggest that (1) searches for ierfc be directed here, and (2) there be a brief mention of ierfc and its definition.

129.186.185.139 14:16, 17 October 2007 (UTC)Toby, ewing@iastate.edu

I agree with the above- ierfc isn't defined anywhere! -AJW
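The formula can be checked against direct numerical integration, since ierfc(x) equals the integral of erfc(t) from t = x to infinity. A sketch (Python; the cutoff and step count are arbitrary choices):

```python
import math

def ierfc(x):
    """First repeated integral of erfc, per the closed form quoted above:
    ierfc(x) = exp(-x^2)/sqrt(pi) - x*erfc(x)."""
    return math.exp(-x * x) / math.sqrt(math.pi) - x * math.erfc(x)

def ierfc_numeric(x, cutoff=12.0, steps=200_000):
    """Midpoint-rule integral of erfc(t) from t = x out to a cutoff
    beyond which erfc is negligible."""
    h = (cutoff - x) / steps
    return h * sum(math.erfc(x + (k + 0.5) * h) for k in range(steps))

for x in [0.0, 0.5, 1.0]:
    assert abs(ierfc(x) - ierfc_numeric(x)) < 1e-6
```

In particular ierfc(0) = 1/sqrt(pi), which is a handy sanity check when diffusion texts like Crank quote tabulated values.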

## confused

Maybe an obvious Q, but what does 't' represent in the definition of erf(x)? —Preceding unsigned comment added by 88.110.201.64 (talk) 03:55, 22 October 2007 (UTC)

- It is the variable of integration, sometimes called a dummy variable as it doesn't actually represent a quantity. See the Integral article for details. Blair - Speak to me 03:38, 30 October 2007 (UTC)

There's an article about that concept: free variables and bound variables. Michael Hardy 04:53, 30 October 2007 (UTC)

- Thanks for that link - it explains the concept much better than I could ever hope to! Blair - Speak to me 05:53, 30 October 2007 (UTC)

## Integral of a normal distribution

In the definition of the error function, perhaps it should be made clear that it applies to the integral of a normal distribution, i.e. a *normalised* Gaussian function. It is defined more clearly here (http://mathworld.wolfram.com/Erf.html). I'm not a mathematician, but I'm guessing the error function and all its approximations would not work if your integrand was not normalised.

Dieode 10:18, 29 October 2007 (UTC)

## Limits of the error function

Shouldn't "The error function at infinity is exactly 1" be stated as: the limit of the error function as its argument approaches infinity is 1? —Preceding unsigned comment added by 65.184.155.154 (talk) 19:36, 4 December 2007 (UTC)

## Relation to moment generating function for the Rayleigh distribution

Would it perhaps be relevant, in the applications section, to mention that the error function pops up in the moment generating function for the Rayleigh distribution, which is the distribution of the magnitude of a two-dimensional vector whose components are uncorrelated and each has a Gaussian distribution with identical variances? Relevant for, e.g., the statistical description of wind speed. -- Slaunger (talk) 11:55, 27 February 2008 (UTC)

## Part of C99 standard?

The article claims that erf/erfc exist in GNU libc, but aren't part of any standard. I've come across references that claimed erf/erfc are part of the C99 ISO standard. —Preceding unsigned comment added by 87.174.73.108 (talk) 23:43, 11 March 2008 (UTC)

- Yes, indeed they are functions in <math.h> in C99. Oli Filth
^{(talk)}00:17, 9 April 2008 (UTC)

## Inverse erfc?

No approximation is listed for the inverse of erfc; I suggest at least including erfcinv(z) = erfinv(1-z), though it seems a bit trivial. —Preceding unsigned comment added by 201.174.192.4 (talk) 18:02, 31 March 2008 (UTC)

## Representation through the Gamma function

How many arguments does the Gamma function have in the suggested representation of erf? In one place it appears with a single argument, in another with two arguments. In the definition, it appears with a single argument. How should this be corrected? dima (talk) 04:14, 14 July 2008 (UTC)

## Some errors in the Gamma function expression of the generalised error functions

A minor error is that, as raised in the section above, it mixes the gamma function, which takes one argument, with the incomplete gamma function, which takes two arguments. Although we can think of the ordinary gamma function as the incomplete gamma function with scale=1, this should be mentioned somehow.

Moreover, another obvious major problem is that this expression is not correct, in that, simply in the formula, and . It seems not true. But I have no way to find the correct formula. Please... 193.10.97.31 (talk) 16:16, 3 December 2008 (UTC)

I have edited the main article myself concerning these issues. The formula is correct according to numerical integration. But since the product is always equal to 1, it has been taken away. In addition, after the modification it is easier to see that the following result in the article follows, since and . 193.10.97.31 (talk) 22:17, 3 December 2008 (UTC)
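For reference, the standard relation here is erf(x) = γ(1/2, x²)/Γ(1/2), with γ the lower incomplete gamma function; this single-argument/two-argument distinction is exactly what the comments above are about. It can be checked with the usual series for γ(s, x) (a Python sketch; `lower_incomplete_gamma` is our illustrative name):

```python
import math

def lower_incomplete_gamma(s, x, terms=200):
    """Series gamma(s, x) = x^s * exp(-x) * sum_n x^n / (s(s+1)...(s+n)),
    truncated after a fixed number of terms."""
    total = 0.0
    term = 1.0 / s
    for n in range(terms):
        total += term
        term *= x / (s + n + 1)
    return x**s * math.exp(-x) * total

for x in [0.3, 1.0, 2.0]:
    lhs = math.erf(x)
    rhs = lower_incomplete_gamma(0.5, x * x) / math.gamma(0.5)  # Gamma(1/2) = sqrt(pi)
    assert abs(lhs - rhs) < 1e-10
```

Here `math.gamma` is the one-argument (complete) gamma function; the two-argument incomplete version is the one the erf representation actually needs.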

## C-like source for approximation

Regarding the comment by User:Lklundin (here, in the edit summary), yes, the implementation is my own, but it is rather trivial and can probably be found (in spirit) in any decent book on numerical analysis.

As requested, I will flesh it out a bit and fix some minor issues (*e.g.* it will not converge for large `z`) and try to find a reference for it.

Cheers, **pedrito** - **talk** - 20.02.2009 07:34

- Yes, OK. But is this C-code really useful? How is it known that it computes erf(z) to machine precision? Will its result vary with level of compiler optimization? Bear in mind that C99 actually defines erf(), so no one would actually want to use the code to "naively compute erf(z)". So what use does an original-research C-implementation of the series expansion have? Implementation issues regarding finite precision and finite speed of computation are not (properly) addressed. I would support the removal of the code - or replacement quoting a "decent book on numerical analysis" as mentioned above. Lklundin (talk) 10:23, 20 February 2009 (UTC)

- It's defined in C99 as a library function, which has to be implemented by somebody, somewhere... Just because it already exists somewhere, it doesn't mean we shouldn't document it -- this is an encyclopaedia, not a programming handbook.
- As for the machine precision, this is guaranteed by `an` decreasing monotonically as of some `i` (as of `i=0` for `z<=1`) and by `res + an == res`, *i.e.* the "correction" `an` no longer contributing to the result `res`.
- The implementation follows Abramowitz and Stegun 7.1.5 and is implemented as such by the GNU Scientific Library. The source code, which is in essence the same as the snippet I added, can be found here.
- Cheers, **pedrito** - **talk** - 20.02.2009 11:45
  - OK, so to sum it up: the unsourced code purports to be an original-research adaptation with no reliable source for its performance. I will remove it along with its repeated Taylor expansion. If code is really needed (and I don't see a need for a C implementation, since erf(x) is already defined in C), then reinsert it only with a proper source. Thanks. (Btw, I think your argument about machine precision ensured by `res + an == res` is wrong. The result, `res`, would need to be unchanged for _all_ remaining contributions a_n, a_{n+1}, a_{n+2}, ..., i.e. `res == res + (an + an1 + an2 + ...)`. But I digress: this article and its talk page concern reliable sources about erf(x), not what some Wikipedians think about erf(x).) Lklundin (talk) 22:33, 21 February 2009 (UTC)
  - If that C code had been present I would have read it and understood what this article is about. Instead I see a wall of mathematical symbols, I glaze over, and move on. "Thanks". 81.131.42.253 (talk) 00:25, 2 June 2013 (UTC)
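For readers curious what such a series implementation looks like, here is a sketch in Python (not the removed C code) of the A&S 7.1.5 Maclaurin series with the `res + an == res` stopping rule discussed above:

```python
import math

def erf_series(z):
    """Maclaurin series erf(z) = (2/sqrt(pi)) * sum (-1)^n z^(2n+1) / (n! (2n+1)),
    summed until the next term no longer changes the accumulated result
    (the 'res + an == res' stopping criterion).
    Converges for all z, but is only practical for moderate |z|."""
    res = 0.0
    term = z  # carries (-1)^n z^(2n+1) / n!
    n = 0
    while True:
        an = term / (2 * n + 1)
        if res + an == res:
            break
        res += an
        term *= -z * z / (n + 1)  # z^(2n+1)/n!  ->  -z^(2n+3)/(n+1)!
        n += 1
    return 2.0 / math.sqrt(math.pi) * res

for z in [0.1, 0.5, 1.0, 2.0]:
    assert abs(erf_series(z) - math.erf(z)) < 1e-13
```

As noted in the thread, the stopping rule only works here because the terms eventually decrease monotonically in magnitude; for large `z` the early terms grow and cancellation ruins the accuracy, which is why library implementations switch to other expansions.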

## Convolved step?

Is the error function just a convolution of a step function (-1 for x<0, 0 at x=0, 1 for x>0) with a Gaussian kernel? Numerically that looks right. It also makes sense in that erf is the integral of a Gaussian, and convolution with a step gives you integration. If so, that should be mentioned, because it's a very easy way to think about erf. —Ben FrantzDale (talk) 05:30, 1 August 2009 (UTC)
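This does check out numerically: convolving the sign function with the normalized kernel e^(-t²)/sqrt(pi) reproduces erf. A sketch (Python; the grid parameters are arbitrary choices):

```python
import math

def erf_by_convolution(x, cutoff=10.0, steps=100_000):
    """erf(x) as the convolution (sign * g)(x) with g(t) = exp(-t^2)/sqrt(pi):
    the integral of sign(x - t) * g(t) dt over the real line, by midpoint rule."""
    h = 2.0 * cutoff / steps
    total = 0.0
    for k in range(steps):
        t = -cutoff + (k + 0.5) * h
        total += math.copysign(1.0, x - t) * math.exp(-t * t)
    return total * h / math.sqrt(math.pi)

for x in [-1.0, 0.5, 2.0]:
    assert abs(erf_by_convolution(x) - math.erf(x)) < 1e-3
```

The algebra behind it: sign(x-t) splits the integral into the mass of g below x minus the mass above x, i.e. (1+erf(x))/2 - (1-erf(x))/2 = erf(x). Note the kernel here is a Gaussian with variance 1/2, matching erf's own normalization.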

## Repeated integration

I just came across the need to repeatedly integrate the error function. Eventually I arrived at this simple recursion relation (note that I used Upsilon only because I'd never seen it used before):

Assuming I got it right, is it worth including? I certainly would have found it useful ;) —Preceding unsigned comment added by Bb vb (talk • contribs) 01:56, 14 October 2009 (UTC)

- Actually if you integrate on [0,x] you should find one more term

- So there should be a closed formula of the form

- with certain polynomials and ; however, I'm not quite sure that the iterated integrals of erf(x) are really of interest here. --pma 10:12, 18 June 2010 (UTC).

## Approximation with elementary functions

I agree with Michael Hardy. This approximation is crap: the error for x=3.5 is about 0.345!!!!! Can someone find a better one? —Preceding unsigned comment added by 46.116.223.100 (talk) 17:28, 5 March 2011 (UTC)

- I don't have anything off-hand, but I'd check out this section for any hints, in particular Hart (1968) (I don't have access to it) and West (2009). `+mt` 19:58, 29 March 2011 (UTC)

## Graph contradicts formula

I think the integral definition (first formula) of erf contradicts the graph given next to it. How can erf assume negative values while exp(-x^2) is a positive function? Am I missing something? Does it imply integration from 0 to negative values (reversed bounds)?— Preceding unsigned comment added by 88.230.219.120 (talk) 19:59, 23 June 2011 (UTC)

- You seem to have found the answer yourself: the integration starts at t = 0. --catslash (talk) 20:53, 23 June 2011 (UTC)

I agree with the previous comment; as of now the integrand in the formula (exp(-t^2)) is strictly positive, so the plot on the right side presenting negative values does not match the formula... — Preceding unsigned comment added by 130.15.148.161 (talk) 12:36, 19 July 2011 (UTC)

- The anons are right -- the article is self-contradictory. It says:

- (a) The lede says *It is defined as:*

- Unless (as suggested by an anon above) the notation is intended to allow reversed bounds, which I doubt, then this says that erf(x) is only defined for x≥0 and only takes on non-negative values.

- (b) The section *The name "error function"* says

*the error function gives the probability that a measurement, under the influence of accidental errors, has a distance less than x from the average value at the center.*

- Since probabilities are non-negative, this says that erf(x) is non-negative, agreeing with (a) above.

- (c) The graph in the lede is entitled *Plot of the error function*, and the plot goes from x = minus infinity to infinity and from erf(x) = -1 to +1, thus disagreeing with (a) and (b).

- (d) The section *Properties* says:

*The error function is odd:*

- This says that both positive and negative values of z are in the function's domain, and both positive and negative values of erf(z) can occur, thus agreeing with (c) but disagreeing with (a) and (b).

- Is it possible that there are two different widely used definitions of "error function" that are being mixed together here? Can someone sort this out? Duoduoduo (talk) 23:22, 20 January 2012 (UTC)

- I looked it up in a source, and it confirms that the error function is non-negative and defined over non-negative values of *x*. I put in corrections and gave the source.

- Two problems remain: (1) The graph in the lede is apparently a graph of the cdf of the standard normal, not a graph of the erf. I changed the caption accordingly, but unfortunately the vertical axis is wrongly labeled erf(x). Does anyone mind if we remove the graph? Alternatively, does anyone know how to go into the graph and relabel the vertical axis as ?

- Or, is there some definition of the error function (source, please) that defines it over negative values according to the odd function formula, in which case the graph could be right with the original caption? (But then the passage in the section *The name "error function"* saying the erf values give probabilities would make no sense.)

- (2) The Properties section still says *The error function is odd*:

- The integral definition is valid for all (finite) complex *z*, including negative-real values (see for example Abramowitz and Stegun). Of course, some readers may be surprised by a negative number in a context where they have not previously encountered one, but this is no reason to say the definition is restricted to positive, or even to real arguments - it isn't. --catslash (talk) 11:46, 21 January 2012 (UTC)

- Okay, so how should we proceed with this? (The reason I said it's restricted to positive was that I found a source that says it, but you have a source with the broader definition.) The first equation in the article defines it as the integral from zero to *x*; is this, as the commenter above asked, allowing interpretation as a reverse integral whereby an integral from zero to something negative is defined as minus the integral from zero to the absolute value of the negative thing? If so, this should be clarified when the first equation is given. (And the first figure's caption should be restored to the original.) And the last paragraph in the section *The name "error function"*, which interprets erf as a probability -- is that right whenever x>0? Duoduoduo (talk) 13:25, 21 January 2012 (UTC)

- A quick trawl of Google Books shows that in certain applications (such as economics), erf(*x*) is only of interest for positive real *x*. However it would be better for an article about a special function to use general mathematics texts as sources whenever possible. An ideal source would be A&S, which allows the argument to be complex, but unfortunately does not state this explicitly. Andrews' *Special Functions of Mathematics for Engineers* does explicitly state that the argument is any finite positive or negative real. --catslash (talk) 23:34, 21 January 2012 (UTC)

- There's no mathematical reason to require that *x* in the integral definition be greater than zero, or even to be real. Probably best to remove the restriction, change the initial citation to Abramowitz and Stegun, but then say that in some applications only positive real values of the argument are considered - and give your economics ref for that.
- Having the lower bound of the integral greater than the upper bound is no different to subtracting a larger number from a lesser one: it's not always feasible when the numbers quantify physical objects, but it's commonly accepted in arithmetic - to the point where few people would consider it to need special *interpretation* or explanation (though the *negative number* article says there were dissenters on this issue as late as the 18th century ~~(!)~~). --catslash (talk) 00:10, 4 February 2012 (UTC)

The analogy is not valid. The debate about negative numbers was a philosophical debate about whether they can be said to exist. Here it's just a matter of how notation is defined -- with, or without, any definition for integrals from *a* to *b<a*.

A key principle is that the lede of a Wikipedia article should be accessible to as many people as possible, consistent with providing a legitimate summary of the article's content. The current version accomplishes that, while your suggested version would not alter the substance of the lede but would be inaccessible to the many people who are very familiar with the concept of an integral but have never seen one defined with a lower bound greater than the upper bound. Duoduoduo (talk) 18:09, 4 February 2012 (UTC)

- Yes, you are right: whether something can be evaluated and whether the result is meaningful are different questions. I would like to reply further on this matter - but on your talk-page as it is slightly off-topic here.

- Yes, the lede (and the rest of the article) should be as accessible as possible, as far as is consistent with being roughly correct. However (1) it is not really correct as it stands, because it suggests that the integral is only defined for positive real *x*, which isn't right, (2) any reader who *does* assume that *x* must be positive real would ipso facto not be thrown by the absence of a statement to that effect, and (3) the current version misrepresents the source (Andrews). Andrews does not use the Taylor series to analytically extend the integral from the positive reals to the negative reals, but rather defines the integral for all reals and uses the Taylor series to demonstrate that it is an odd function of *x*. --catslash (talk) 01:20, 7 February 2012 (UTC)
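As a concrete illustration of the point about reversed bounds, here is a minimal Python sketch (the function name `erf_integral` is illustrative, not from the article) that evaluates the defining integral with a midpoint rule; a negative upper bound simply makes the step negative, and the odd symmetry erf(-*x*) = -erf(*x*) falls out without any special treatment:

```python
import math

def erf_integral(x, n=10000):
    # Numerically integrate (2/sqrt(pi)) * exp(-t^2) from 0 to x with the
    # midpoint rule. If x < 0 the step h is negative, which is exactly the
    # "integral from a to b with b < a" convention under discussion.
    h = x / n
    total = sum(math.exp(-((k + 0.5) * h) ** 2) for k in range(n)) * h
    return 2.0 / math.sqrt(math.pi) * total

print(erf_integral(0.5))   # close to math.erf(0.5)
print(erf_integral(-0.5))  # close to -math.erf(0.5): erf is odd
```

The negative-argument value needs no analytic continuation; the quadrature sum is simply multiplied by a negative step.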

## Double factorial

Should we add a note to the section Asymptotic Expansion that "!!" is the double factorial and not the factorial of the factorial? RJFJR (talk) 16:50, 7 February 2012 (UTC)
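To make the distinction concrete, here is a short Python sketch (function names are illustrative) contrasting the double factorial (2*n*-1)!! with an iterated factorial, and using it in the leading terms of the standard large-*x* expansion erfc(*x*) ~ exp(-*x*²)/(*x*√π) · Σ (-1)ⁿ (2*n*-1)!!/(2*x*²)ⁿ:

```python
import math

def double_factorial(n):
    # n!! = n * (n-2) * (n-4) * ... ; NOT the factorial applied twice.
    # E.g. 5!! = 5*3*1 = 15, whereas (5!)! = 120! is astronomically large.
    result = 1
    for k in range(n, 1, -2):
        result *= k
    return result

def erfc_asymptotic(x, terms=5):
    # Leading terms of the asymptotic (divergent) expansion
    # erfc(x) ~ exp(-x^2)/(x*sqrt(pi)) * sum_n (-1)^n (2n-1)!!/(2x^2)^n
    s = sum((-1) ** n * double_factorial(2 * n - 1) / (2 * x * x) ** n
            for n in range(terms))
    return math.exp(-x * x) / (x * math.sqrt(math.pi)) * s

print(double_factorial(5))                    # 15
print(erfc_asymptotic(3.0), math.erfc(3.0))   # agree to a few parts in 10^4
```

Even five terms at *x* = 3 reproduce `math.erfc` to roughly 0.03%, which is why the notation is worth disambiguating.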

## Imaginary and Complex error function

The definition of these two in the opening section is not very clear: erfi(*z*) = -*i* erf(*iz*)

What is *z* here? A complex number? A purely imaginary number? How can you evaluate erf(*iz*)? Eregli bob (talk) 04:43, 18 June 2012 (UTC)

- Since erf (and hence erfi) are analytic, *z* could be real, purely imaginary or generally complex, as you wish. However, it's likely that erfi() has been invented for convenience in the circumstance that *z* is real - in which case erfi(*z*) is also real. It's a bit like sinh(*x*) = -*i* sin(*ix*) in the case where *x* is real, and sinh(*x*) (unlike -*i* sin(*ix*)) can be evaluated without recourse to complex arithmetic. If the *z* in the article is confusing, then just change it to *x*.

- You can evaluate -*i* erf(*iz*) using the Taylor series, which as the article points out is convergent for all real and complex *z*. But perhaps you are unfamiliar with complex arithmetic? If you want to evaluate erfi(2), simply substitute *z* = 2*i* into the Taylor expansion and whenever you get *i*^{2} replace it with a -1. Since all the powers of *z* are odd, you'll end up with a multiple of *i* - a purely imaginary value. Multiplying this by the -*i* from the erfi() definition gives you a real value.
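The substitution described above can be carried out mechanically. A minimal Python sketch (function names are illustrative) that sums the Taylor series erf(*z*) = (2/√π) Σ (-1)ⁿ *z*^{2n+1}/(n!(2n+1)) in complex arithmetic and recovers a real value for erfi(2):

```python
import math

def erf_taylor(z, terms=60):
    # erf(z) = (2/sqrt(pi)) * sum_n (-1)^n z^(2n+1) / (n! (2n+1)),
    # convergent for every complex z (though slowly for large |z|).
    s = 0.0 + 0.0j
    term = z  # running value of (-1)^n z^(2n+1) / n!
    for n in range(terms):
        s += term / (2 * n + 1)
        term *= -z * z / (n + 1)  # ratio of successive terms
    return 2.0 / math.sqrt(math.pi) * s

def erfi(x):
    # erfi(x) = -i * erf(i x); for real x the result is real.
    return (-1j * erf_taylor(1j * x)).real

print(erf_taylor(1.0).real)  # matches math.erf(1.0)
print(erfi(2.0))             # about 18.5648, purely real
```

As noted below in this thread, a Taylor sum is a poor production method for large arguments, but it demonstrates the purely-imaginary substitution directly.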

- erfi() seems to be a somewhat obscure function; Googling it mainly gives stuff about the Mathematica computer program. --catslash (talk) 19:15, 18 June 2012 (UTC)

- erfi shows up, for example, in some PDE solutions. Note that a Taylor series is generally a terrible (slow) way to evaluate special functions; it's just the only method that most people without a background in numerical analysis have heard of. There are various methods to compute erfi quickly and accurately, typically using a combination of different polynomial, rational, or continued-fraction approximations in different regions. (For example, here is one package that computes erfi and other functions in the complex plane.) Computationally, erfi(x) has the drawback that it grows roughly as exp(x^2), which quickly leads to arithmetic overflow; an alternative is to compute the Dawson function, which is essentially exp(-x^2)erfi(x) to remove this exponential factor. — Steven G. Johnson (talk) 15:07, 19 December 2013 (UTC)
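A sketch of the overflow point, assuming the standard large-*x* expansion of the Dawson function F(*x*) = (√π/2) exp(-*x*²) erfi(*x*) ~ Σ (2*n*-1)!!/(2^{n+1} *x*^{2n+1}) (function names are illustrative): for *x* = 30, erfi(*x*) ~ exp(900) overflows a double, yet F(30) is close to 1/60.

```python
def double_factorial_odd(n):
    # (2n-1)!!, with the usual convention (-1)!! = 1 for n = 0
    result = 1
    for k in range(2 * n - 1, 1, -2):
        result *= k
    return result

def dawson_asymptotic(x, terms=6):
    # Large-x expansion of the Dawson function
    #   F(x) = (sqrt(pi)/2) * exp(-x^2) * erfi(x)
    #        ~ 1/(2x) + 1/(4x^3) + 3/(8x^5) + ...
    # The exp(x^2) growth of erfi has been factored out, so no overflow.
    return sum(double_factorial_odd(n) / (2 ** (n + 1) * x ** (2 * n + 1))
               for n in range(terms))

# erfi(30) ~ exp(900) cannot be represented as a double,
# but the corresponding Dawson value is tame:
print(dawson_asymptotic(30.0))  # about 1/60 = 0.01667...
```

This is only a sketch of the scaling idea; production implementations switch among series, continued-fraction, and rational approximations by region, as described above.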