# Integration by substitution

In calculus, integration by substitution, also known as u-substitution, is a method for finding integrals. Using the fundamental theorem of calculus often requires finding an antiderivative. For this and other reasons, integration by substitution is an important tool for mathematicians. It is the counterpart to the chain rule of differentiation.

## Substitution for single variable

### Relation to the fundamental theorem of calculus

Let I ⊆ R be an interval and φ : [a,b] → I be a continuously differentiable function. Suppose that ƒ : I → R is a continuous function. Then

${\displaystyle \int _{\phi (a)}^{\phi (b)}f(x)\,dx=\int _{a}^{b}f(\phi (t))\phi '(t)\,dt.}$

Using Leibniz's notation: the substitution x = φ(t) yields dx/dt = φ′(t) and thus, formally, dx = φ′(t) dt, which is the required substitution for dx. (One could view the method of integration by substitution as a major justification of Leibniz's notation for integrals and derivatives.)

The formula is used to transform one integral into another integral that is easier to compute. Thus, the formula can be used from left to right or from right to left in order to simplify a given integral. When used in the latter manner, it is sometimes known as u-substitution or w-substitution.

Integration by substitution can be derived from the fundamental theorem of calculus as follows. Let ƒ and φ be two functions satisfying the above hypotheses: ƒ is continuous on I and φ′ is continuous on the closed interval [a,b]. Then the function ƒ(φ(t))φ′(t) is also continuous on [a,b]. Hence the integrals

${\displaystyle \int _{\phi (a)}^{\phi (b)}f(x)\,dx}$

and

${\displaystyle \int _{a}^{b}f(\phi (t))\phi '(t)\,dt}$

in fact exist, and it remains to show that they are equal.

Since ƒ is continuous, it possesses an antiderivative F. The composite function F ∘ φ is then defined. Since F and φ are differentiable, the chain rule gives

${\displaystyle (F\circ \phi )'(t)=F'(\phi (t))\phi '(t)=f(\phi (t))\phi '(t).}$

Applying the fundamental theorem of calculus twice gives

${\displaystyle {\begin{aligned}\int _{a}^{b}f(\phi (t))\phi '(t)\,dt&=\int _{a}^{b}(F\circ \phi )'(t)\,dt\\&=(F\circ \phi )(b)-(F\circ \phi )(a)\\&=F(\phi (b))-F(\phi (a))\\&=\int _{\phi (a)}^{\phi (b)}f(x)\,dx,\end{aligned}}}$

which is the substitution rule.
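The rule can be sanity-checked numerically. In the sketch below, f(x) = eˣ and φ(t) = t² are illustrative choices (not part of the theorem), and `simpson` is a hand-rolled composite Simpson's rule rather than a library routine:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Illustrative choices: f(x) = e^x, phi(t) = t^2, [a, b] = [0, 2]
f = math.exp
phi = lambda t: t * t
dphi = lambda t: 2 * t  # phi'(t)

a, b = 0.0, 2.0
lhs = simpson(f, phi(a), phi(b))                    # integral of f from phi(a) to phi(b)
rhs = simpson(lambda t: f(phi(t)) * dphi(t), a, b)  # integral of f(phi(t)) phi'(t) from a to b
print(abs(lhs - rhs) < 1e-6)  # both approximate e^4 - 1
```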

### Examples

Consider the integral

${\displaystyle \int _{0}^{2}x\cos(x^{2}+1)\,dx}$

If we apply the formula from right to left and make the substitution u = φ(x) = x² + 1, we obtain du = 2x dx and hence x dx = ½ du.

(1) Definite integral

${\displaystyle {\begin{aligned}\int _{x=0}^{x=2}x\cos(x^{2}+1)\,dx&{}={\frac {1}{2}}\int _{u=1}^{u=5}\cos(u)\,du\\&{}={\frac {1}{2}}(\sin(5)-\sin(1)).\end{aligned}}}$

It is important to note that since the lower limit x = 0 was replaced with u = 0² + 1 = 1, and the upper limit x = 2 with u = 2² + 1 = 5, a transformation back into terms of x was unnecessary.
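This example can be confirmed numerically; `simpson` below is an illustrative hand-rolled quadrature helper, not a library routine:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

numeric = simpson(lambda x: x * math.cos(x**2 + 1), 0.0, 2.0)
exact = 0.5 * (math.sin(5) - math.sin(1))  # value obtained by the substitution
print(abs(numeric - exact) < 1e-9)  # True
```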

For the integral

${\displaystyle \int _{0}^{1}{\sqrt {1-x^{2}}}\;dx}$

the formula needs to be used from left to right: the substitution x = sin(u), dx = cos(u) du is useful, because ${\displaystyle {\sqrt {1-\sin ^{2}(u)}}=\cos(u)}$ for 0 ≤ u ≤ π/2:

${\displaystyle \int _{0}^{1}{\sqrt {1-x^{2}}}\;dx=\int _{0}^{\frac {\pi }{2}}{\sqrt {1-\sin ^{2}(u)}}\cos(u)\;du=\int _{0}^{\frac {\pi }{2}}\cos ^{2}(u)\;du={\frac {\pi }{4}}}$

The resulting integral can be computed using integration by parts or a double angle formula followed by one more substitution. One can also note that the function being integrated is the upper right quarter of a circle with a radius of one, and hence integrating the upper right quarter from zero to one is the geometric equivalent to the area of one quarter of the unit circle, or π/4.
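Both forms of the integral can be compared numerically; the `simpson` helper below is an illustrative hand-rolled quadrature. The substituted integrand cos²(u) is smooth, while √(1 − x²) has a vertical tangent at x = 1, so the latter needs a much finer grid:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

lhs = simpson(lambda x: math.sqrt(1 - x * x), 0.0, 1.0, n=100000)
rhs = simpson(lambda u: math.cos(u) ** 2, 0.0, math.pi / 2)
print(abs(rhs - math.pi / 4) < 1e-10, abs(lhs - rhs) < 1e-5)
```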

(2) Antiderivatives

Substitution can be used to determine antiderivatives. One chooses a relation between x and u, determines the corresponding relation between dx and du by differentiating, and performs the substitutions. An antiderivative for the substituted function can hopefully be determined; the original substitution between u and x is then undone.

Similar to our first example above, we can determine the following antiderivative with this method:

${\displaystyle {\begin{aligned}&{}\quad \int x\cos(x^{2}+1)\,dx={\frac {1}{2}}\int 2x\cos(x^{2}+1)\,dx\\&{}={\frac {1}{2}}\int \cos u\,du={\frac {1}{2}}\sin u+C={\frac {1}{2}}\sin(x^{2}+1)+C\end{aligned}}}$

where C is an arbitrary constant of integration.

Note that there were no integral boundaries to transform, but in the last step we had to revert the original substitution u = x² + 1.
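The antiderivative can be verified by differentiating it back, here numerically with a central difference at a few arbitrary sample points (an illustrative check, not a proof):

```python
import math

F = lambda x: 0.5 * math.sin(x**2 + 1)  # candidate antiderivative
f = lambda x: x * math.cos(x**2 + 1)    # original integrand

# Central difference: F'(x) ~ (F(x+h) - F(x-h)) / (2h) should match f(x)
h = 1e-6
ok = all(abs((F(x + h) - F(x - h)) / (2 * h) - f(x)) < 1e-6
         for x in (-2.0, -0.5, 0.0, 1.0, 3.0))
print(ok)  # True
```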

## Substitution for multiple variables

One may also use substitution when integrating functions of several variables. Here the substitution function (v₁, ..., vₙ) = φ(u₁, ..., uₙ) needs to be injective and continuously differentiable, and the differentials transform as

${\displaystyle dv_{1}\cdots dv_{n}=|\det(\operatorname {D} \phi )(u_{1},\ldots ,u_{n})|\,du_{1}\cdots du_{n}}$

where det(Dφ)(u₁, ..., uₙ) denotes the determinant of the Jacobian matrix containing the partial derivatives of φ. This formula expresses the fact that the absolute value of the determinant of a matrix equals the volume of the parallelotope spanned by its columns or rows.

More precisely, the change of variables formula is stated in the next theorem:

Theorem. Let U be an open set in Rⁿ and φ : U → Rⁿ an injective differentiable function with continuous partial derivatives, whose Jacobian determinant is nonzero for every x in U. Then for any real-valued, compactly supported, continuous function f, with support contained in φ(U),

${\displaystyle \int _{\phi (U)}f(\mathbf {v} )\,d\mathbf {v} =\int _{U}f(\phi (\mathbf {u} ))\left|\det(\operatorname {D} \phi )(\mathbf {u} )\right|\,d\mathbf {u} .}$
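A standard instance is the change to polar coordinates (x, y) = φ(r, θ) = (r cos θ, r sin θ), whose Jacobian determinant is r. The sketch below checks this numerically by computing the area of the unit disk with a midpoint rule over the (r, θ) rectangle; the grid size is an arbitrary illustrative choice:

```python
import math

# Area of the unit disk via polar coordinates: the integrand is
# 1 * |det D(phi)| = r over the rectangle [0, 1] x [0, 2*pi].
n = 400
dr = 1.0 / n
dth = 2.0 * math.pi / n
area = sum((i + 0.5) * dr * dr * dth      # midpoint value of r, times cell area
           for i in range(n) for j in range(n))
print(abs(area - math.pi) < 1e-8)  # True
```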

The conditions on the theorem can be weakened in various ways. First, the requirement that φ be continuously differentiable can be replaced by the weaker assumption that φ be merely differentiable and have a continuous inverse. This is guaranteed to hold if φ is continuously differentiable by the inverse function theorem. Alternatively, the requirement that det(Dφ) ≠ 0 can be eliminated by applying Sard's theorem.

For Lebesgue measurable functions, the theorem can be stated in the following form:

Theorem. Let U be a measurable subset of Rⁿ and φ : U → Rⁿ an injective function, and suppose for every x in U there exists an n×n matrix φ′(x) such that φ(y) = φ(x) + φ′(x)(y − x) + o(‖y − x‖) as y → x. Then φ(U) is measurable, and for any real-valued function f defined on φ(U),

${\displaystyle \int _{\phi (U)}f(v)\,dv\;=\;\int _{U}f(\phi (u))\;\left|\det \phi '(u)\right|\,du}$

in the sense that if either integral exists (or is properly infinite), then so does the other one, and they have the same value.

Another very general version in measure theory is the following:

Theorem. Let X be a locally compact Hausdorff space equipped with a finite Radon measure μ, and let Y be a σ-compact Hausdorff space with a σ-finite Radon measure ρ. Let φ : X → Y be a continuous and absolutely continuous function (where the latter means that ρ(φ(E)) = 0 whenever μ(E) = 0). Then there exists a real-valued Borel measurable function w on X such that for every Lebesgue integrable function f : Y → R, the function (f ∘ φ)w is Lebesgue integrable on X, and

${\displaystyle \int _{Y}f(y)\,d\rho (y)=\int _{X}f\circ \varphi (x)w(x)\,d\mu (x).}$

Furthermore, it is possible to write

${\displaystyle w(x)=g\circ \varphi (x)}$

for some Borel measurable function g on Y.

In geometric measure theory, integration by substitution is used with Lipschitz functions. A bi-Lipschitz function is a Lipschitz function φ : U → Rⁿ which is one-to-one and whose inverse function φ⁻¹ : φ(U) → U is also Lipschitz. By Rademacher's theorem a bi-Lipschitz mapping is differentiable almost everywhere. In particular, the Jacobian determinant det Dφ of a bi-Lipschitz mapping is well-defined almost everywhere. The following result then holds:

Theorem. Let U be an open subset of Rⁿ and φ : U → Rⁿ be a bi-Lipschitz mapping. Let f : φ(U) → R be measurable. Then

${\displaystyle \int _{U}(f\circ \phi )|\det D\phi |=\int _{\phi (U)}f}$

in the sense that if either integral exists (or is properly infinite), then so does the other one, and they have the same value.

The above theorem was first proposed by Euler when he developed the notion of double integrals in 1769. It was generalized to triple integrals by Lagrange in 1773, used by Legendre, Laplace, and Gauss, and first generalized to n variables by Mikhail Ostrogradski in 1836. Nevertheless, it resisted a fully rigorous proof for a surprisingly long time, and was first satisfactorily resolved 125 years later by Élie Cartan in a series of papers beginning in the mid-1890s.

## Application in probability

Substitution can be used to answer the following important question in probability: given a random variable ${\displaystyle X}$ with probability density ${\displaystyle p_{x}}$ and another random variable ${\displaystyle Y}$ related to ${\displaystyle X}$ by the equation ${\displaystyle y=\phi (x)}$, what is the probability density for ${\displaystyle Y}$?

It is easiest to answer this question by first answering a slightly different question: what is the probability that ${\displaystyle Y}$ takes a value in some particular subset ${\displaystyle S}$? Denote this probability ${\displaystyle P(Y\in S)}$. Of course, if ${\displaystyle Y}$ has probability density ${\displaystyle p_{y}}$ then the answer is

${\displaystyle P(Y\in S)=\int _{S}p_{y}(y)\,dy,}$

but this is not really useful because we do not know ${\displaystyle p_{y}}$; it is what we are trying to find in the first place. We can make progress by considering the problem in the variable ${\displaystyle X}$: ${\displaystyle Y}$ takes a value in ${\displaystyle S}$ whenever ${\displaystyle X}$ takes a value in ${\displaystyle \phi ^{-1}(S)}$, so

${\displaystyle P(Y\in S)=\int _{\phi ^{-1}(S)}p_{x}(x)\,dx.}$

Changing from variable x to y gives

${\displaystyle P(Y\in S)=\int _{\phi ^{-1}(S)}p_{x}(x)~dx=\int _{S}p_{x}(\phi ^{-1}(y))~\left|{\frac {d\phi ^{-1}}{dy}}\right|~dy.}$

Combining this with our first equation gives

${\displaystyle \int _{S}p_{y}(y)~dy=\int _{S}p_{x}(\phi ^{-1}(y))~\left|{\frac {d\phi ^{-1}}{dy}}\right|~dy}$

so

${\displaystyle p_{y}(y)=p_{x}(\phi ^{-1}(y))~\left|{\frac {d\phi ^{-1}}{dy}}\right|.}$
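As a concrete single-variable check, take X exponentially distributed with density p_x(x) = e⁻ˣ for x ≥ 0, and Y = φ(X) = √X (an illustrative choice). Then φ⁻¹(y) = y² and |dφ⁻¹/dy| = 2y, so the formula gives p_y(y) = 2y e^(−y²). The sketch below verifies, with a hand-rolled Simpson's rule, that this density reproduces P(Y ≤ t) = P(X ≤ t²):

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# p_y from the change-of-variables formula: p_x(phi_inv(y)) * |d(phi_inv)/dy|
p_y = lambda y: math.exp(-y * y) * 2 * y

t = 1.5
lhs = simpson(p_y, 0.0, t)       # P(Y <= t) from the transformed density
rhs = 1.0 - math.exp(-t * t)     # P(X <= t^2) for an Exp(1) variable
print(abs(lhs - rhs) < 1e-10)  # True
```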

In the case where ${\displaystyle X}$ and ${\displaystyle Y}$ depend on several variables, i.e. ${\displaystyle p_{x}=p_{x}(x_{1},\ldots ,x_{n})}$ and ${\displaystyle y=\phi (x)}$, ${\displaystyle p_{y}}$ can be found by the substitution in several variables discussed above. The result is

${\displaystyle p_{y}(y)=p_{x}(\phi ^{-1}(y))~\left|\det \left[D\phi ^{-1}(y)\right]\right|.}$
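For a minimal multivariate illustration (with arbitrarily chosen numbers), let X be uniform on the unit square and y = φ(x) = (2x₁, 3x₂). Then φ⁻¹(y) = (y₁/2, y₂/3) and |det Dφ⁻¹| = 1/6, so p_y = 1/6 on [0, 2] × [0, 3]. The sketch checks that P(Y ∈ [0,1]²) computed from p_y matches P(X ∈ [0,½] × [0,⅓]) = 1/6:

```python
# Density of Y = (2*X1, 3*X2) for X uniform on the unit square, from the
# multivariate formula: p_y(y) = p_x(phi_inv(y)) * |det D(phi_inv)(y)| = 1 * (1/6).
p_y = lambda y1, y2: (1.0 / 6.0) if (0 <= y1 <= 2 and 0 <= y2 <= 3) else 0.0

# Midpoint rule for P(Y in [0,1] x [0,1]); grid size is an arbitrary choice.
n = 200
h = 1.0 / n
prob = sum(p_y((i + 0.5) * h, (j + 0.5) * h) * h * h
           for i in range(n) for j in range(n))
print(abs(prob - 1.0 / 6.0) < 1e-9)  # True
```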
