Laplace transform
The Laplace transform is a widely used integral transform in mathematics and electrical engineering, named after Pierre-Simon Laplace, that transforms a function of time into a function of complex frequency. The inverse Laplace transform takes a complex frequency domain function and yields a function defined in the time domain. The Laplace transform is related to the Fourier transform, but whereas the Fourier transform expresses a function or signal as a superposition of sinusoids, the Laplace transform expresses a function, more generally, as a superposition of moments. Given a simple mathematical or functional description of an input or output to a system, the Laplace transform provides an alternative functional description that often simplifies the process of analyzing the behavior of the system, or synthesizing a new system based on a set of specifications.^{[1]} So, for example, Laplace transformation from the time domain to the frequency domain transforms differential equations into algebraic equations and convolution into multiplication.
History
The Laplace transform is named after mathematician and astronomer Pierre-Simon Laplace, who used a similar transform (now called the z-transform) in his work on probability theory. The current widespread use of the transform came about soon after World War II, although it had been used in the 19th century by Abel, Lerch, Heaviside, and Bromwich.
From 1744, Leonhard Euler investigated integrals of the form
as solutions of differential equations but did not pursue the matter very far.^{[2]} Joseph-Louis Lagrange was an admirer of Euler and, in his work on integrating probability density functions, investigated expressions of the form
which some modern historians have interpreted within modern Laplace transform theory.^{[3]}^{[4]}
These types of integrals seem first to have attracted Laplace's attention in 1782, where he was following in the spirit of Euler in using the integrals themselves as solutions of equations.^{[5]} However, in 1785, Laplace took the critical step forward when, rather than just looking for a solution in the form of an integral, he started to apply the transforms in the sense that was later to become popular. He used an integral of the form

    ∫ x^s φ(x) dx,

akin to a Mellin transform, to transform the whole of a difference equation, in order to look for solutions of the transformed equation. He then went on to apply the Laplace transform in the same way and started to derive some of its properties, beginning to appreciate its potential power.^{[6]}
Laplace also recognised that Joseph Fourier's method of Fourier series for solving the diffusion equation could only apply to a limited region of space as the solutions were periodic. In 1809, Laplace applied his transform to find solutions that diffused indefinitely in space.^{[7]}
Formal definition
The Laplace transform of a function f(t), defined for all real numbers t ≥ 0, is the function F(s), defined by:

    F(s) = ∫₀^∞ e^(−st) f(t) dt.

The parameter s is the complex number frequency

    s = σ + iω,

with real numbers σ and ω.
Other notations for the Laplace transform include 𝓛{f} or, alternatively, 𝓛{f(t)} instead of F.
The meaning of the integral depends on the types of functions of interest. A necessary condition for existence of the integral is that f must be locally integrable on [0, ∞). For locally integrable functions that decay at infinity or are of exponential type, the integral can be understood as a (proper) Lebesgue integral. However, for many applications it is necessary to regard it as a conditionally convergent improper integral at ∞. Still more generally, the integral can be understood in a weak sense, and this is dealt with below.
One can define the Laplace transform of a finite Borel measure μ by the Lebesgue integral^{[8]}

    𝓛{μ}(s) = ∫_[0,∞) e^(−st) dμ(t).
An important special case is where μ is a probability measure or, even more specifically, the Dirac delta function. In operational calculus, the Laplace transform of a measure is often treated as though the measure came from a distribution function f. In that case, to avoid potential confusion, one often writes

    𝓛{f}(s) = ∫_0⁻^∞ e^(−st) f(t) dt,

where the lower limit of 0⁻ is shorthand notation for

    lim_(ε→0⁺) ∫_−ε^∞ .
This limit emphasizes that any point mass located at 0 is entirely captured by the Laplace transform. Although with the Lebesgue integral, it is not necessary to take such a limit, it does appear more naturally in connection with the Laplace–Stieltjes transform.
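For well-behaved functions and real s, the defining integral can be checked numerically. The sketch below is illustrative only: it assumes a simple trapezoidal rule, a finite upper limit standing in for ∞, and the example choice f(t) = e^(−2t), whose transform is known to be 1/(s + 2).

```python
import math

def laplace_numeric(f, s, upper=60.0, n=200_000):
    # Approximate F(s) = integral from 0 to infinity of e^(-s t) f(t) dt
    # with a composite trapezoidal rule on [0, upper].
    h = upper / n
    total = 0.5 * (f(0.0) + math.exp(-s * upper) * f(upper))
    for k in range(1, n):
        t = k * h
        total += math.exp(-s * t) * f(t)
    return total * h

# f(t) = e^(-2t): the table value of the transform is 1/(s + 2).
approx = laplace_numeric(lambda t: math.exp(-2.0 * t), s=1.0)
exact = 1.0 / (1.0 + 2.0)
```

The truncation at t = 60 is harmless here because the integrand decays exponentially; for slowly decaying f a larger interval or a different quadrature would be needed.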
Probability theory
In pure and applied probability, the Laplace transform is defined as an expected value. If X is a random variable with probability density function f, then the Laplace transform of f is given by the expectation

    𝓛{f}(s) = E[e^(−sX)].
By abuse of language, this is referred to as the Laplace transform of the random variable X itself. Replacing s by −t gives the moment generating function of X. The Laplace transform has applications throughout probability theory, including first passage times of stochastic processes such as Markov chains, and renewal theory.
Of particular use is the ability to recover the cumulative distribution function of a continuous random variable X by means of the Laplace transform as follows:^{[9]}

    F_X(x) = 𝓛⁻¹{ (1/s) E[e^(−sX)] }(x).
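For a concrete case, if X is exponentially distributed with rate λ, then E[e^(−sX)] works out to λ/(λ + s). A minimal numeric sketch of this expectation (assuming trapezoidal integration and illustrative parameter values):

```python
import math

lam, s = 3.0, 1.5                        # rate of X ~ Exponential(lam); transform variable
f = lambda x: lam * math.exp(-lam * x)   # density of X

# E[exp(-s X)] by trapezoidal integration of e^(-s x) f(x) on [0, 40].
n, upper = 100_000, 40.0
h = upper / n
mgf = 0.5 * (f(0.0) + math.exp(-s * upper) * f(upper))
for k in range(1, n):
    x = k * h
    mgf += math.exp(-s * x) * f(x)
mgf *= h

exact = lam / (lam + s)   # closed-form Laplace transform of the Exp(lam) density
```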
Bilateral Laplace transform
When one says "the Laplace transform" without qualification, the unilateral or one-sided transform is normally intended. The Laplace transform can alternatively be defined as the bilateral Laplace transform, or two-sided Laplace transform, by extending the limits of integration to be the entire real axis. If that is done, the common unilateral transform simply becomes a special case of the bilateral transform, where the definition of the function being transformed is multiplied by the Heaviside step function.
The bilateral Laplace transform is defined as follows:

    𝓑{f}(s) = ∫_−∞^∞ e^(−st) f(t) dt.
Inverse Laplace transform
Two integrable functions have the same Laplace transform only if they differ on a set of Lebesgue measure zero. This means that, on the range of the transform, there is an inverse transform. In fact, besides integrable functions, the Laplace transform is a one-to-one mapping from one function space into another in many other function spaces as well, although there is usually no easy characterization of the range. Typical function spaces in which this is true include the spaces of bounded continuous functions, the space L^{∞}(0,∞), or more generally tempered functions (that is, functions of at worst polynomial growth) on (0,∞). The Laplace transform is also defined and injective for suitable spaces of tempered distributions.
In these cases, the image of the Laplace transform lives in a space of analytic functions in the region of convergence. The inverse Laplace transform is given by the following complex integral, which is known by various names (the Bromwich integral, the Fourier–Mellin integral, and Mellin's inverse formula):

    f(t) = 𝓛⁻¹{F}(t) = (1/2πi) lim_(T→∞) ∫_(γ−iT)^(γ+iT) e^(st) F(s) ds,

where γ is a real number so that the contour path of integration is in the region of convergence of F(s). An alternative formula for the inverse Laplace transform is given by Post's inversion formula. The limit here is interpreted in the weak-* topology.
In practice it is typically more convenient to decompose a Laplace transform into known transforms of functions obtained from a table, and construct the inverse by inspection.
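The table-lookup approach can be sketched numerically. In the hypothetical example below, F(s) = 1/((s+1)(s+2)) splits by partial fractions into 1/(s+1) − 1/(s+2); the table gives the inverses e^(−t) and e^(−2t), and a forward numeric transform (trapezoidal rule, finite upper limit) confirms the reconstruction:

```python
import math

# Candidate inverse obtained from the table via partial fractions:
# F(s) = 1/((s+1)(s+2)) = 1/(s+1) - 1/(s+2)  =>  f(t) = e^(-t) - e^(-2t).
def f(t):
    return math.exp(-t) - math.exp(-2.0 * t)

def laplace_numeric(g, s, upper=60.0, n=200_000):
    # Trapezoidal approximation of the forward Laplace transform.
    h = upper / n
    acc = 0.5 * (g(0.0) + math.exp(-s * upper) * g(upper))
    for k in range(1, n):
        t = k * h
        acc += math.exp(-s * t) * g(t)
    return acc * h

# Forward-transforming f should reproduce F(s) at several test points.
for s in (0.5, 1.0, 2.0):
    target = 1.0 / ((s + 1.0) * (s + 2.0))
    assert abs(laplace_numeric(f, s) - target) < 1e-5
```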
Region of convergence
If f is a locally integrable function (or more generally a Borel measure locally of bounded variation), then the Laplace transform F(s) of f converges provided that the limit

    lim_(R→∞) ∫₀^R f(t) e^(−st) dt

exists. The Laplace transform converges absolutely if the integral

    ∫₀^∞ |f(t) e^(−st)| dt

exists (as a proper Lebesgue integral). The Laplace transform is usually understood as conditionally convergent, meaning that it converges in the former but not necessarily in the latter sense.
The set of values for which F(s) converges absolutely is either of the form Re(s) > a or else Re(s) ≥ a, where a is an extended real constant, −∞ ≤ a ≤ ∞. (This follows from the dominated convergence theorem.) The constant a is known as the abscissa of absolute convergence, and depends on the growth behavior of f(t).^{[10]} Analogously, the two-sided transform converges absolutely in a strip of the form a < Re(s) < b, possibly including the lines Re(s) = a or Re(s) = b.^{[11]} The subset of values of s for which the Laplace transform converges absolutely is called the region of absolute convergence or the domain of absolute convergence. In the two-sided case, it is sometimes called the strip of absolute convergence. The Laplace transform is analytic in the region of absolute convergence.
Similarly, the set of values for which F(s) converges (conditionally or absolutely) is known as the region of conditional convergence, or simply the region of convergence (ROC). If the Laplace transform converges (conditionally) at s = s₀, then it automatically converges for all s with Re(s) > Re(s₀). Therefore the region of convergence is a half-plane of the form Re(s) > a, possibly including some points of the boundary line Re(s) = a. In the region of convergence Re(s) > Re(s₀), the Laplace transform of f can be expressed by integrating by parts as the integral

    F(s) = (s − s₀) ∫₀^∞ e^(−(s−s₀)t) β(t) dt,   where β(u) = ∫₀^u e^(−s₀ t) f(t) dt.
That is, in the region of convergence F(s) can effectively be expressed as the absolutely convergent Laplace transform of some other function. In particular, it is analytic.
There are several Paley–Wiener theorems concerning the relationship between the decay properties of f and the properties of the Laplace transform within the region of convergence.
In engineering applications, a function corresponding to a linear time-invariant (LTI) system is stable if every bounded input produces a bounded output. This is equivalent to the absolute convergence of the Laplace transform of the impulse response function in the region Re(s) ≥ 0. As a result, LTI systems are stable provided the poles of the Laplace transform of the impulse response function have negative real part.
Properties and theorems
The Laplace transform has a number of properties that make it useful for analyzing linear dynamical systems. The most significant advantage is that differentiation and integration become multiplication and division, respectively, by s (similarly to logarithms changing multiplication of numbers to addition of their logarithms). Because of this property, the Laplace variable s is also known as the operator variable in the L domain: either the derivative operator or (for s⁻¹) the integration operator. The transform turns integral equations and differential equations into polynomial equations, which are much easier to solve. Once solved, use of the inverse Laplace transform reverts to the time domain.
Given the functions f(t) and g(t), and their respective Laplace transforms F(s) and G(s):

    f(t) = 𝓛⁻¹{F(s)},   g(t) = 𝓛⁻¹{G(s)},

the following table is a list of properties of the unilateral Laplace transform:^{[12]}
Property | Time domain | s domain | Comment
Linearity | a f(t) + b g(t) | a F(s) + b G(s) | Can be proved using basic rules of integration.
Frequency-domain differentiation | t f(t) | −F′(s) | F′ is the first derivative of F.
Frequency-domain general differentiation | tⁿ f(t) | (−1)ⁿ F⁽ⁿ⁾(s) | More general form, nth derivative of F(s).
Differentiation | f′(t) | s F(s) − f(0⁻) | f is assumed to be a differentiable function, and its derivative is assumed to be of exponential type. This can then be obtained by integration by parts.
Second differentiation | f″(t) | s² F(s) − s f(0⁻) − f′(0⁻) | f is assumed twice differentiable and the second derivative to be of exponential type. Follows by applying the differentiation property to f′(t).
General differentiation | f⁽ⁿ⁾(t) | sⁿ F(s) − sⁿ⁻¹ f(0⁻) − … − f⁽ⁿ⁻¹⁾(0⁻) | f is assumed to be n-times differentiable, with nth derivative of exponential type. Follows by mathematical induction.
Frequency-domain integration | f(t)/t | ∫_s^∞ F(σ) dσ | This is deduced using the nature of frequency differentiation and conditional convergence.
Time-domain integration | ∫₀^t f(τ) dτ = (u ∗ f)(t) | (1/s) F(s) | u(t) is the Heaviside step function. Note (u ∗ f)(t) is the convolution of u(t) and f(t).
Time scaling | f(at), a > 0 | (1/a) F(s/a) |
Frequency shifting | e^(at) f(t) | F(s − a) |
Time shifting | f(t − τ) u(t − τ), τ > 0 | e^(−τs) F(s) | u(t) is the Heaviside step function.
Multiplication | f(t) g(t) | (1/2πi) ∫_(c−i∞)^(c+i∞) F(σ) G(s − σ) dσ | The integration is done along the vertical line Re(σ) = c that lies entirely within the region of convergence of F.^{[13]}
Convolution | (f ∗ g)(t) | F(s) G(s) |
Complex conjugation | f*(t) | F*(s*) |
Cross-correlation | (f ⋆ g)(t) | F*(−s*) G(s) |
Periodic function | f(t) with period T | (1/(1 − e^(−Ts))) ∫₀^T e^(−st) f(t) dt | f(t) is a periodic function of period T, so that f(t) = f(t + T) for all t ≥ 0. This is the result of the time-shifting property and the geometric series.

Final value theorem:

    f(+∞) = lim_(s→0) s F(s), if all poles of s F(s) are in the left half-plane.

The final value theorem is useful because it gives the long-term behaviour without having to perform partial fraction decompositions or other difficult algebra. If F(s) has poles in the right-hand plane or on the imaginary axis (e.g., if f(t) = e^t or f(t) = sin(t)), the behaviour of this formula is undefined.
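As a quick sanity check of the final value theorem, take f(t) = 1 − e^(−t), whose transform is F(s) = 1/(s(s+1)); the only pole of sF(s) is at s = −1, in the left half-plane, so the theorem applies. A minimal sketch:

```python
import math

def sF(s):
    # s·F(s) for F(s) = 1/(s(s+1)), i.e. s·F(s) = 1/(s+1).
    return s * (1.0 / (s * (s + 1.0)))

limit_s = sF(1e-9)                 # approximates lim_{s->0} s F(s)
long_time = 1.0 - math.exp(-40.0)  # f(t) = 1 - e^(-t) at large t
```

Both quantities approach 1, in agreement with the theorem.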
Relation to power series
The Laplace transform can be viewed as a continuous analogue of a power series. If a(n) is a discrete function of a positive integer n, then the power series associated to a(n) is the series

    Σ_n a(n) xⁿ,

where x is a real variable (see Z-transform). Replacing summation over n with integration over t, a continuous version of the power series becomes

    ∫₀^∞ f(t) x^t dt,

where the discrete function a(n) is replaced by the continuous one f(t). (See Mellin transform below.) Changing the base of the power from x to e gives

    ∫₀^∞ f(t) (e^(ln x))^t dt.

For this to converge for, say, all bounded functions f, it is necessary to require that ln x < 0. Making the substitution −s = ln x gives just the Laplace transform:

    ∫₀^∞ f(t) e^(−st) dt.

In other words, the Laplace transform is a continuous analog of a power series, in which the discrete parameter n is replaced by the continuous parameter t, and x is replaced by e^(−s).
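The analogy can be made concrete in the simplest case a(n) = 1 versus f(t) = 1: the power series sums to 1/(1 − x), while the Laplace transform is 1/s, and with x = e^(−s) the two agree for small s up to an Euler–Maclaurin correction of 1/2. A small illustrative sketch:

```python
import math

# Discrete: a(n) = 1 gives the geometric series sum 1/(1 - x); with the
# substitution x = e^(-s) this is 1/(1 - e^(-s)).
# Continuous: f(t) = 1 gives the Laplace transform 1/s.
s = 1e-3
discrete = 1.0 / (1.0 - math.exp(-s))
continuous = 1.0 / s
gap = discrete - continuous   # approaches 1/2 as s -> 0
```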
Relation to moments
The quantities

    μₙ = ∫₀^∞ tⁿ f(t) dt

are the moments of the function f. If the first n moments of f converge absolutely, then by repeated differentiation under the integral,

    (−1)ⁿ (𝓛f)⁽ⁿ⁾(0) = μₙ.

This is of special significance in probability theory, where the moments of a random variable X are given by the expectation values μₙ = E[Xⁿ]. Then the relation holds:

    μₙ = (−1)ⁿ (dⁿ/dsⁿ) E[e^(−sX)] |_(s=0).
Proof of the Laplace transform of a function's derivative
It is often convenient to use the differentiation property of the Laplace transform to find the transform of a function's derivative. This can be derived from the basic expression for a Laplace transform as follows:

    𝓛{f(t)}(s) = ∫_0⁻^∞ e^(−st) f(t) dt
               = [f(t) e^(−st)/(−s)]_0⁻^∞ − ∫_0⁻^∞ (e^(−st)/(−s)) f′(t) dt   (by parts)
               = f(0⁻)/s + (1/s) 𝓛{f′(t)}(s),

yielding

    𝓛{f′(t)}(s) = s·𝓛{f(t)}(s) − f(0⁻),

and in the bilateral case,

    𝓛{f′(t)}(s) = s ∫_−∞^∞ e^(−st) f(t) dt = s·𝓛{f(t)}(s).
The general result

    𝓛{f⁽ⁿ⁾(t)}(s) = sⁿ·𝓛{f(t)}(s) − sⁿ⁻¹ f(0⁻) − … − f⁽ⁿ⁻¹⁾(0⁻),

where f⁽ⁿ⁾ denotes the nth derivative of f, can then be established with an inductive argument.
Evaluating improper integrals
Let 𝓛{f(t)} = F(s); then (see the table above)

    𝓛{f(t)/t} = ∫_s^∞ F(p) dp,

or

    ∫₀^∞ (f(t)/t) e^(−st) dt = ∫_s^∞ F(p) dp.

Letting s → 0 gives the identity

    ∫₀^∞ f(t)/t dt = ∫₀^∞ F(p) dp,

provided that the interchange of limits can be justified. Even when the interchange cannot be justified, the calculation can be suggestive. For example, proceeding formally one has

    ∫₀^∞ (cos(at) − cos(bt))/t dt = ∫₀^∞ ( p/(p² + a²) − p/(p² + b²) ) dp = (1/2) ln((p² + a²)/(p² + b²)) |₀^∞ = ln(b/a).
The validity of this identity can be proved by other means. It is an example of a Frullani integral.
Another example is the Dirichlet integral.
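A classic Frullani-type identity, ∫₀^∞ (e^(−at) − e^(−bt))/t dt = ln(b/a), can be checked numerically. The sketch below assumes a trapezoidal rule on a finite interval and takes the integrand's value at t = 0 to be its limit, b − a:

```python
import math

a, b = 1.0, 4.0

def integrand(t):
    if t == 0.0:
        return b - a   # limit of (e^(-at) - e^(-bt))/t as t -> 0
    return (math.exp(-a * t) - math.exp(-b * t)) / t

# Trapezoidal rule on [0, 60]; the integrand decays exponentially.
n, upper = 400_000, 60.0
h = upper / n
total = 0.5 * (integrand(0.0) + integrand(upper))
for k in range(1, n):
    total += integrand(k * h)
total *= h

exact = math.log(b / a)   # Frullani's formula: ln(b/a)
```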
Relationship to other transforms
Laplace–Stieltjes transform
The (unilateral) Laplace–Stieltjes transform of a function g : ℝ → ℝ is defined by the Lebesgue–Stieltjes integral

    {𝓛*g}(s) = ∫₀^∞ e^(−st) dg(t).

The function g is assumed to be of bounded variation. If g is the antiderivative of f:

    g(x) = ∫₀^x f(t) dt,
then the Laplace–Stieltjes transform of g and the Laplace transform of f coincide. In general, the Laplace–Stieltjes transform is the Laplace transform of the Stieltjes measure associated to g. So in practice, the only distinction between the two transforms is that the Laplace transform is thought of as operating on the density function of the measure, whereas the Laplace–Stieltjes transform is thought of as operating on its cumulative distribution function.^{[14]}
Fourier transform
The continuous Fourier transform is equivalent to evaluating the bilateral Laplace transform with imaginary argument s = iω or s = 2πfi:^{[15]}

    f̂(ω) = 𝓕{f(t)} = 𝓛{f(t)}|_(s=iω) = F(s)|_(s=iω) = ∫_−∞^∞ e^(−iωt) f(t) dt.
This definition of the Fourier transform requires a prefactor of 1/2π on the reverse Fourier transform. This relationship between the Laplace and Fourier transforms is often used to determine the frequency spectrum of a signal or dynamical system.
The above relation is valid as stated if and only if the region of convergence (ROC) of F(s) contains the imaginary axis, σ = 0. For example, the function f(t) = cos(ω₀t) has a Laplace transform F(s) = s/(s² + ω₀²) whose ROC is Re(s) > 0. As F(s) has poles at s = ±iω₀, substituting s = iω in F(s) does not yield the Fourier transform of f(t)u(t), which contains terms proportional to the Dirac delta functions δ(ω ± ω₀).
However, a relation of the form

    lim_(σ→0⁺) F(σ + iω) = f̂(ω)

holds under much weaker conditions. For instance, this holds for the above example provided that the limit is understood as a weak limit of measures (see vague topology). General conditions relating the limit of the Laplace transform of a function on the boundary to the Fourier transform take the form of Paley–Wiener theorems.
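When the ROC does contain the imaginary axis, the substitution s = iω can be checked directly. A sketch for the illustrative case f(t) = e^(−t)u(t), with F(s) = 1/(s + 1) and ROC Re(s) > 0, comparing F(iω) against a trapezoidal approximation of the Fourier integral:

```python
import cmath

def F(s):
    # Laplace transform of f(t) = e^(-t) u(t); ROC Re(s) > 0 contains
    # the imaginary axis, so the Fourier transform is F(i*omega).
    return 1.0 / (s + 1.0)

omega = 2.0
# Numeric Fourier integral: integral from 0 to infinity of
# e^(-i*omega*t) e^(-t) dt, truncated at t = 40 (exponential decay).
n, upper = 200_000, 40.0
h = upper / n
vals = [cmath.exp(-(1.0 + 1j * omega) * (k * h)) for k in range(n + 1)]
fourier = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))  # trapezoidal rule

assert abs(fourier - F(1j * omega)) < 1e-5
```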
Mellin transform
The Mellin transform and its inverse are related to the two-sided Laplace transform by a simple change of variables. If in the Mellin transform

    G(s) = 𝓜{g(θ)} = ∫₀^∞ θ^s g(θ) dθ/θ

we set θ = e^(−t), we get a two-sided Laplace transform.
Ztransform
The unilateral or one-sided Z-transform is simply the Laplace transform of an ideally sampled signal with the substitution of

    z = e^(sT),

where T = 1/f_s is the sampling period (in units of time, e.g., seconds) and f_s is the sampling rate (in samples per second or hertz).

Let

    Δ_T(t) = Σ_(n=0)^∞ δ(t − nT)

be a sampling impulse train (also called a Dirac comb) and

    x_q(t) = x(t) Δ_T(t) = Σ_(n=0)^∞ x(nT) δ(t − nT)

be the sampled representation of the continuous-time x(t), with x[n] = x(nT).

The Laplace transform of the sampled signal x_q(t) is

    X_q(s) = Σ_(n=0)^∞ x(nT) e^(−nsT).

This is the precise definition of the unilateral Z-transform of the discrete function x[n],

    X(z) = Σ_(n=0)^∞ x[n] z^(−n),

with the substitution of z → e^(sT).

Comparing the last two equations, we find the relationship between the unilateral Z-transform and the Laplace transform of the sampled signal:

    X_q(s) = X(z) |_(z = e^(sT)).
The similarity between the Z and Laplace transforms is expanded upon in the theory of time scale calculus.
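The substitution z = e^(sT) can be illustrated numerically. The sketch below assumes the illustrative signal x(t) = e^(−at), whose samples x[n] = e^(−anT) have the closed-form unilateral Z-transform z/(z − e^(−aT)); evaluating that at z = e^(sT) should match the Laplace transform of the impulse-sampled signal:

```python
import math

a, T, s = 0.5, 0.1, 1.0     # decay rate, sampling period, real transform variable
z = math.exp(s * T)         # the substitution z = e^(sT)

# Laplace transform of the impulse-sampled e^(-at): sum of e^(-anT) e^(-snT).
# The tail beyond n = 2000 is negligible for these parameters.
sampled = sum(math.exp(-(a + s) * n * T) for n in range(2000))

# Closed-form unilateral Z-transform of x[n] = e^(-anT), evaluated at z = e^(sT).
closed = z / (z - math.exp(-a * T))
```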
Borel transform
The integral form of the Borel transform

    F(s) = ∫₀^∞ f(t) e^(−st) dt

is a special case of the Laplace transform for f an entire function of exponential type, meaning that

    |f(z)| ≤ A e^(B|z|)
for some constants A and B. The generalized Borel transform allows a different weighting function to be used, rather than the exponential function, to transform functions not of exponential type. Nachbin's theorem gives necessary and sufficient conditions for the Borel transform to be well defined.
Fundamental relationships
Since an ordinary Laplace transform can be written as a special case of a two-sided transform, and since the two-sided transform can be written as the sum of two one-sided transforms, the theories of the Laplace, Fourier, Mellin, and Z-transforms are at bottom the same subject. However, a different point of view and different characteristic problems are associated with each of these four major integral transforms.
Table of selected Laplace transforms
The following table provides Laplace transforms for many common functions of a single variable.^{[16]}^{[17]} For definitions and explanations, see the Explanatory Notes at the end of the table.
Because the Laplace transform is a linear operator:
 The Laplace transform of a sum is the sum of Laplace transforms of each term.
 The Laplace transform of a multiple of a function is that multiple times the Laplace transformation of that function.
Using this linearity, and various trigonometric, hyperbolic, and complex-number (etc.) properties and/or identities, some Laplace transforms can be obtained from others more quickly than by using the definition directly.
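For instance, writing cosh(at) = (e^(at) + e^(−at))/2 and applying linearity to the known transforms 1/(s − a) and 1/(s + a) yields s/(s² − a²). A small sketch confirming the algebra at a few illustrative points:

```python
# Linearity: L{cosh(at)} = (1/2)(L{e^(at)} + L{e^(-at)})
#          = (1/2)(1/(s - a) + 1/(s + a)) = s/(s^2 - a^2),  Re(s) > a.
a = 0.75
for s in (1.0, 2.0, 5.0):
    from_linearity = 0.5 * (1.0 / (s - a) + 1.0 / (s + a))
    table_entry = s / (s * s - a * a)
    assert abs(from_linearity - table_entry) < 1e-12
```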
The unilateral Laplace transform takes as input a function whose time domain is the nonnegative reals, which is why all of the time domain functions in the table below are multiples of the Heaviside step function, u(t). The entries of the table that involve a time delay τ are required to be causal (meaning that τ > 0). A causal system is a system where the impulse response h(t) is zero for all time t prior to t = 0. In general, the region of convergence for causal systems is not the same as that of anticausal systems.
Function | Time domain f(t) = 𝓛⁻¹{F(s)} | Laplace s-domain F(s) = 𝓛{f(t)} | Region of convergence | Comment
unit impulse | δ(t) | 1 | all s | inspection
delayed impulse | δ(t − τ) | e^(−τs) | all s | time shift of unit impulse
unit step | u(t) | 1/s | Re(s) > 0 | integrate unit impulse
delayed unit step | u(t − τ) | (1/s) e^(−τs) | Re(s) > 0 | time shift of unit step
ramp | t · u(t) | 1/s² | Re(s) > 0 | integrate unit impulse twice
nth power (for integer n) | tⁿ · u(t) | n!/s^(n+1) | Re(s) > 0 (n > −1) | integrate unit step n times
qth power (for complex q) | t^q · u(t) | Γ(q + 1)/s^(q+1) | Re(s) > 0, Re(q) > −1 | ^{[18]}^{[19]}
nth root | t^(1/n) · u(t) | (1/s^(1+1/n)) Γ(1 + 1/n) | Re(s) > 0 | set q = 1/n above
nth power with frequency shift | tⁿ e^(−αt) · u(t) | n!/(s + α)^(n+1) | Re(s) > −α | integrate unit step, apply frequency shift
delayed nth power with frequency shift | (t − τ)ⁿ e^(−α(t−τ)) · u(t − τ) | n! e^(−τs)/(s + α)^(n+1) | Re(s) > −α | integrate unit step, apply frequency shift, apply time shift
exponential decay | e^(−αt) · u(t) | 1/(s + α) | Re(s) > −α | frequency shift of unit step
two-sided exponential decay (only for bilateral transform) | e^(−α|t|) | 2α/(α² − s²) | −α < Re(s) < α | frequency shift of unit step
exponential approach | (1 − e^(−αt)) · u(t) | α/(s(s + α)) | Re(s) > 0 | unit step minus exponential decay
sine | sin(ωt) · u(t) | ω/(s² + ω²) | Re(s) > 0 |
cosine | cos(ωt) · u(t) | s/(s² + ω²) | Re(s) > 0 |
hyperbolic sine | sinh(αt) · u(t) | α/(s² − α²) | Re(s) > α |
hyperbolic cosine | cosh(αt) · u(t) | s/(s² − α²) | Re(s) > α |
exponentially decaying sine wave | e^(−αt) sin(ωt) · u(t) | ω/((s + α)² + ω²) | Re(s) > −α |
exponentially decaying cosine wave | e^(−αt) cos(ωt) · u(t) | (s + α)/((s + α)² + ω²) | Re(s) > −α |
natural logarithm | ln(t) · u(t) | −(1/s)(ln(s) + γ) | Re(s) > 0 |
Bessel function of the first kind, of order n | J_n(ωt) · u(t) | (√(s² + ω²) − s)ⁿ / (ωⁿ √(s² + ω²)) | Re(s) > 0 (n > −1) |
Error function | erf(t) · u(t) | (1/s) e^(s²/4) erfc(s/2) | Re(s) > 0 |
Explanatory notes: u(t) represents the Heaviside step function; δ(t) represents the Dirac delta function; Γ(z) represents the gamma function; γ is the Euler–Mascheroni constant; t is a real number, typically time; s is the complex angular frequency; and α, β, τ, ω, and λ are real numbers.
sDomain equivalent circuits and impedances
The Laplace transform is often used in circuit analysis, and simple conversions of circuit elements to the s-domain can be made. Circuit elements can be transformed into impedances, very similar to phasor impedances.
Here is a summary of equivalents:
Note that the resistor is exactly the same in the time domain and the s-domain. The sources are put in if there are initial conditions on the circuit elements. For example, if a capacitor has an initial voltage across it, or if the inductor has an initial current through it, the sources inserted in the s-domain account for that.
The equivalents for current and voltage sources are simply derived from the transformations in the table above.
Examples: How to apply the properties and theorems
The Laplace transform is used frequently in engineering and physics; the output of a linear time-invariant system can be calculated by convolving its unit impulse response with the input signal. Performing this calculation in Laplace space turns the convolution into a multiplication, which is easier to handle because of its algebraic form. For more information, see control theory.
The Laplace transform can also be used to solve differential equations and is used extensively in electrical engineering. The Laplace transform reduces a linear differential equation to an algebraic equation, which can then be solved by the formal rules of algebra. The original differential equation can then be solved by applying the inverse Laplace transform. The English electrical engineer Oliver Heaviside first proposed a similar scheme, although without using the Laplace transform; and the resulting operational calculus is credited as the Heaviside calculus.
Example 1: Solving a differential equation
In nuclear physics, the following fundamental relationship governs radioactive decay: the number of radioactive atoms N in a sample of a radioactive isotope decays at a rate proportional to N. This leads to the first-order linear differential equation

    dN/dt = −λN,

where λ is the decay constant. The Laplace transform can be used to solve this equation.

Rearranging the equation to one side, we have

    dN/dt + λN = 0.

Next, we take the Laplace transform of both sides of the equation:

    (s Ñ(s) − N₀) + λ Ñ(s) = 0,

where

    Ñ(s) = 𝓛{N(t)}

and

    N₀ = N(0).

Solving, we find

    Ñ(s) = N₀/(s + λ).

Finally, we take the inverse Laplace transform to find the general solution

    N(t) = 𝓛⁻¹{Ñ(s)} = N₀ e^(−λt),

which is indeed the correct form for radioactive decay.
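The solution N(t) = N₀ e^(−λt) can be sanity-checked by verifying that it satisfies the decay equation dN/dt = −λN. A minimal sketch using a central finite difference, with illustrative values N₀ = 100 and λ = 0.3:

```python
import math

N0, lam = 100.0, 0.3   # illustrative initial amount and decay constant

def N(t):
    # Candidate solution recovered by the inverse Laplace transform.
    return N0 * math.exp(-lam * t)

# Check dN/dt = -lam * N via a central difference at several times.
h = 1e-6
for t in (0.5, 1.0, 5.0):
    dNdt = (N(t + h) - N(t - h)) / (2.0 * h)
    assert abs(dNdt + lam * N(t)) < 1e-4
```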
Example 2: Deriving the complex impedance for a capacitor
In the theory of electrical circuits, the current flow in a capacitor is proportional to the capacitance and the rate of change of the electrical potential (in SI units). Symbolically, this is expressed by the differential equation

    i = C dv/dt,
where C is the capacitance (in farads) of the capacitor, i = i(t) is the electric current (in amperes) through the capacitor as a function of time, and v = v(t) is the voltage (in volts) across the terminals of the capacitor, also as a function of time.
Taking the Laplace transform of this equation, we obtain

    I(s) = C(s V(s) − V₀),

where

    I(s) = 𝓛{i(t)},   V(s) = 𝓛{v(t)},

and

    V₀ = v(0).
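The standard differentiation property, 𝓛{C dv/dt} = C(s V(s) − v(0)), can be checked on a hypothetical worked case: take v(t) = e^(−t) across a C = 2 F capacitor, so i(t) = C dv/dt = −2 e^(−t) and I(s) = −2/(s + 1). A small sketch (the voltage waveform and capacitance are illustrative choices, not from the text):

```python
# v(t) = e^(-t), so V(s) = 1/(s+1) and v(0) = 1; with C = 2 F the current
# is i(t) = C dv/dt = -2 e^(-t), whose transform is I(s) = -2/(s+1).
C, v0 = 2.0, 1.0
for s in (0.5, 1.0, 3.0):
    V = 1.0 / (s + 1.0)        # L{e^(-t)}
    I_direct = -2.0 / (s + 1.0)  # L{-2 e^(-t)}
    I_rule = C * (s * V - v0)    # differentiation property of the transform
    assert abs(I_direct - I_rule) < 1e-12
```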