# Determinant

In linear algebra, the **determinant** is a value associated with a square matrix. It can be computed from the entries of the matrix by a specific arithmetic expression, although other ways to determine its value exist as well. The determinant provides important information *1)* about a matrix of coefficients of a system of linear equations, or *2)* about a matrix that corresponds to a linear transformation of a vector space. In the *first* case the system has a unique solution exactly when the determinant is nonzero; when the determinant is zero there are either no solutions or infinitely many solutions. In the *second* case the transformation has an inverse operation exactly when the determinant is nonzero. A geometric interpretation can be given to the value of the determinant of a square matrix with real entries: the absolute value of the determinant gives the scale factor by which area or volume (or a higher-dimensional analogue) is multiplied under the associated linear transformation, while its sign indicates whether the transformation preserves orientation. Thus a 2 × 2 matrix with determinant −2, when applied to a region of the plane with finite area, will transform that region into one with twice the area, while reversing its orientation.

Determinants occur throughout mathematics. The use of determinants in calculus includes the Jacobian determinant in the substitution rule for integrals of functions of several variables. They are used to define the characteristic polynomial of a matrix that is an essential tool in eigenvalue problems in linear algebra. In some cases they are used just as a compact notation for expressions that would otherwise be unwieldy to write down.

The determinant of a matrix *A* is denoted det(*A*), det *A*, or |*A*|.^{[1]} In the case where the matrix entries are written out in full, the determinant is denoted by surrounding the matrix entries by vertical bars instead of the brackets or parentheses of the matrix. For instance, the determinant of the matrix

$$\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}$$

is written

$$\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix}$$

and has the value

$$aei + bfg + cdh - ceg - bdi - afh.$$
Although most often used for matrices whose entries are real or complex numbers, the definition of the determinant only involves addition, subtraction and multiplication, and so it can be defined for square matrices with entries taken from any commutative ring. Thus for instance the determinant of a matrix with integer coefficients will be an integer, and the matrix has an inverse with integer coefficients if and only if this determinant is 1 or −1 (these being the only invertible elements of the integers). For square matrices with entries in a non-commutative ring, for instance the quaternions, there is no unique definition for the determinant, and no definition that has all the usual properties of determinants over commutative rings.
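The integer case can be illustrated with a small sketch in Python (the matrix below is an arbitrary example chosen for this note): a 2 × 2 integer matrix whose determinant is 1 has an inverse whose entries are again integers.

```python
# Determinant and inverse of a 2x2 integer matrix.
# For [[a, b], [c, d]], det = a*d - b*c and, when det is +1 or -1,
# the inverse (1/det) * [[d, -b], [-c, a]] again has integer entries.

def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

A = [[2, 1], [3, 2]]           # example integer matrix
d = det2(A)                    # 2*2 - 1*3 = 1
print(d)                       # 1, so the inverse is an integer matrix

inv = [[A[1][1] // d, -A[0][1] // d],
       [-A[1][0] // d, A[0][0] // d]]
print(inv)                     # [[2, -1], [-3, 2]]
```

Multiplying `A` by `inv` gives the identity matrix, confirming that no fractions are needed when the determinant is a unit of the integers.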

## Definition

There are various ways to define the determinant of a square matrix *A*, i.e. one with the same number of rows and columns. Perhaps the most natural way is expressed in terms of the columns of the matrix. If we write an *n* × *n* matrix *A* in terms of its column vectors

$$A = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix}$$

where the $a_j$ are vectors of size *n*, then the determinant of *A* is defined so that

$$\det \begin{bmatrix} a_1 & \cdots & b a_j + c v & \cdots & a_n \end{bmatrix} = b \det(A) + c \det \begin{bmatrix} a_1 & \cdots & v & \cdots & a_n \end{bmatrix}$$

$$\det \begin{bmatrix} a_1 & \cdots & a_j & a_{j+1} & \cdots & a_n \end{bmatrix} = -\det \begin{bmatrix} a_1 & \cdots & a_{j+1} & a_j & \cdots & a_n \end{bmatrix}$$

$$\det(I) = 1$$

where *b* and *c* are scalars, *v* is any vector of size *n* and *I* is the identity matrix of size *n*. These equations say that the determinant is a linear function of each column, that interchanging adjacent columns reverses the sign of the determinant, and that the determinant of the identity matrix is 1. These properties mean that the determinant is an alternating multilinear function of the columns that maps the identity matrix to the underlying unit scalar. They suffice to uniquely calculate the determinant of any square matrix. Provided the underlying scalars form a field (more generally, a commutative ring with unity), the definition below shows that such a function exists, and it can be shown to be unique.^{[2]}
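The three defining properties can be checked numerically. The sketch below is a minimal illustration for the 3 × 3 case, using the standard cofactor expansion as the determinant (the column vectors and scalars are arbitrary example values):

```python
def det3(c1, c2, c3):
    # 3x3 determinant from column vectors, via the standard expansion.
    # The matrix has rows (a, b, c), (d, e, f), (g, h, i).
    (a, d, g), (b, e, h), (c, f, i) = c1, c2, c3
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

c1, c2, c3 = (1, 0, 2), (3, 1, 0), (0, 4, 1)   # example columns
v = (5, 6, 7)                                   # an arbitrary vector
b, c = 2, 3                                     # example scalars

# Linearity in the second column:
lhs = det3(c1, tuple(b * x + c * y for x, y in zip(c2, v)), c3)
rhs = b * det3(c1, c2, c3) + c * det3(c1, v, c3)
print(lhs == rhs)                               # True

# Interchanging adjacent columns reverses the sign:
print(det3(c2, c1, c3) == -det3(c1, c2, c3))    # True

# The identity matrix maps to 1:
print(det3((1, 0, 0), (0, 1, 0), (0, 0, 1)))    # 1
```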

Equivalently, the determinant can be expressed as a sum of products of entries of the matrix where each product has *n* terms and the coefficient of each product is −1 or 1 or 0 according to a given rule: it is a polynomial expression of the matrix entries. This expression grows rapidly with the size of the matrix (an *n* × *n* matrix contributes *n*! terms), so it will first be given explicitly for the case of 2 × 2 matrices and 3 × 3 matrices, followed by the rule for arbitrary size matrices, which subsumes these two cases.

Assume *A* is a square matrix with *n* rows and *n* columns, so that it can be written as

$$A = \begin{bmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & \cdots & a_{n,n} \end{bmatrix}.$$

The entries can be numbers or expressions (as happens when the determinant is used to define a characteristic polynomial); the definition of the determinant depends only on the fact that they can be added and multiplied together in a commutative manner.

The determinant of *A* is denoted as det(*A*), or it can be denoted directly in terms of the matrix entries by writing enclosing bars instead of brackets:

$$\begin{vmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & \cdots & a_{n,n} \end{vmatrix}.$$

### 2 × 2 matrices

The determinant of a 2 × 2 matrix is defined by

$$\det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc.$$

If the matrix entries are real numbers, the matrix *A* can be used to represent two linear maps: one that maps the standard basis vectors to the rows of *A*, and one that maps them to the columns of *A*. In either case, the images of the basis vectors form a parallelogram that represents the image of the unit square under the mapping. The parallelogram defined by the rows of the above matrix is the one with vertices at (0, 0), (*a*, *b*), (*a* + *c*, *b* + *d*), and (*c*, *d*), as shown in the accompanying diagram. The absolute value of *ad* − *bc* is the area of the parallelogram, and thus represents the scale factor by which areas are transformed by *A*. (The parallelogram formed by the columns of *A* is in general a different parallelogram, but since the determinant is symmetric with respect to rows and columns, the area will be the same.)
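The area claim can be verified directly. The sketch below, with arbitrary example rows, compares |*ad* − *bc*| against the area of the parallelogram computed independently from its vertices by the shoelace formula:

```python
# Area scaling by a 2x2 matrix: the parallelogram spanned by the
# rows (a, b) and (c, d) has area |a*d - b*c|.
# Example values chosen for illustration.

a, b = 3, 1   # first row: image of the first standard basis vector
c, d = 1, 2   # second row: image of the second standard basis vector

det = a * d - b * c
print(abs(det))   # 5

# Independent check via the shoelace formula on the vertices
# (0,0), (a,b), (a+c,b+d), (c,d) of the parallelogram.
verts = [(0, 0), (a, b), (a + c, b + d), (c, d)]
shoelace = abs(sum(x1 * y2 - x2 * y1
                   for (x1, y1), (x2, y2) in zip(verts, verts[1:] + verts[:1]))) / 2
print(shoelace)   # 5.0, matching |det|
```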

The determinant itself (keeping the sign rather than taking the absolute value) gives the *oriented area* of the parallelogram. The oriented area is the same as the usual area, except that it is negative when the angle from the first to the second vector defining the parallelogram turns in a clockwise direction (which is opposite to the direction one would get for the identity matrix).

Thus the determinant gives the scaling factor and the orientation induced by the mapping represented by *A*. When the determinant is equal to one, the linear mapping defined by the matrix is equi-areal and orientation-preserving.

The object known as the *bivector* is related to these ideas. In 2D, it can be interpreted as an *oriented plane segment* formed by imagining two vectors each with origin (0, 0), and coordinates (*a*, *b*) and (*c*, *d*). The bivector magnitude (denoted (*a*, *b*) ∧ (*c*, *d*)) is the *signed area*, which is also the determinant *ad* − *bc*.^{[3]}

### 3 × 3 matrices

The determinant of a 3 × 3 matrix is defined by

$$\det \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} = a(ei - fh) - b(di - fg) + c(dh - eg) = aei + bfg + cdh - ceg - bdi - afh.$$

The rule of Sarrus is a mnemonic for the 3 × 3 matrix determinant: the sum of the products of three diagonal north-west to south-east lines of matrix elements, minus the sum of the products of three diagonal south-west to north-east lines of elements, when the copies of the first two columns of the matrix are written beside it as in the illustration. This scheme for calculating the determinant of a 3 × 3 matrix does not carry over into higher dimensions.
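The rule of Sarrus can be written out directly as a short sketch (function name and the example matrix are illustrative): the three north-west to south-east diagonal products minus the three south-west to north-east ones.

```python
def sarrus(m):
    # Rule of Sarrus for a 3x3 matrix: sum of the three "down-right"
    # diagonal products minus the sum of the three "up-right" diagonal
    # products, as read off the matrix with its first two columns
    # repeated to the right.
    (a, b, c), (d, e, f), (g, h, i) = m
    return (a * e * i + b * f * g + c * d * h) \
         - (c * e * g + a * f * h + b * d * i)

print(sarrus([[2, 0, 1],
              [1, 3, 0],
              [0, 1, 4]]))   # 2*3*4 + 0*0*0 + 1*1*1 - (1*3*0 + 2*0*1 + 0*1*4) = 25
```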

### *n* × *n* matrices

The determinant of a matrix of arbitrary size can be defined by the Leibniz formula or the Laplace formula.

The Leibniz formula for the determinant of an *n* × *n* matrix *A* is

$$\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma_i}.$$

Here the sum is computed over all permutations σ of the set {1, 2, ..., *n*}. A permutation is a function that reorders this set of integers. The value in the *i*th position after the reordering σ is denoted σ_{i}. For example, for *n* = 3, the original sequence 1, 2, 3 might be reordered to σ = [2, 3, 1], with σ_{1} = 2, σ_{2} = 3, and σ_{3} = 1. The set of all such permutations (also known as the symmetric group on *n* elements) is denoted S_{n}. For each permutation σ, sgn(σ) denotes the signature of σ, a value that is +1 whenever the reordering given by σ can be achieved by successively interchanging two entries an even number of times, and −1 whenever it can be achieved by an odd number of such interchanges.
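The signature can be computed by counting inversions: each interchange of two adjacent entries changes the inversion count by exactly one, so the parity of the inversion count equals the parity of any sequence of interchanges producing σ. A minimal sketch:

```python
from itertools import permutations

def sgn(sigma):
    # Signature of a permutation given as a tuple of values:
    # +1 if the inversion count is even, -1 if it is odd.
    inversions = sum(sigma[i] > sigma[j]
                     for i in range(len(sigma))
                     for j in range(i + 1, len(sigma)))
    return -1 if inversions % 2 else 1

print(sgn((2, 3, 1)))   # +1: reachable by an even number of swaps
print(sgn((2, 1, 3)))   # -1: a single swap of the first two entries

# Signatures of all six permutations in S_3, in lexicographic order:
print([sgn(p) for p in permutations((1, 2, 3))])
# [1, -1, -1, 1, 1, -1]
```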

In any of the summands, the term

$$\prod_{i=1}^{n} a_{i,\sigma_i}$$

is notation for the product of the entries at positions (*i*, σ_{i}), where *i* ranges from 1 to *n*:

$$a_{1,\sigma_1} \cdot a_{2,\sigma_2} \cdots a_{n,\sigma_n}.$$
For example, the determinant of a 3 × 3 matrix *A* (*n* = 3) is

$$\det(A) = \sum_{\sigma \in S_3} \operatorname{sgn}(\sigma) \prod_{i=1}^{3} a_{i,\sigma_i} = a_{1,1}a_{2,2}a_{3,3} - a_{1,1}a_{2,3}a_{3,2} - a_{1,2}a_{2,1}a_{3,3} + a_{1,2}a_{2,3}a_{3,1} + a_{1,3}a_{2,1}a_{3,2} - a_{1,3}a_{2,2}a_{3,1}.$$
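The Leibniz formula translates directly into a program, though a factorially slow one. The sketch below sums sgn(σ) · ∏ a_{i,σ_i} over all permutations (function names are illustrative):

```python
from itertools import permutations
from math import prod

def sgn(sigma):
    # Signature via the parity of the inversion count.
    inv = sum(sigma[i] > sigma[j]
              for i in range(len(sigma)) for j in range(i + 1, len(sigma)))
    return -1 if inv % 2 else 1

def det_leibniz(a):
    # det(A) = sum over permutations sigma of sgn(sigma) * prod a[i][sigma_i].
    # O(n! * n) work: fine for tiny matrices, impractical beyond n ~ 10.
    n = len(a)
    return sum(sgn(sigma) * prod(a[i][sigma[i]] for i in range(n))
               for sigma in permutations(range(n)))

A = [[2, 0, 1],
     [1, 3, 0],
     [0, 1, 4]]
print(det_leibniz(A))   # 25, matching the six-term expansion above
```

In practice, determinants of large matrices are computed by Gaussian elimination in O(n³) time rather than by this formula; the Leibniz sum is mainly of theoretical use.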