# Determinants

## Introduction

We will first introduce the determinant in the way it is most often encountered and used in practice. We will state the main properties of the determinant and work through a few examples. After that we will introduce the same concept through alternating multilinear forms. This approach will allow us to prove the stated theorems and gain a better understanding of the theory and of the connection between determinants and matrices.

## Determinant: definition, properties, and examples

Definition 1 (Recursive definition of determinant)

• Assume that $$2\times 2$$ matrix $$A$$ is of the following form: $$\displaystyle A=\left[\begin{array}{cc} a_{11}& a_{12}\\a_{21}&a_{22}\end{array}\right]$$. Then $$\det(A)$$ is defined to be the following number: $$\det(A)=a_{11}a_{22}-a_{12}a_{21}$$.

• Assume that $$A$$ is an $$n\times n$$ matrix and that $$A_{11}$$, $$A_{12}$$, $$\dots$$, $$A_{1n}$$ are $$(n-1)\times(n-1)$$ matrices obtained from $$A$$ in the following way: $$A_{1j}$$ is the matrix obtained by removing the first row and the $$j$$-th column from $$A$$. Then the determinant of $$A$$ is defined as: $\det(A)=a_{11}\cdot \det(A_{11})-a_{12}\cdot \det(A_{12})+a_{13}\cdot \det(A_{13})-a_{14}\cdot \det(A_{14})+\cdots+(-1)^{n+1} a_{1n}\cdot \det(A_{1n}).$

The determinant of the matrix $$\displaystyle A=\left[\begin{array}{cccc} a_{11}& a_{12}&\cdots&a_{1n}\\a_{21}&a_{22}&\cdots&a_{2n}\\ & &\vdots&\\ a_{n1}&a_{n2}&\cdots&a_{nn}\end{array}\right]$$ is also denoted by $\det(A)=\det\left[\begin{array}{cccc} a_{11}& a_{12}&\cdots&a_{1n}\\a_{21}&a_{22}&\cdots&a_{2n}\\ & &\vdots&\\ a_{n1}&a_{n2}&\cdots&a_{nn}\end{array}\right]=\left|\begin{array}{cccc} a_{11}& a_{12}&\cdots&a_{1n}\\a_{21}&a_{22}&\cdots&a_{2n}\\ & &\vdots&\\ a_{n1}&a_{n2}&\cdots&a_{nn}\end{array}\right|.$

Example 1

Evaluate the determinant $$\displaystyle \left|\begin{array}{ccc} 2&3&-2\\ 5&3&-5\\ 4&1&7 \end{array}\right|$$.
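The recursive definition translates directly into code. The sketch below (a hypothetical helper, not part of the text) expands along the first row exactly as in Definition 1; note that this takes $$O(n!)$$ time, so it is only practical for small matrices. Applied to Example 1 it gives $$-99$$.

```python
# Direct implementation of Definition 1: expand along the first row,
# recursing on the (n-1)x(n-1) minors A_{1j}.  O(n!) time; a sketch
# for small matrices only.

def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    if n == 2:
        return A[0][0] * A[1][1] - A[0][1] * A[1][0]
    total = 0
    for j in range(n):
        # A_{1j}: remove the first row and the j-th column
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

# Example 1: the 3x3 determinant from the text
A = [[2, 3, -2],
     [5, 3, -5],
     [4, 1, 7]]
print(det(A))  # -99
```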

Theorem 1 (Main properties of determinant)

Assume that $$A$$ is an $$n\times n$$ matrix whose entry on the position $$(i,j)$$ is equal to $$a_{ij}$$. The following properties hold for the determinant of $$A$$.

• (a) If $$A^T$$ is the transpose of $$A$$ (i.e. if $$(a_{ij})_{i,j=1}^n$$ are the entries of $$A$$, then the $$(i,j)$$ entry of $$A^T$$ is $$a_{ji}$$), then $$\det(A)=\det(A^T)$$.

• (b) If $$B$$ is the matrix obtained from $$A$$ by multiplying each entry of the $$i$$-th row of $$A$$ by $$\alpha$$, then $$\det(B)=\alpha\det(A)$$. The same holds for columns.

• (c) If $$B$$ is the matrix obtained from $$A$$ by exchanging the $$i$$-th and $$j$$-th rows, then $$\det(B)=-\det(A)$$. The same holds for columns.

• (d) If two rows of the matrix $$A$$ are the same, then $$\det(A)=0$$. The same holds for columns.

• (e) If $$B$$ is the matrix obtained from $$A$$ by replacing the $$i$$-th row with $$(a_{i1}+\alpha a_{j1},a_{i2}+\alpha a_{j2},\dots, a_{in}+\alpha a_{jn})$$, for some $$\alpha\in\mathbb R$$ and some $$j\neq i$$, then $$\det(B)=\det(A)$$. The same holds for columns.

• (f) If $$A$$ and $$B$$ are two $$n\times n$$ matrices, then $$\det(A\cdot B)=\det(A)\cdot \det(B)$$.
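Properties (b), (c), (e), and (f) are easy to check numerically. The sketch below uses a compact first-row cofactor expansion (the sample matrices and helper names are ours, chosen for illustration):

```python
# Numerical sanity check of properties (b), (c), (e), (f) of Theorem 1
# on sample 3x3 matrices, using first-row cofactor expansion for det.

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 3], [0, 4, 5], [1, 0, 6]]
B = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]

# (b) scaling row 0 by 7 scales the determinant by 7
scaled = [[7 * x for x in A[0]], A[1], A[2]]
assert det(scaled) == 7 * det(A)

# (c) swapping rows 0 and 1 flips the sign
swapped = [A[1], A[0], A[2]]
assert det(swapped) == -det(A)

# (e) adding 5 * (row 1) to row 0 leaves the determinant unchanged
added = [[x + 5 * y for x, y in zip(A[0], A[1])], A[1], A[2]]
assert det(added) == det(A)

# (f) det(AB) = det(A) * det(B)
assert det(matmul(A, B)) == det(A) * det(B)
print("all properties verified")
```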

Example 2

Evaluate the determinant $$\displaystyle \left|\begin{array}{ccccc} 5&10&0&0&0\\ 3&0&9&0&0\\ 1&1&0&0&3\\ 0&0&0&1&2\\ 2&4&6&8&10 \end{array}\right|$$.
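By hand, one would exploit the zeros (for instance, the fourth column has only two nonzero entries). As a check, the same recursive cofactor expansion from Definition 1 (rewritten here so the snippet is self-contained) gives $$270$$:

```python
# Checking Example 2 numerically with first-row cofactor expansion.
def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

M = [[5, 10, 0, 0, 0],
     [3, 0, 9, 0, 0],
     [1, 1, 0, 0, 3],
     [0, 0, 0, 1, 2],
     [2, 4, 6, 8, 10]]
print(det(M))  # 270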

Example 3 (Vandermonde determinant)

Evaluate the determinant $$\displaystyle \left|\begin{array}{ccccc} 1& \alpha_1&\alpha_1^2&\cdots&\alpha_1^{n-1}\\ 1& \alpha_2&\alpha_2^2&\cdots&\alpha_2^{n-1}\\ & & &\vdots& \\ 1& \alpha_n&\alpha_n^2&\cdots&\alpha_n^{n-1}\\ \end{array}\right|$$.
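The Vandermonde determinant is known to equal $$\prod_{1\leq i<j\leq n}(\alpha_j-\alpha_i)$$. The sketch below (sample values ours) checks this closed form against the cofactor-expansion definition:

```python
# Check the Vandermonde closed form: det equals the product of
# (alpha_j - alpha_i) over all pairs i < j, for sample alphas.
from itertools import combinations

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

alphas = [2, 3, 5, 7]
V = [[a ** k for k in range(len(alphas))] for a in alphas]
product = 1
for (i, j) in combinations(range(len(alphas)), 2):
    product *= alphas[j] - alphas[i]
assert det(V) == product
print(det(V))  # 240
```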

## Multilinear forms

Definition 2

A function $$L: \left(\mathbb R^k\right)^k\to\mathbb R$$ is called a multilinear form if it satisfies: For every $$i\in\{1,2,\dots, k\}$$, every set of vectors $$\{v_1, \dots, v_{i-1}, v_i^{\prime},v_i^{\prime\prime},v_{i+1},\dots, v_k\}$$, and every two scalars $$\alpha^{\prime}$$ and $$\alpha^{\prime\prime}$$ the following holds: $L\left(v_1,\dots, \alpha^{\prime} v_i^{\prime}+\alpha^{\prime\prime} v_i^{\prime\prime},\dots, v_k\right)=\alpha^{\prime} L(v_1,\dots, v_{i-1},v_i^{\prime},v_{i+1},\dots, v_k)+\alpha^{\prime\prime} L(v_1,\dots, v_{i-1},v_i^{\prime\prime},v_{i+1},\dots, v_k).$

Definition 3

A multilinear form $$L: \left(\mathbb R^k\right)^k\to\mathbb R$$ is called alternating if for every $$1\leq i< j\leq k$$ and every $$k$$ vectors $$v_1$$, $$\dots$$, $$v_k$$ the following holds: $L(v_1, \dots,v_{i-1}, v_i, v_{i+1},\dots, v_{j-1},v_j,v_{j+1},\dots, v_k)=-L(v_1,\dots,v_{i-1}, v_j, v_{i+1},\dots, v_{j-1},v_i,v_{j+1},\dots, v_k).$

Example 4

Assume that $$\phi:\left(\mathbb R^3\right)^3\to \mathbb R$$ is an alternating multilinear form, and assume that $$e_1$$, $$e_2$$, and $$e_3$$ are three vectors such that $$\phi(e_1,e_2,e_3)=1$$. Determine $$\phi(e_1,e_3,e_1)$$, $$\phi(e_1,e_3,e_2)$$, and $$\phi(e_3,e_1,e_2)$$.
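One concrete model of such a $$\phi$$ (anticipating the later sections) is the $$3\times 3$$ determinant of the matrix whose columns are the three arguments, with $$e_1$$, $$e_2$$, $$e_3$$ the standard basis. The sketch below checks the three values in that model:

```python
# A concrete model for Example 4: phi = determinant of the matrix with
# columns u, v, w; on the standard basis phi(e1, e2, e3) = 1.

def det3(u, v, w):
    # 3x3 determinant with columns u, v, w (first-row cofactor expansion)
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - v[0] * (u[1] * w[2] - u[2] * w[1])
            + w[0] * (u[1] * v[2] - u[2] * v[1]))

e1, e2, e3 = [1, 0, 0], [0, 1, 0], [0, 0, 1]
print(det3(e1, e3, e1))  # 0: a repeated argument kills an alternating form
print(det3(e1, e3, e2))  # -1: one swap away from (e1, e2, e3)
print(det3(e3, e1, e2))  # 1: two swaps (an even permutation)
```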

## Permutations

From the previous example we see that if we permute the vectors inside the alternating multilinear form, the value of the form either stays the same or changes its sign. We will now find a characteristic property of a permutation that determines whether the form changes sign.

Definition 4

A permutation $$\tau=(\tau_1, \dots, \tau_n)$$ of $$1,2,\dots, n$$ is called a transposition if there are two indices $$i$$ and $$j$$ such that

• (i) $$i\neq j$$,

• (ii) $$\tau_i=j$$, $$\tau_j=i$$, and

• (iii) $$\tau_k=k$$ for all $$k\in\{1,2,\dots, n\}\setminus\{i,j\}$$.

Theorem 2

For every permutation $$\sigma$$ of the set $$\{1,2,\dots, n\}$$ there exist transpositions $$\tau_1, \dots, \tau_m$$ such that $$\sigma=\tau_1\circ \tau_2\circ \cdots \circ\tau_m$$.
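The proof of Theorem 2 is constructive, in the spirit of selection sort: repeatedly swap the element that belongs at position $$i$$ into place. A sketch (0-indexed for convenience; the function name is ours):

```python
# Constructive sketch of Theorem 2: decompose a permutation into
# transpositions by a selection-sort argument.

def transposition_decomposition(sigma):
    """Return index pairs of swaps whose composition is sigma."""
    current = list(range(len(sigma)))
    swaps = []
    for i in range(len(sigma)):
        if current[i] != sigma[i]:
            # swap the entry sigma[i] into position i
            j = current.index(sigma[i])
            current[i], current[j] = current[j], current[i]
            swaps.append((i, j))
    return swaps

sigma = [2, 0, 3, 1]
swaps = transposition_decomposition(sigma)

# Re-apply the swaps to the identity to confirm we recover sigma.
check = list(range(len(sigma)))
for (i, j) in swaps:
    check[i], check[j] = check[j], check[i]
assert check == sigma
print(swaps)
```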

Theorem 3

If $$\tau_1$$, $$\dots$$, $$\tau_m$$, $$\mu_1$$, $$\dots$$, $$\mu_l$$ are transpositions such that $$\sigma=\tau_1\circ \tau_2\circ \cdots \circ\tau_m= \mu_1\circ \mu_2\circ\cdots\circ \mu_l$$, then the number $$m-l$$ is even. In particular, the parity of the number of transpositions in any decomposition of $$\sigma$$ is the same, so the sign $$\mbox{sgn }(\sigma)=(-1)^m$$ is well defined.

Example 5

Assume that $$\phi:\left(\mathbb R^k\right)^k\to \mathbb R$$ is an alternating multilinear form, and assume that $$e_1$$, $$\dots$$, $$e_k$$ are vectors from $$\mathbb R^k$$. Assume that $$\sigma=(\sigma_1, \dots, \sigma_k)$$ is a permutation of the numbers $$1$$, $$2$$, $$\dots$$, $$k$$. Denote by $$\mbox{sgn }(\sigma)$$ the sign of the permutation $$\sigma$$. Prove that $$\phi(e_{\sigma_1},\dots, e_{\sigma_k})=\mbox{sgn }(\sigma)\cdot \phi(e_1,\dots, e_k)$$.
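Taking $$\phi=\det$$ and $$e_i$$ the standard basis, the claim of Example 5 says that the determinant of a permutation matrix equals the sign of the permutation. The sketch below computes $$\mbox{sgn}$$ as $$(-1)^{\#\mathrm{inversions}}$$ (an equivalent characterization) and checks all permutations of size $$4$$:

```python
# Example 5 checked concretely: det of the permutation matrix with
# columns e_{sigma_1}, ..., e_{sigma_k} equals sgn(sigma), computed
# here as (-1) to the number of inversions.
from itertools import permutations

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def sgn(sigma):
    inversions = sum(1 for i in range(len(sigma))
                     for j in range(i + 1, len(sigma))
                     if sigma[i] > sigma[j])
    return (-1) ** inversions

k = 4
for sigma in permutations(range(k)):
    # column j of P is the basis vector e_{sigma_j}
    P = [[1 if sigma[j] == i else 0 for j in range(k)] for i in range(k)]
    assert det(P) == sgn(sigma)
print("verified for all permutations of size", k)
```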

## Determinant

Theorem 4

Assume that $$e_1$$, $$\dots$$, $$e_k$$ is a basis of $$\mathbb R^k$$. There exists a unique alternating multilinear form $$L: \left(\mathbb R^k\right)^k\to\mathbb R$$ such that $$L(e_1,\dots, e_k)=1$$.

Definition 5

The alternating multilinear form obtained in the previous theorem is called the determinant.

Example 6

Let $$\displaystyle v_1=\left[\begin{array}{c} 1\\0\\2\end{array}\right]$$, $$\displaystyle v_2=\left[\begin{array}{c} 0\\1\\-1\end{array}\right]$$, and $$\displaystyle v_3=\left[\begin{array}{c} 2\\-1\\1\end{array}\right]$$. Evaluate the determinant $$\phi(v_1,v_2,v_3)$$, where $$\phi$$ is the determinant with respect to the standard basis of $$\mathbb R^3$$.

In the future, instead of $$\displaystyle \phi\left(\left[\begin{array}{c} 1\\0\\2\end{array}\right], \left[\begin{array}{c} 0\\1\\-1\end{array}\right], \left[\begin{array}{c} 2\\-1\\1\end{array}\right]\right)$$, we will write $$\displaystyle \left|\begin{array}{ccc} 1&0&2\\0&1&-1\\2&-1&1\end{array}\right|$$ or $$\displaystyle \mbox{det } \left[\begin{array}{ccc} 1&0&2\\0&1&-1\\2&-1&1\end{array} \right]$$.
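Expanding each argument in the standard basis and applying multilinearity together with Example 5 term by term yields the Leibniz formula $$\det=\sum_{\sigma}\mbox{sgn }(\sigma)\, a_{\sigma_1 1}\cdots a_{\sigma_k k}$$. A sketch of that evaluation, applied to Example 6:

```python
# Evaluate the determinant by the sum over permutations obtained from
# multilinearity and Example 5 (the Leibniz formula).
from itertools import permutations

def sgn(sigma):
    inv = sum(1 for i in range(len(sigma))
              for j in range(i + 1, len(sigma)) if sigma[i] > sigma[j])
    return (-1) ** inv

def det_leibniz(columns):
    k = len(columns)
    total = 0
    for sigma in permutations(range(k)):
        term = sgn(sigma)
        for j in range(k):
            term *= columns[j][sigma[j]]  # entry a_{sigma_j, j}
        total += term
    return total

# Example 6: the vectors v1, v2, v3 from the text
v1, v2, v3 = [1, 0, 2], [0, 1, -1], [2, -1, 1]
print(det_leibniz([v1, v2, v3]))  # -4
```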

## Properties of determinant

Theorem 5

If $$A$$ is an $$n\times n$$ matrix, then $$\mbox{det }(A)=\mbox{det} (A^T)$$, where $$A^T$$ is the transpose of $$A$$ (i.e. if $$(a_{ij})_{i,j=1}^n$$ are the entries of $$A$$, then the $$(i,j)$$ entry of $$A^T$$ is $$a_{ji}$$).

Theorem 6

Let $$A$$ be an $$n\times n$$ matrix whose $$(i,j)$$ entry is equal to $$a_{ij}$$. Let us denote by $$\hat A_{ij}$$ the determinant of the $$(n-1)\times(n-1)$$ matrix obtained by removing the $$i$$-th row and the $$j$$-th column of $$A$$. Then for every $$i$$ and $$j$$ from $$\{1,2,\dots, n\}$$ we have $\det(A)=\sum_{k=1}^n a_{ik}\cdot (-1)^{i+k}\hat A_{ik}=\sum_{k=1}^n a_{kj}\cdot (-1)^{j+k} \hat A_{kj}.$
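A numerical check of Theorem 6 on the matrix of Example 1: the cofactor expansion gives the same value along every row and every column (the `minor` helper is ours):

```python
# Theorem 6: cofactor expansions along every row and column agree.

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def minor(A, i, j):
    # remove row i and column j
    return [row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i]

A = [[2, 3, -2], [5, 3, -5], [4, 1, 7]]
n = len(A)
d = det(A)
for i in range(n):  # expansion along row i
    assert d == sum(A[i][k] * (-1) ** (i + k) * det(minor(A, i, k))
                    for k in range(n))
for j in range(n):  # expansion along column j
    assert d == sum(A[k][j] * (-1) ** (k + j) * det(minor(A, k, j))
                    for k in range(n))
print(d)  # -99
```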

Theorem 7 (Inverse of a matrix)

Assume that $$A$$ is an $$n\times n$$ matrix whose entry on the position $$(i,j)$$ is equal to $$a_{ij}$$. Let $$B$$ be the matrix whose $$(i,j)$$ entry is equal to $$b_{ij}=(-1)^{i+j}\hat A_{ji}$$. Then $$AB=\det (A)I$$, where $$I$$ is the $$n\times n$$ identity matrix.

A consequence of the previous theorem is that if $$A$$ is invertible then $$A^{-1}=\frac1{\det(A)} B$$, where $$B$$ is the matrix whose $$(i,j)$$-entry is equal to $$(-1)^{i+j}\hat A_{ji}$$.
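Theorem 7 and its consequence can be checked in exact arithmetic. The sketch below builds the matrix $$B$$ (often called the adjugate; the helper names are ours), verifies $$AB=\det(A)I$$, and forms $$A^{-1}=\frac1{\det(A)}B$$ using rational numbers:

```python
# Theorem 7 in code: b_{ij} = (-1)^(i+j) * minor determinant \hat A_{ji};
# then A B = det(A) I, and A^{-1} = B / det(A) when det(A) != 0.
from fractions import Fraction

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def minor(A, i, j):
    # remove row i and column j
    return [row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i]

def adjugate(A):
    n = len(A)
    return [[(-1) ** (i + j) * det(minor(A, j, i)) for j in range(n)]
            for i in range(n)]

A = [[1, 0, 2], [0, 1, -1], [2, -1, 1]]
B = adjugate(A)
d = det(A)
n = len(A)

# A * B should equal det(A) times the identity matrix
AB = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
      for i in range(n)]
assert AB == [[d if i == j else 0 for j in range(n)] for i in range(n)]

# and the inverse is B divided by det(A)
A_inv = [[Fraction(B[i][j], d) for j in range(n)] for i in range(n)]
print(A_inv[0])
```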

Theorem 8 (Multiplicative property)

If $$A$$ and $$B$$ are two $$n\times n$$ matrices, then $$\mbox{det }(AB)=\mbox{det }(A)\cdot \mbox{det }(B)$$.

2005-2018 IMOmath.com | imomath"at"gmail.com