109

I have been told that the study of matrix determinants once comprised the bulk of linear algebra. Today, few textbooks devote more than a few pages to defining the determinant and using it to compute a matrix inverse. I am curious about why determinants were such a big deal. So here's my question:

a) What are examples of cool tricks you can use matrix determinants for? (Cramer's rule comes to mind, but I can't come up with much more.) What kind of amazing properties do matrix determinants have that made them such popular objects of study?

b) Why did the use of matrix determinants fall out of favor? Some historical background would be very welcome.

Update: From the responses below, it seems appropriate to turn this question into a community wiki. I think it would be useful to generalize the original series of questions with:

c) What significance do matrix determinants have for other branches of mathematics? (For example, the geometric significance of the determinant as the signed volume of a parallelepiped.) What developments in mathematics have been inspired by/aided by the theory of matrix determinants?

d) For computational and theoretical applications in which matrix determinants are no longer widely used today, what has supplanted them?

arsmath
  • 6,720
Jiahao Chen
  • 1,870
  • 24
    Determinants were computed before there was any linear algebra to speak of... – Mariano Suárez-Álvarez Aug 18 '10 at 17:25
  • They're not all that convenient for making mundane linear computations by hand, are they, except in the 2x2 case? Yet they play a big part on the theoretical side, for example in understanding spaces of matrices in topology or algebraic geometry, ... What they are "really" about is alternating (antisymmetrized) multilinear algebra. I have the impression that up to a certain point they played a big role in teaching linear algebra, until somebody had the bright idea that they weren't that helpful at that level. – Tom Goodwillie Aug 18 '10 at 17:25
  • 20
    (a) Theoretically there is a lot you can do with determinants. The classical approach to invariant theory goes through Cappelli's formula and other determinant relations. Determinant ideals play a big role in the theory of modules over commutative rings. Resultants and discriminants (the oldest and main method of solving systems of nonlinear polynomial equations with arbitrary precision - at least in theory) are defined as particular determinants. (b) They have been replaced by more abstract notions. For instance, the determinant of a linear map has been replaced by the $n$-th exterior power – darij grinberg Aug 18 '10 at 17:31
  • (which is the same up to isomorphism). These $n$-th exterior powers have the advantages of being basis-independent and sometimes easier to use. – darij grinberg Aug 18 '10 at 17:31
  • 2
    I think determinants have fallen out of favour in first or second year undergraduate courses because they are difficult to teach. There are very few textbooks at this level that treat determinants in the way they should be treated, which is in terms of signed volumes. The applications of determinants to modern differential geometry are very plentiful. We should always tell that to students, since most of them do not enjoy determinants... – Spiro Karigiannis Aug 18 '10 at 17:31
  • 2
    Not so convenient for making machine computations, either, I guess. I should have mentioned the characteristic polynomial, with the resulting insight into eigenvalues and eigenvectors, as another "cool" thing about determinants. I think the basic answer is that they have gone out of style for numerical purposes because they're not the best way; but for theoretical purposes they will never go out of style. – Tom Goodwillie Aug 18 '10 at 17:33
  • The theoretical analysis of a number of convergence acceleration methods (e.g. Wynn epsilon or the Levin transforms) rests on being able to express them as ratios of determinants. – J. M. isn't a mathematician Aug 18 '10 at 17:42
  • 2
    @Spiro: why should they be treated as signed volumes? I have never ever thought of them as signed volumes, because I never deal with volumes... – Mariano Suárez-Álvarez Aug 18 '10 at 17:52
  • I agree with the previous commenters above that it's a sociological issue, $\textit{viz}$ elimination of content from standard courses, rather than a substantive one. (One can likewise ask: "why were epsilons and deltas/precise definitions/axioms of Euclidean geometry/etc once thought to be such a big deal?"). Even within linear algebra, determinants play a decisive role: for example, the theory of eigenvalues rests on the fact that $\lambda$ is an eigenvalue of $A$ if and only if $\det(A-\lambda I)=0.$ – Victor Protsak Aug 18 '10 at 18:16
  • 8
    Victor: You can develop the theory of eigenvalues without using the characteristic polynomial as the jumping-off point. Axler's Down with Determinants paper (which should have been called Down with Characteristic Polynomials) implicitly takes the k[x]-module approach. For example, every linear map $T$ on a complex vector space $V$ has an eigenvalue because you can factor an annihilator polynomial of $T$ by the fundamental theorem of algebra; an annihilator polynomial is guaranteed to exist by the finite dimensionality of End(V). – Per Vognsen Aug 18 '10 at 19:43
  • 28
    Per, as the title of this paper clearly indicates, Axler is obsessed with eliminating a very useful tool, determinants, from linear algebra. But as with any effort along these lines, it has a cost: if you need to compute the eigenvalues of a matrix, say

    $$\begin{pmatrix}1 & 2\\ 3 & 4\end{pmatrix}, $$

    it's little consolation that the minimal polynomial exists by an abstract argument! To me, this doctrinal approach appears just as fruitless as the attempts to base real analysis on "constructible" numbers only (since countably many reals expressible in a finite way "should be sufficient").

    – Victor Protsak Aug 18 '10 at 20:24
  • 7
    Victor: Agreed that exterior algebra is a wonderful tool. One should elucidate its algebraic, geometric and combinatorial structure rather than try to banish its use. As for computing eigenvalues, the preferred tools in applications are Krylov subspace methods. Though you don't usually see it mentioned in books on numerical analysis, their structure is very much based on notions that fall out of this k[x]-module approach. – Per Vognsen Aug 18 '10 at 20:59
  • 6
    I think determinants are still such a big deal. They come up in a lot of seemingly unrelated subjects. – Zsbán Ambrus Jun 02 '13 at 10:34

19 Answers

130

I don't think that determinants are an old-fashioned topic. But the attitude towards them has changed over the decades. Fifty years ago, one insisted on their practical calculation, by hand of course. This way of teaching linear algebra has essentially disappeared. But the theoretical importance of determinants is still very high, and they are useful in almost every branch of mathematics, and even in other sciences. Let me give a few instances where determinants are unavoidable.

  1. Change of variables in an integral. Isn't the Jacobian of a transformation a determinant? (A small symbolic check appears below, after this list.)
  2. The Wronskian of solutions of a linear ODE is a determinant. It plays a central role in spectral theory (Hill's equation with periodic coefficients), and therefore in stability analysis of travelling waves in PDEs.
  3. A well-known proof of the simplicity of the Perron eigenvalue of an irreducible non-negative matrix is a very nice use of the multilinearity of the determinant.
  4. The $n$th root of the determinant is a concave function on the cone of $n\times n$ Hermitian positive definite matrices. This lies at the basis of many developments in modern analysis, via the Brunn-Minkowski inequality.
  5. In combinatorics, determinants and Pfaffians occur in formulas counting configurations of paths between sets of points in a network. D. Knuth advocates that there are no determinants, but only Pfaffians.
  6. Of course, the eigenvalues of a matrix are the roots of a determinant, the characteristic polynomial. In control theory, the Routh-Hurwitz algorithm, which checks whether a system is stable or not, is based on the calculation of determinants.
  7. As mentioned by J.M., Slater determinants are used in quantum chemistry.
  8. Frobenius' theory provides an algorithm for classifying matrices $M\in M_n(k)$ up to conjugation. It consists of calculating all the minors of the matrix $XI_n-M\in M_n(k[X])$ (these are determinants, aren't they?), then the g.c.d. of the minors of size $k$, for each $k=1,\ldots,n$. This is the theory of similarity invariants, which are polynomials $p_1,\ldots,p_n$, with $p_j\mid p_{j+1}$ and $p_1\cdots p_n=P_M$, the characteristic polynomial. If one goes further by decomposing the $p_j$'s (but this is beyond any algorithm), one obtains the theory of elementary divisors.
  9. If $L$ is a finite extension of a field $K$, the norm of $a\in L$ is nothing but the determinant of the $K$-linear map $x\mapsto ax$. It is an element of $K$.
  10. Kronecker's principle characterizes the power series that are rational functions, in terms of determinants of Hankel matrices. This has several important applications. One is Dwork's proof of Weil's conjecture that the zeta function of an algebraic variety over a finite field is a rational function. Another is Salem's theorem: if $\theta>1$ and $\lambda>0$ are real numbers such that the distances of $\lambda\theta^n$ to ${\mathbb N}$ are square summable, then $\theta$ is an algebraic number of class $S$.
  11. Above all, invertible matrices are characterized by their determinant: a matrix is invertible if and only if its determinant is an invertible scalar. This is true whenever the scalars belong to a commutative ring with unit. Besides, the determinant gives a morphism ${\bf GL}_n(A)\to A^*$; it plays the same role in the linear group as the signature does in the symmetric group $\frak S_n$.
  12. Powers of the determinant of $2\times2$ matrices appear in the definition of automorphic forms on the Poincaré half-plane.
  13. See also the answers to JBL's question, Wonderful applications of the Vandermonde determinant
  14. In algebraic geometry, many projective plane curves can be seen as the zero set of a determinantal equation $\det(xA+yB+zC)=0$. The theory was developed by Helton & Vinnikov. For instance, a hyperbolic polynomial in three variables can be written as $\det(xI_n+yH+zK)$ with $H,K$ Hermitian matrices; this was conjectured by P. Lax.
  15. The discriminant of a quadratic form is the determinant of its matrix, say in a given basis. There are two important situations. A) If the scalars form a field $k$, the discriminant is really a scalar modulo the squares of $k^\times$. It is an ingredient in the classification of quadratic forms up to isomorphism. B) Gauss defined a composition rule for two binary forms (say $ax^2+bxy+cy^2$) with integer coefficients when they have the same discriminant. The classes of equivalent forms of a given discriminant form an abelian group. In 2014, a Fields Medal was awarded to Manjul Bhargava for major advances in this area.
  16. In a real vector space, the orientation of a basis is the sign of its determinant.
  17. One of the most important PDEs, the Monge-Ampère equation, reads $\det D^2u=f$. It is central in optimal transport theory.
  18. Recently, I proved the following amazing result. Let $T:{\mathbb R}^d\rightarrow{\bf Sym}_d^+$ be periodic with respect to some lattice. Assume that $T$ is row-wise divergence-free, that is, $\sum_j\partial_jt_{ij}=0$ for every $i=1,\ldots,d$. Then $$\langle(\det T)^{\frac1{d-1}}\rangle\le\left(\det\langle T\rangle\right)^{\frac1{d-1}},$$ where $\langle\cdot\rangle$ denotes the average of a periodic function. With the exponent $\frac1d$ instead, this would be a consequence of Jensen's inequality and point 4 above. Equality occurs iff $T$ is the cofactor matrix of the Hessian of some convex function.
  19. The Gauss curvature of a hypersurface is the Jacobian determinant of the Gauss map (the map which to a point $x$ associates the unit normal to the hypersurface at $x$).

Of course, this list is not exhaustive (otherwise, it would be infinite). I do teach matrix theory, at the graduate level, and spend a while on determinants, even if I rarely compute an exact value.
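
To make point 1 concrete, here is a minimal symbolic check (my own sketch, assuming sympy is available) that the Jacobian determinant of the polar-coordinate map is $r$, the familiar factor in $dx\,dy = r\,dr\,d\theta$:

```python
# Sketch: the Jacobian determinant of (r, t) -> (r cos t, r sin t) is r.
import sympy as sp

r, t = sp.symbols('r t', positive=True)
J = sp.Matrix([r * sp.cos(t), r * sp.sin(t)]).jacobian([r, t])
print(sp.simplify(J.det()))  # -> r
```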

Edit. The following letter by D. Perrin to J.-L. Dorier (1997) supports the importance of determinants in algebra and in the teaching of algebra.

Denis Serre
  • 51,599
  • 5
    Great answer (and you didn't even touch infinite dimensions) – Piero D'Ancona Sep 22 '10 at 10:44
  • 8
    Another major area where determinants play a central role is that of determinantal point processes, which occur in many parts of probability, mathematical physics, and random matrix theory. – Terry Tao Dec 23 '10 at 01:05
  • 2
    I'd also append to the list: the Browder Degree in $\mathbb{R}^n$. At least for smooth mappings, we have the integral formula for the degree, the signed version of the change of variable formula in 1. – Pietro Majer Dec 06 '16 at 16:25
66

Dedekind raised the question of computing the determinant of the multiplication table of a finite group (regarding the group elements as commuting indeterminates). The abelian case was well-understood. Frobenius was a master of determinants and created group representation theory in order to answer Dedekind's question. Frobenius' approach to group representations was based on determinants. This is explained by T. Hawkins in Arch. History Exact Sci. 7 (1970/71), 142-170; 8 (1971/72), 243-287; 12 (1974), 217-243.

I should also point out that the evaluation of determinants is alive and well within combinatorics. Often a number or generating function can be expressed as a determinant. This is considered a "nice" answer because determinants can sometimes be evaluated explicitly, and in any event can be evaluated quickly and have many other useful properties. See for instance the work of Krattenthaler, especially http://www.mat.univie.ac.at/~kratt/artikel/detsurv.html and the sequel http://www.mat.univie.ac.at/~kratt/artikel/detcomp.html.
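
To make Dedekind's question concrete, here is a small sympy sketch (my own illustration; the group and indexing convention are chosen for simplicity) of the group determinant of $\mathbb{Z}/3\mathbb{Z}$, whose factorization into linear characters is the pattern Dedekind observed for abelian groups:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
# Group determinant of Z/3Z: the entry at (i, j) depends only on i - j mod 3,
# so the matrix is a circulant.
M = sp.Matrix([[a, b, c],
               [c, a, b],
               [b, c, a]])
det = sp.expand(M.det())
print(det)             # a**3 + b**3 + c**3 - 3*a*b*c
print(sp.factor(det))  # (a + b + c)*(a**2 + b**2 + c**2 - a*b - a*c - b*c)
# Over C the quadratic factor splits further into the two nontrivial characters,
# (a + w*b + w**2*c)(a + w**2*b + w*c) with w a primitive cube root of unity.
```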

  • 1
    If anyone is interested in exploring Frobenius' results themselves, a nice version is written down as Exercise 3.33 of Fulton-Harris, "Representation Theory". – Tom Church Aug 19 '10 at 02:22
  • 3
    So why did Dedekind raise this question? – Guntram Dec 22 '10 at 08:39
  • 9
    Guntram: Dedekind raised the question because in the case of abelian groups the answer was quite simple, in terms of characters of the group (homs from the group to the unit circle). So it's natural to ask what happens for nonabelian groups. He checked at least two examples (the groups S_3 and Q_8) and found certain patterns suggesting hypercomplex numbers might be relevant. For more details, read the paper by Hawkins which Stanley mentions in his answer. – KConrad Dec 23 '10 at 01:39
  • 3
    Also, pleasantly (in contrast with the identical character tables of nonisomorphic extraspecial $p$-groups of the same order), nonisomorphic groups have different group determinants in characteristic $0$. See E. Formanek, D. Silbey, "The group determinant determines the group," Proc. Amer. Math. Soc. 112 (1991), no. 3, 649-656. – Greg Marks Feb 23 '11 at 22:48
  • @KConrad, a lot of my understanding of the history comes from your Enseign. Math. paper, so I don't want to correct, but didn't Dedekind's interest come from computing discriminants of field extensions, which are squares of specialisations of the group determinant? – LSpice Jun 05 '17 at 18:06
  • @LSpice yes, but when I wrote my previous comment I didn't want to go back that far. The formula for discriminants of Galois extensions suggested to Dedekind the group determinant (ignore the square from discriminants). – KConrad Jun 05 '17 at 20:16
33

Very good question, I think. There is indeed a backlash against determinants, as evidenced for instance by Sheldon Axler's book Linear Algebra Done Right. To quote Axler from this link,

The novel approach used throughout the book takes great care to motivate concepts and simplify proofs. For example, the book presents, without having defined determinants, a clean proof that every linear operator on a finite-dimensional complex vector space (or on an odd-dimensional real vector space) has an eigenvalue.

Indeed. If you think that determinants are taught in order to invert matrices and compute eigenvalues, it becomes clear very soon that Gaussian elimination outperforms determinants in all but the smallest instances.

On the other hand, my undergraduate education spent a tremendous amount of time on determinants (or maybe it just felt that way). We built the theory (from $\dim \Lambda^n(K^n)=1$), proved some interesting theorems (row/column expansion, block determinants, the derivative of $\det A(x)$), but never used determinants to perform prosaic tasks. Instead, we spent a lot of time computing $n \times n$ determinants such as Vandermonde (regular and lacunary), Cauchy, circulant, Sylvester (for resultants)... and of course, giving a few different proofs of the Cayley-Hamilton theorem!

What's the moral of the story? I think it's twofold:

  1. Determinants are mostly a concern from a theoretical point of view. Computationally, the definition is awful. The most sensible thing to do to compute a determinant is to use Gaussian elimination, but if you're going to go through that bother, chances are that it's not really the determinant that you're after but rather something else that elimination will give you. (A short sketch at the end of this answer makes this concrete.)

  2. Determinants are fertile ground for getting to grips with a lot of really fundamental mathematical tools that a student of abstract mathematics should know backwards and forwards. If you do everything I described above, you must learn deep results about the structure of the symmetric group $S_n$ (and more generally about multilinear forms), ten flavors of induction, and practical uses of group morphisms (from $GL_n(K)$ to $K^\star$). And of course, the existence of determinants itself is crucial to the more theoretical developments such a student will encounter later on.

I've had pure math undergrads who had learned linear algebra from Axler's book. They knew how to compute a determinant. They had no idea why anyone would want to. So determinants are still a big deal, but just for the right audience: I'm perfectly fine with most scientists and engineers ignoring determinants beyond $3\times 3$. Mathematics students, especially those with a theoretical bent, can learn a lot from determinants.
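
As a concrete companion to point 1 above, a minimal numpy sketch (my own illustration) contrasting the exponential-time cofactor definition with the $O(n^3)$ value read off the pivots of Gaussian elimination:

```python
# Sketch: cofactor expansion is O(n!), while Gaussian elimination gives the
# same determinant in O(n^3) as (+/-) the product of the pivots.
import numpy as np

def det_cofactor(A):
    """Determinant by first-row cofactor expansion (exponential time)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

def det_gauss(A):
    """Determinant via Gaussian elimination with partial pivoting."""
    A = A.astype(float).copy()
    n, sign = A.shape[0], 1.0
    for k in range(n):
        p = k + np.argmax(np.abs(A[k:, k]))   # pivot row
        if p != k:
            A[[k, p]] = A[[p, k]]
            sign = -sign                      # a row swap flips the sign
        A[k+1:, k:] -= np.outer(A[k+1:, k] / A[k, k], A[k, k:])
    return sign * np.prod(np.diag(A))

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
print(det_cofactor(A), det_gauss(A), np.linalg.det(A))  # all three agree
```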

Thierry Zell
  • 4,536
  • 4
    It should be pointed out, though, that some scientists do still use N x N determinants. I am most familiar with its use in quantum chemistry, where Slater determinants are used in Hartree-Fock theory to enforce the fermionic antisymmetry of many-electron wavefunctions by forming an explicitly antisymmetric product of one-electron wavefunctions. Presumably this use of determinants could be eliminated by judicious use of exterior products, but I don't know of a pedagogical presentation that doesn't use the determinant form. – Jiahao Chen Aug 18 '10 at 19:22
  • 9
    "The most sensible thing to do to compute a determinant is to use Gaussian elimination" - I beg to differ! To give a trivial example, the determinant of a lower triangular $n\times n$ matrix is the product of its diagonal entries, which can be computed in $O(n)$ operations, while the Gaussian elimination requires $O(n^2)$. A less trivial example: a symplectic matrix always has determinant 1. My point is that various properties, depending on the situation, may be profitably exploited to evaluate a determinant, e.g. $\det(QR)=\det(Q)\det(R)$ is standard. – Victor Protsak Aug 18 '10 at 19:29
  • 19
    I am also befuddled by pronouncements of a definition of a theoretical concept, such as determinant, to be "bad" when what is meant is "computationally awful". Clearly, computational complexity is a serious and relatively novel issue that complements, rather than invalidates, theoretical basics. For example, should we stop teaching the order of an element of a finite group simply because it's computationally infeasible to find the multiplicative order of a random $g\mod p$? To define the determinant by the Gaussian elimination algorithm would be worse than useless (namely, counterproductive). – Victor Protsak Aug 18 '10 at 19:39
  • 2
    @Victor: I never advocated not teaching people how to think. Your lower-triangular example is a strawman example (take the transpose). But the more important question is: why would people try to compute such a determinant? There are many people out there whose matrix needs can be perfectly filled by Gaussian elimination, because they don't care about the values of their determinants in the first place. As for symplectic matrices, the only people I know who use them are theorists, so need a strong math background anyway, so should know determinants. – Thierry Zell Aug 18 '10 at 20:47
  • @Jiahao: your example sounds still very theoretical, but it's a good point nonetheless, and I am sure that others could come up with very practical applications of determinants. Still, most of the algebra that ends up being used in real life is determinant-free. – Thierry Zell Aug 18 '10 at 20:57
  • 1
    Well I had always thought that the objection of "it takes so much work to do a determinant!" applied only to general dense matrices; to add another example to Victor's list, taking the determinant of a tridiagonal matrix via cofactor expansion is in fact equivalent to evaluating a certain three-term recurrence relation (which now links the theory of orthogonal polynomials and "Jacobi matrices" very tightly). Lastly... codes do exploit $\det(AB)=\det(A)\det(B)$; the point of Gaussian elimination is to factor your matrix A as $A=LU$; L unit lower triangular and U upper triangular... – J. M. isn't a mathematician Aug 18 '10 at 22:17
  • ...and since L is unit lower triangular (has all 1's on its diagonal) its determinant is 1, and thus the determinant of A is computed by multiplying together U's diagonal elements (or summing logarithms of the diagonal elements as the case may be). Now, all that neglected to take pivoting into account, which only serves to change the sign of the determinant. – J. M. isn't a mathematician Aug 18 '10 at 22:21
  • 28
    @Victor Protsak: Computing the determinant of an upper triangular matrix is $O(n)$, but checking that it's upper triangular is $O(n^2)$. – Daniel Litt Aug 18 '10 at 23:13
  • Daniel, if the matrix is $\textit{known to be}$ lower (or upper) triangular, checking that isn't necessary. That was my point. – Victor Protsak Aug 19 '10 at 02:06
  • @Thierry: I suppose it depends on what you mean as "theoretical" and "application". The use of the Slater determinant is key to deriving the actual Hartree-Fock equations, although in the practical calculations of Hartree-Fock theory the determinant is almost never constructed explicitly. – Jiahao Chen Aug 23 '10 at 13:41
  • 11
    Minor nitpick: Gaussian Elimination (in its textbook implementations) takes $\frac{2}{3}n^3$ operations, not $n^2$. – Suvrit Sep 22 '10 at 12:46
21

To confirm your remark that determinants were once the bulk of linear algebra, you should find in a university library the books by Thomas Muir on determinants. They're enormous! (You can also find them on Google Books, but physically holding them leaves a greater impression, especially if they are in your lap.) It reminds me of a comment of Serre: "Forgetting is a very healthy activity."

KConrad
  • 49,546
  • The library copy I had access to was tattered and boxed for preservation. I was hoping someone else had a better idea than I of what was in there. :) – Jiahao Chen Aug 18 '10 at 20:46
  • 1
    BTW, Michigan History of Science collection will print and bind it for you on demand, and it would cost less than xeroxing it at 10c/page. A library at my former institution took advantage of it to replace one of the volumes that was missing. – Victor Protsak Aug 18 '10 at 20:50
  • 2
    Serre makes that pronouncement in an interview, which one can find at http://sps.nus.edu.sg/~limchuwe/articles/serre.html – Mariano Suárez-Álvarez Aug 18 '10 at 20:52
  • Kenneth May made a rather delightful film, distributed by the MAA called "Who killed determinants?" (I think that it dates from around 1965) which goes over much of the history. It also has an interesting analysis of the number of papers published about determinants in each year and what seemed to influence the volume of publication (such as the publication of Muir's treatise). – Victor Miller Aug 19 '10 at 05:38
  • Googling "Who killed determinants?" turned up some interesting papers such as "Computing Eigenvalues and Eigenvectors without Determinants" (McWorter, 1998) and "A short survey of some recent applications of determinants" (Vein, 1982). – Jiahao Chen Aug 23 '10 at 13:42
  • 39
    Which Serre ? I don't remember having said that. But perhaps I just forgot... – Denis Serre Sep 22 '10 at 10:00
  • 2
    I once read a book (forgot exact title and author), written about 1920, about the history of determinants and matrices. It defined first the determinant, and then defined a matrix as "the matrix of a determinant"! The matrix was seen as just a practical way of writing the numbers of the determinant! – kjetil b halvorsen Jun 07 '12 at 01:57
  • 5
    I doubt that anyone is reading this comment thread any longer, but just in case: Muir's Treatise (revised by Metzler) was reprinted by Dover sometime in the 1960s. (I can't see the exact date, but the back cover boasts: "The paper is chemically the same as you would find in books priced $5.00 or more." So it must be pretty old!) There are surely many copies floating around on used book websites. –  Sep 25 '12 at 20:50
  • Further to @user5117's helpful note, the ISBN-13 of the Dover edition is 978-0486606705 – J.J. Green Oct 16 '16 at 12:00
18

Determinants are very important in almost every part of mathematics that I know. I wish I understood them better. But working directly with matrix determinants is very cumbersome, computation-intensive, and often unenlightening. So indeed they are not taught as much as they used to be. For pure mathematicians they have been replaced by exterior algebra, which is a much more elegant setting. And, as Spiro mentions, geometers work with signed volume forms, which turn into determinants when written with respect to a basis.

Even in computational mathematics, there is something called geometric algebra, which uses exterior, quaternionic and Clifford algebras to encode all the algorithms in an elegant and yet computationally efficient manner. I'm under the impression that video game programmers find this very useful.

Deane Yang
  • 26,941
14

An elementary use that was in the books on determinants (not linear algebra) I read as a student long ago: you can use them to write cute equations for objects in elementary (Euclidean or projective) geometry. For instance, the equation for the circle through three points in the plane is $$ \left| \begin{matrix} x^2+y^2 & x & y & 1 \cr a_1^2+a_2^2 & a_1 & a_2 & 1\cr b_1^2+b_2^2 & b_1 & b_2 & 1\cr c_1^2+c_2^2 & c_1 & c_2 & 1 \end{matrix} \right| = 0 $$ If the coefficient of $x^2+y^2$ is nonzero then it is obviously the equation of a circle, and the three points $A, B, C$ obviously lie on the circle.
The coefficient of $x^2+y^2$ is $$ \left| \begin{matrix} a_1 & a_2 & 1\cr b_1 & b_2 & 1\cr c_1 & c_2 & 1 \end{matrix} \right| $$ and has to be nonzero. Since the equation of the line through $(b_1, b_2)$ and $(c_1, c_2)$ is $$ \left| \begin{matrix} x & y & 1\cr b_1 & b_2 & 1\cr c_1 & c_2 & 1 \end{matrix} \right|=0 $$ the coefficient of $x^2+y^2$ is nonzero if the three points do not lie on a straight line.

In the same vein, the linear homogeneous differential equation satisfied by $y_1(x)$ and $y_2(x)$ is $$ \left| \begin{matrix} y''(x) & y'(x) & y(x) \cr y_1''(x) & y_1'(x) & y_1(x) \cr y_2''(x) & y_2'(x) & y_2(x) \end{matrix} \right| = 0 $$
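
A quick numerical sanity check of the circle formula (my own sketch, assuming numpy; the three points are arbitrary):

```python
# Sketch: the 4x4 determinant vanishes exactly when the point lies on the
# circle through A, B, C.
import numpy as np

def circle_det(p, A, B, C):
    """Rows (x^2 + y^2, x, y, 1) for the variable point and the three points."""
    rows = [(q[0]**2 + q[1]**2, q[0], q[1], 1.0) for q in (p, A, B, C)]
    return np.linalg.det(np.array(rows))

A, B, C = (0.0, 0.0), (4.0, 0.0), (0.0, 3.0)  # circumcenter (2, 1.5), radius 2.5
print(circle_det((4.0, 3.0), A, B, C))  # on the circle: ~0
print(circle_det((1.0, 1.0), A, B, C))  # off the circle: clearly nonzero
```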

12

To answer (c):

I can't count how many times I've used the fact that for Vandermonde matrices, the determinant is what it is.
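
For the record, the fact in question is $\det V = \prod_{i<j}(x_j - x_i)$; a one-screen numpy check (my own sketch, with arbitrary nodes):

```python
# Sketch: the Vandermonde determinant equals the product of node differences.
import numpy as np
from itertools import combinations

x = np.array([1.0, 2.0, 4.0, 7.0])
V = np.vander(x, increasing=True)   # rows (1, x_i, x_i^2, x_i^3)
prod = np.prod([x[j] - x[i] for i, j in combinations(range(len(x)), 2)])
print(np.linalg.det(V), prod)       # both 540.0
```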

Pace Nielsen
  • 18,047
  • 2
    Generally determinants of structured matrices have nice structure themselves, which is one explanation for their utility. – J. M. isn't a mathematician Aug 18 '10 at 23:16
  • 1
    Note that you can prove the nonsingularity of a Vandermonde matrix with different nodes very simply without speaking about determinants --- it's basically equivalent to the fact that a nonzero degree-$d$ polynomial has at most $d$ roots. – Federico Poloni Dec 06 '16 at 15:09
9

The Schur functions were defined using determinants. There is the classical definition as a determinant divided by the Vandermonde determinant (the bialternant formula). There is also the Jacobi-Trudi formula, which expresses the Schur function as a determinant in the complete homogeneous symmetric functions (or the elementary symmetric functions). All of this came well before any theory of linear algebra.
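
Both definitions can be checked to agree in a small case, say $\lambda=(2,1)$ in three variables, where Jacobi-Trudi gives $s_{(2,1)}=h_1h_2-h_3$ (my own sketch, assuming sympy):

```python
# Sketch: two determinantal formulas for the Schur function s_{(2,1)}(x1,x2,x3).
import sympy as sp

x1, x2, x3 = xs = sp.symbols('x1 x2 x3')

# Bialternant: det(x_i^(lam_j + n - j)) / Vandermonde, with lam = (2,1,0), n = 3.
exps = [4, 2, 0]                                   # lam_j + n - j
num = sp.Matrix(3, 3, lambda i, j: xs[i]**exps[j]).det()
den = sp.Matrix(3, 3, lambda i, j: xs[i]**(2 - j)).det()
s_bialt = sp.cancel(num / den)

# Jacobi-Trudi: s_(2,1) = det([[h2, h3], [h0, h1]]) = h1*h2 - h3.
def h(k):  # complete homogeneous symmetric polynomial of degree k
    return sp.Add(*[x1**a * x2**b * x3**(k - a - b)
                    for a in range(k + 1) for b in range(k + 1 - a)])
s_jt = sp.expand(h(1) * h(2) - h(3))

print(sp.expand(s_bialt - s_jt))                   # -> 0
```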

  • There are quite a few determinantal formulas in enumerative combinatorics (counting lattice paths, etc) that originate with the det expression of the Schur functions. – Victor Protsak Aug 18 '10 at 20:26
8

I think the multivariate change-of-variable formula in integration (i.e. involving the determinant of the Jacobian) is still rather indispensable. The treatment I'm most familiar with is in Folland, where, as far as I recall, it is only used to construct integration in polar coordinates (and I think there was only one exercise, concerning a further extension).

One could perhaps say that the trick of computing the normalization constant of a Gaussian random variable, by passing through polar coordinates, uses determinants. EDIT: this fact also provides an immediate explanation for the presence of the determinant in the denominator of a multivariate Gaussian density (and, by positive semi-definiteness of the covariance, for why the square root makes sense).
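
A numerical check of that determinant factor (my own sketch, assuming numpy; $\Sigma$ is an arbitrary $2\times2$ covariance matrix): the integral of $\exp(-\tfrac12 x^\top\Sigma^{-1}x)$ over $\mathbb{R}^2$ should be $2\pi\sqrt{\det\Sigma}$.

```python
# Sketch: integral of exp(-x^T Sigma^{-1} x / 2) over R^2 is 2*pi*sqrt(det Sigma).
import numpy as np

Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
Sinv = np.linalg.inv(Sigma)

t = np.linspace(-12, 12, 1201)          # wide enough that the tails are negligible
X, Y = np.meshgrid(t, t)
Q = Sinv[0, 0]*X**2 + 2*Sinv[0, 1]*X*Y + Sinv[1, 1]*Y**2
integral = np.trapz(np.trapz(np.exp(-0.5*Q), t, axis=1), t)

print(integral, 2*np.pi*np.sqrt(np.linalg.det(Sigma)))  # ~8.046 for both
```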

7

At least on the numerical front: the computation of determinants is prone to overflow for large enough matrices, which is why libraries like LINPACK made provisions for separately computing the mantissa and the exponent (see this for instance).

Cramer's rule really isn't "cool" for solving linear equations either; as may have been mentioned before, for a general dense matrix it takes O(n³) operations at best to compute a determinant (essentially through Gaussian elimination with some form of pivoting), and O(n!) at worst if you insist on cofactor expansion. Since you need to compute n+1 determinants to solve a linear system, it takes more effort to use Cramer's rule than to apply Gaussian elimination directly to the linear system.

Another application I have seen for determinants is as a check for the positive definiteness of a symmetric matrix, by computing the determinants of the successive leading principal submatrices (this has applications in signal processing, for instance); this too is slow compared to using e.g. the Cholesky decomposition to check whether a matrix is positive definite.

On the matter of computing eigenvalues, determinants too were once used for the computation of the characteristic polynomial (for successive use with an appropriate rootfinding routine); nowadays with more stable modern algorithms like QR for eigenvalue computations, one no longer bothers with generating the characteristic polynomial.
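
To make the Cramer comparison tangible, a short numpy sketch (my own illustration): Cramer's rule reproduces the solution of a linear system, but it needs n+1 determinant evaluations, each already as expensive as one elimination.

```python
# Sketch: Cramer's rule (n+1 determinant evaluations) vs. direct elimination.
import numpy as np

rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

def cramer(A, b):
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                     # replace column i by the right-hand side
        x[i] = np.linalg.det(Ai) / d
    return x

print(np.allclose(cramer(A, b), np.linalg.solve(A, b)))  # True, at ~n times the cost
```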

7

Here is one cute analysis application. Consider a continuously differentiable function $A(x)$ of a real argument, taking values in the $N \times N$ matrices. Furthermore, suppose that $\lambda(0)$ is a simple eigenvalue of $A(0)$, i.e. $\dim(\ker( (A(0) - \lambda(0))^N )) = 1$. Then there exist a small interval $I$ containing $0$ and a function $\lambda(x)$ on $I$ such that $\lambda(x)$ is an eigenvalue of $A(x)$.

The proof is just applying the implicit function theorem to $$ f(x,E) = \det(A(x) - E), $$ since $\lambda(0)$ being simple implies that $\partial_E f(0,\lambda(0)) \neq 0$.

Furthermore, using the characterization of eigenvalues using the determinant, one can get some geometric information on how the curve $\lambda(x)$ looks.
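
A numerical sketch of this mechanism (my own illustration, assuming numpy; bisection stands in for the implicit function theorem, and $A(x)$ is an arbitrary symmetric example):

```python
# Sketch: follow a simple eigenvalue of A(x) as a root of f(x, E) = det(A(x) - E*I).
import numpy as np

def A(x):
    return np.array([[2.0 + x, 1.0],
                     [1.0,     -1.0 + x**2]])

def f(x, E):
    return np.linalg.det(A(x) - E * np.eye(2))

def bisect(g, lo, hi, tol=1e-12):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(lo) * g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

for x in (0.0, 0.1, 0.2):
    lam = bisect(lambda E: f(x, E), 1.0, 4.0)     # bracket the larger eigenvalue
    print(x, lam, max(np.linalg.eigvalsh(A(x))))  # matches eigvalsh
```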

Helge
  • 3,273
6

The area of an arbitrary triangle is most easily computed from its vertex coordinates as half the absolute value of a 3x3 determinant (no square roots, trig, etc.). This generalizes to 3D (volume of a tetrahedron) and higher dimensions. (A sketch at the end of this answer illustrates this.)

The cross product of two vectors a, b (an essential tool in mechanics) is easily understood and memorized once one observes that its coordinates are the 2x2 minor determinants (with alternating signs) of the 2x3 array formed by a and b. This too generalizes and provides a definition for the cross product of n-1 vectors in n dimensions.

More generally, determinants combined with homogeneous coordinates are all one needs to derive elegant formulas for the basic operations of n-dimensional projective geometry, such as the plane through 3 points or the intersection of 3 planes. They also provide a coordinate representation (Plücker coordinates) for lines in 3-space, or more generally for k-dimensional subspaces of n-dimensional projective space.

The sign of the determinant of a matrix tells whether the rows are a left- or right-handed frame, and whether the corresponding linear map preserves or reverses orientations.

Determinants are more efficient than Gaussian elimination for computing the inverse of a 2x2 or 3x3 matrix, perhaps even 4x4. Unlike standard Gaussian elimination, these cofactor formulas require no division until the final step and may require fewer bits per number in intermediate results.
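
Returning to the first point, a small numpy sketch (my own illustration) comparing the determinant formula for the area with Heron's formula:

```python
# Sketch: area of a triangle = half the absolute value of a 3x3 determinant.
import numpy as np

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
M = np.array([[A[0], A[1], 1.0],
              [B[0], B[1], 1.0],
              [C[0], C[1], 1.0]])
area_det = 0.5 * abs(np.linalg.det(M))

# Heron's formula for comparison (needs square roots, unlike the determinant).
a, b, c = (np.hypot(*np.subtract(p, q)) for p, q in ((B, C), (A, C), (A, B)))
s = (a + b + c) / 2
area_heron = np.sqrt(s * (s - a) * (s - b) * (s - c))

print(area_det, area_heron)   # both 6.0
```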

  • 1
    PS. The area of a polygon with n vertices is easily computed as the sum of n-2 determinants of 3x3 matrices. This too generalizes to higher dimensions: the volume of a polyhedron with n triangular faces is the sum of n-3 determinants of 4x4 matrices. Similarly for the measure of a d-dimensional polytope with n simplicial facets.

    Note that the polygon/polyhedron/polytope does not have to be convex or even simply connected, and it is not necessary to partition it into non-overlapping triangles/tetrahedra/simplices.

    – Jorge Stolfi Sep 25 '12 at 20:37
6

Warning: I am basically just speculating, and not commenting with actual knowledge of the history.

I suspect that a lot of the nineteenth century work on determinants was motivated by invariant theory. Before Hilbert proved abstractly the finite generation of invariants, there was a small industry trying to explicitly compute invariants (for example the projective invariants of the action of $GL_2$ acting on binary forms) with the aim of giving a constructive proof that they were finitely generated.

These invariants often have a basis consisting of various sorts of determinantal expressions, and if you want to prove finite generation, you have to construct ways of taking certain determinantal expressions and writing them in terms of other determinantal expressions.

Within a decade or so of Hilbert's paper, people generally lost interest in constructive invariant theory. (After all, the abstract methods answered the most interesting questions and seemed much more likely than constructive methods to answer the most interesting remaining questions.)

I have wondered whether all the work on syzygies of determinantal varieties actually reduces to identities which were well known (to the right people) in the 19th century.

  • I think what really happened is that by the time of Gordan, they had exhausted all of the cases that could practically be calculated. Hilbert had answered the theoretical questions, and no progress was possible on the practical computations. (I don't think even computers have made the computations any more practical.) – arsmath Sep 23 '10 at 14:53
  • arsmath: There is nothing to support this unfortunately widely held belief. Nobody has run Gordan's 1868 algorithm on a computer! – Abdelmalek Abdesselam Sep 25 '10 at 13:25
6

The simple fact that the invertible square matrices of order $n$ are precisely those which have nonzero determinant has a real bunch of theoretical applications (in ring theory, topology, differential geometry, etc).

For example, if you consider the general linear group $GL(n, \mathbb{R})$ as a Lie group, to see that its associated Lie algebra is (isomorphic to) $gl(n, \mathbb{R})$, an essential step is to identify the tangent space $T_e(GL(n,\mathbb{R}))$ with $gl(n,\mathbb{R})$ (where $e$ is the identity element). But this is trivial if we keep in mind that $GL(n,\mathbb{R}) = \{A\in gl(n,\mathbb{R}) : \det(A) \neq 0 \}$ and that $\det$ is continuous, because then $GL(n,\mathbb{R})$ is automatically an open subset of $gl(n,\mathbb{R})$.

Jose Brox
  • 2,962
  • It is interesting that the 'right' explanation for open-ness depends on of what you think of finite-dimensional real $\mathrm{GL}(n)$ as a special case. If you think of it as a special case of an algebraic group, then the determinant is the way to see that it is (even Zariski) open. If you think of it as a special case of linear operators on a Banach space, then it is easier to find an explicit formula for the inverse of a small perturbation of an invertible transformation. – LSpice Jun 05 '17 at 22:12
5

For reasons of reputation points, I am unable to comment on KConrad's post. I up-voted his response, however, because he does make a great point about the Muir book(s). This one in particular is of great interest if you have any thoughts as to why determinants (and their development) were, and are, important:

The Theory of Determinants in the Historical Order of Development

I've gone through parts and it's dry, but thorough.

Andrew
  • 97
5

1) The Chern-Weil theory of characteristic classes is built upon determinants of functions of curvature forms of vector bundles. 2) Feynman path integrals require determinants (but typically in infinite dimensions).

Vamsi
  • 3,323
5

From the realm of probability there are determinantal and permanental processes. Terry Tao has a nice post about determinantal processes here. For instance:

"Examples of processes known to be determinantal include non-intersecting random walks, spectra of random matrix ensembles such as GUE, and zeroes of polynomials with gaussian coefficients."

I've not worked with these processes myself, but I've heard enough seminar talks using them to say there is plenty of interest out there.

BSteinhurst
  • 1,352
3

I quickly scanned the answers and didn't find the following:

Square matrices of order $n$ can be considered to be embedded in $\Bbb R^{n \times n}$. The determinant is a continuous function of the entries of the matrix, so the singular matrices form the closed set $\det^{-1}(0)$ in $\Bbb R^{n \times n}$. Thus there will be non-singular matrices arbitrarily close to a singular matrix in any convenient metric on $\Bbb R^{n \times n}$. Of course, they will have pretty ugly condition numbers.

  • 3
    Of course it is true that singular matrices are approximable by non-singular ones, but it does not follow from the closedness of the singular locus. – LSpice Jun 05 '17 at 22:16
3

Just an example from applied statistics where determinants are unavoidable (because they are used to define the relevant concepts). In design of experiments, one wants to choose "optimal" values of the regressor variables $x_1, \dots, x_p$ used to estimate a linear regression model $Y=X \beta + \epsilon$, where $X$ is the design matrix, an $n \times (p+1)$ matrix whose first column consists of ones, row $i$ being the covariates used for experimental run $i=1, \dots , n$.

D-optimality defines the optimal design to be the one maximizing the determinant $\det X^T X$, under the practical restriction that the regressor values $x$ must belong to some set where the experiment is feasible to carry out.

One way of justifying this is that the requirement is equivalent to minimizing the volume of confidence ellipsoids, calculated under the normal model. Mostly, numerical optimization is used to construct such designs (although there are many theoretical papers solving toy models).
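
As a toy illustration of D-optimality (my own sketch, assuming numpy; one regressor plus intercept, four runs restricted to a five-point grid), brute force already shows the familiar answer of pushing runs to the extremes:

```python
# Sketch: D-optimal choice of n = 4 runs for y = b0 + b1*x, with x on a grid
# in [-1, 1]: maximize det(X^T X) over all multisets of 4 grid points.
import numpy as np
from itertools import combinations_with_replacement

grid = np.linspace(-1, 1, 5)

def d_crit(xs):
    X = np.column_stack([np.ones(len(xs)), xs])   # design matrix with intercept
    return np.linalg.det(X.T @ X)

best = max(combinations_with_replacement(grid, 4), key=d_crit)
print(best, d_crit(best))   # (-1.0, -1.0, 1.0, 1.0) 16.0 -- runs at the endpoints
```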