
In Landau's *Mechanics* there is a section in which he explains small oscillations in systems with $s \geq 1$ degrees of freedom. He writes the kinetic and potential energies as
$$ T = \sum_{i, k} \frac{1}{2}a_{ik}(q_0)\dot{q}_i\dot{q}_k, \qquad U = \sum_{i, k} \frac{1}{2}k_{ik}x_ix_k, $$
where $q_0$ is a stable equilibrium point, so that the matrix $K = (k_{ik})$ is positive definite, and $x = q - q_0$. He also puts $a_{ik}(q_0) = m_{ik}$, so that
$$ T = \sum_{i, k} \frac{1}{2}m_{ik}\dot{q}_i\dot{q}_k. $$
Using Lagrange's equations and looking for solutions of the form $x_k = A_k e^{i\omega t}$, with $A_k, \omega \in \mathbb{C}$, he obtains
$$ \sum_k (-\omega^2 m_{ik} + k_{ik})A_k = 0, $$
which can be rewritten in matrix form as $(-\omega^2 M + K)A = 0$. Both $M$ and $K$ are positive definite and hence invertible, so the last equation is equivalent to $(M^{-1}K - \omega^2 I)A = 0$. So what Landau is looking for are the eigenvalues and eigenvectors of $M^{-1}K$. He then says that, provided the eigenvalues are all distinct, the components $A_k$ of $A$ are proportional to the minors of the determinant of $(M^{-1}K - \omega^2 I)$, with $\omega^2$ the corresponding eigenvalue.
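As a numerical sanity check (a minimal sketch with a hypothetical two-degree-of-freedom system; the matrices `M` and `K` below are my own choice, not Landau's), the condition $(-\omega^2 M + K)A = 0$ is just the ordinary eigenvalue problem for $M^{-1}K$:

```python
import numpy as np

# Hypothetical example: two unit masses coupled by springs.
M = np.array([[1.0, 0.0],
              [0.0, 1.0]])           # kinetic-energy matrix m_ik
K = np.array([[2.0, -1.0],
              [-1.0, 2.0]])          # potential-energy matrix k_ik (positive definite)

# (-w^2 M + K) A = 0  is the eigenproblem of M^{-1} K.
w2, A = np.linalg.eig(np.linalg.inv(M) @ K)   # eigenvalues w^2, eigenvectors as columns of A

# Each eigenpair satisfies (K - w^2 M) A = 0.
for val, vec in zip(w2, A.T):
    assert np.allclose((K - val * M) @ vec, 0.0)

print(np.sort(w2.real))   # the squared eigenfrequencies
```

For this choice of $K$ the squared frequencies come out as $\omega^2 = 1$ and $\omega^2 = 3$.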

Why is this? Cramer's rule does not apply directly, because the matrix $(M^{-1}K - \omega^2 I)$ is singular.

My reasoning is as follows: put $C = \frac{M^{-1}K}{\omega^2}$. Then we have $CA = A$. If we consider the determinant of the matrix obtained from $C$ by replacing its $i$th column with $A$, we get (by multilinearity, since the terms with repeated columns vanish) $$ D(C^1, \dots, C^{i-1}, A, C^{i+1}, \dots, C^s) = D\Big(C^1, \dots, C^{i-1}, \sum_j A_jC^j, C^{i+1}, \dots, C^s\Big) = A_i D(C) $$ and so $$ A_i = \frac{D(C^1, \dots, C^{i-1}, A, C^{i+1}, \dots, C^s)}{D(C)} = \frac{1}{D(C)}\sum_k M_{ik}A_k, $$ where the last equality follows from Laplace's expansion of the determinant in the numerator along the $i$th column, and the $M_{ik}$ are coefficients proportional to the minors of $C$.

How do I conclude that the coefficients $A_k$ are proportional to the minors?

EDIT: I'm looking for a proof of this fact.

Cosmas Zachos
fresh

1 Answer


The components of the null vector are, in fact, any column of the transposed cofactor matrix, that is, of the fabulous, wonderful adjugate matrix.

For a given matrix $N$, you are seeking a null vector, so $\det(N) = 0$. Now, the transpose of the cofactor matrix is $$ \operatorname{Adj}(N) = C^T, $$ where $C$, the cofactor matrix of $N$, has the suitably sign-permuted minors as its entries. The defining property of the adjugate is that $$ N \,\operatorname{Adj}(N) = 1\!\!1 \, \det(N). $$ But we assumed $\det(N) = 0$, so the right-hand side of the above vanishes.
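The identity $N\,\operatorname{Adj}(N) = 1\!\!1\,\det(N)$ is easy to verify numerically (a sketch; `adjugate` is a helper function I am defining here, not a NumPy routine):

```python
import numpy as np

def adjugate(N):
    """Transpose of the cofactor matrix, built entry by entry from minors."""
    n = N.shape[0]
    C = np.empty_like(N, dtype=float)
    for i in range(n):
        for j in range(n):
            # Minor: delete row i and column j, then take the determinant.
            minor = np.delete(np.delete(N, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T   # Adj(N) = C^T

# Check the defining property N Adj(N) = det(N) * I on a random matrix.
rng = np.random.default_rng(0)
N = rng.standard_normal((3, 3))
assert np.allclose(N @ adjugate(N), np.linalg.det(N) * np.eye(3))
```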

Any nonzero column of the adjugate matrix is a good null vector of $N$, which is why neither the answer linked by @Phoenix87 nor Landau bothers to specify which row is used to compute the cofactors: since the eigenvalue $\omega^2$ is simple, $N$ has rank $s-1$, so the rank of the adjugate is just 1, and all of its nonzero columns are proportional to one another.
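Concretely (again a sketch: the two-mass matrices are a hypothetical example and `adjugate` is a helper defined here), take $N = K - \omega^2 M$ at an eigenfrequency; the adjugate then has rank 1, and each of its columns is a null vector of $N$:

```python
import numpy as np

def adjugate(N):
    """Transpose of the cofactor matrix, built entry by entry from minors."""
    n = N.shape[0]
    C = np.empty_like(N, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(N, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

# Hypothetical two-mass example; w^2 = 1 is an eigenvalue of M^{-1} K.
M = np.eye(2)
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
N = K - 1.0 * M                         # singular by construction

adj = adjugate(N)
assert np.isclose(np.linalg.det(N), 0.0)
assert np.linalg.matrix_rank(adj) == 1  # rank 1 for a simple eigenvalue
assert np.allclose(N @ adj, 0.0)        # every column of adj is a null vector
```

Here any column of `adj` is proportional to the normal-mode amplitude vector $A = (1, 1)^T$.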

Cosmas Zachos