
For each $k \in \mathbb R$, does there exist a non-empty open ball $B$ of $\mathbb R^{2 \times 2}$ such that for all $M \in B$ and Jordan decompositions $PJP^{-1}$ of $M$, the condition number $\kappa(P)$ is greater than $k$?

The condition number of $P$, denoted $\kappa(P)$, is equal to $\lVert P \rVert \lVert P^{-1} \rVert$, where we're using the operator norm.
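As a quick illustration (mine, not part of the question): in the 2-norm, $\kappa(P)$ is the ratio of the largest to the smallest singular value of $P$, which agrees with numpy's built-in condition number.

```python
import numpy as np

# Condition number in the operator (spectral) norm:
# kappa(P) = ||P|| * ||P^{-1}|| = sigma_max(P) / sigma_min(P).
def cond(P):
    s = np.linalg.svd(P, compute_uv=False)  # singular values, descending
    return s[0] / s[-1]

# A nearly singular matrix has a large condition number.
P = np.array([[1.0, 1.0],
              [0.0, 1e-3]])
print(cond(P))                                    # large (~2000)
print(np.isclose(cond(P), np.linalg.cond(P, 2)))  # matches numpy's built-in
```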

If the answer is yes, it suggests that the problem of an ill-conditioned Jordan basis of $M$ cannot always be fixed by perturbing $M$ by an arbitrarily small distance $\varepsilon$: the perturbation may need to have some minimum size. In other words, the ill-conditioning is stubbornly large.

Note that the term "Jordan decomposition" is ambiguous. Here I mean the Jordan Normal Form from linear algebra. The difficulty is that "Jordan Normal Form" could be misunderstood to mean only the $J$ factor, omitting the $P$ factor needed to describe the full decomposition, while "Jordan decomposition" sometimes refers to a different decomposition altogether.

wlad
  • 4,823
  • Within the numerical analysis community, it's well known that the Schur decomposition $A = Q T Q^{-1}$ where $T$ is triangular and $Q$ unitary may often be used in place of the Jordan decomposition for a given computation, often giving much better numerical stability. For example, computing $f(A)$ using the Schur decomposition reduces to computing $f(T)$ for triangular $T$, leading to the "Schur-Parlett" algorithm. – James Apr 07 '23 at 22:45

1 Answer


Yes. Let $\kappa > 1$ and consider the matrix $$ A = \begin{bmatrix} -\kappa^2+4\kappa-1 & \kappa^2-1 \\ -\kappa^2 + 1 & \kappa^2+4\kappa+1 \end{bmatrix} = P D P^{-1} $$ where $$ P = \alpha \begin{bmatrix} \kappa+1 & \beta (\kappa - 1) \\ \kappa-1 & \beta (\kappa+1) \end{bmatrix}, \quad D =\begin{bmatrix} 2 \kappa & 0 \\ 0 & 6 \kappa \end{bmatrix} $$ for any nonzero scalars $\alpha, \beta$. Note that the $P$ factor is unique in this case only up to an arbitrary column scaling (here represented by $\alpha$ and $\beta$), and the condition number of $P$ is actually ambiguous. Nonetheless, in this case the condition number (in the 2-norm) of $P$ is at least $\kappa$, attained when $|\beta| = 1$. The conclusion now follows by taking, say, $\kappa = 2 \max(k,1)$, and noting that the eigenvectors are continuous at $A$: for fixed $B$, $A + \epsilon B$ has eigenvectors $$ \begin{pmatrix} \kappa+1+O(\epsilon) \\ \kappa-1 \end{pmatrix}\text{ and }\begin{pmatrix} \kappa-1+O(\epsilon) \\ \kappa+1 \end{pmatrix}$$ as $\epsilon \to 0$.
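For concreteness, this construction can be checked numerically. The sketch below (the parameter choices $\kappa = 2$, $\alpha = \beta = 1$ are mine) verifies that $PDP^{-1}$ reproduces $A$ and that $\kappa(P) = \kappa$ at $|\beta| = 1$.

```python
import numpy as np

kappa = 2.0  # any kappa > 1; this value is an illustrative choice

# The P and D factors from the answer, with alpha = beta = 1.
P = np.array([[kappa + 1, kappa - 1],
              [kappa - 1, kappa + 1]])
D = np.diag([2 * kappa, 6 * kappa])

# The explicit form of A given in the answer.
A = np.array([[-kappa**2 + 4*kappa - 1, kappa**2 - 1],
              [-kappa**2 + 1,           kappa**2 + 4*kappa + 1]])

assert np.allclose(P @ D @ np.linalg.inv(P), A)  # A = P D P^{-1}
print(np.linalg.cond(P, 2))                      # equals kappa = 2 here
```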

Edit: Using the choice of eigenvectors above together with an arbitrary column scaling results in a matrix $P$ that is continuous as a function of the matrix $A$ (at $A$), and continuous in the choice of column scalings, and it follows that the minimum condition number of $P$ over all choices of column scalings (covering all possible $P$) is also continuous at $A$.

At $A$ (i.e., for $\epsilon = 0$), we can see that the minimum condition number of $P$ is $\kappa$ as follows. The squares of the singular values of $P$ are the eigenvalues of $P^*P$, given by $$ \sigma^2 = |\alpha|^2 (\kappa^2 + 1) (a \pm \sqrt b) $$ where $$ a = 1+|\beta|^2, \quad b = \left(|\beta|^2 + \frac{\kappa^4-6\kappa^2+1}{(\kappa^2+1)^2}\right)^2 + 16 \kappa^2 \frac{(\kappa^2-1)^2}{(\kappa^2+1)^4} . $$ Note $b > 0$ and $a^2 - b = 16 |\beta|^2 \kappa^2 / (\kappa^2+1)^2 \ge 0$. The square of the condition number of $P$ is the ratio of squares of the singular values, $$ \kappa(P)^2 = \frac{a+\sqrt{b}}{a-\sqrt{b}} . $$ Letting $x = |\beta|^2$, the logarithmic derivative of $\kappa(P)^2$ is $$ \frac{1}{\kappa(P)^2} \frac{d}{dx} \kappa(P)^2 = \frac{1}{\sqrt{b}} \frac{a \frac{db}{dx} - 2b \frac{da}{dx}}{a^2-b} = \frac{1}{\sqrt{b}} \frac{x-1}{x} . $$ So, $\kappa(P)$ is decreasing as a function of $x = |\beta|^2$ for $x < 1$ and increasing for $x > 1$, and the minimum occurs at $x = 1$, i.e., $|\beta| = 1$, where $\kappa(P) = \kappa$.
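The minimization over the column scaling can also be checked numerically. This sketch (the grid of real $\beta$ values and $\kappa = 2$ are my choices) scans $|\beta|$ and confirms the minimum of $\kappa(P)$ sits at $|\beta| = 1$, where it equals $\kappa$.

```python
import numpy as np

kappa = 2.0  # illustrative choice of kappa > 1

def cond_P(beta):
    """Condition number of the P factor for a given column scaling beta."""
    P = np.array([[kappa + 1, beta * (kappa - 1)],
                  [kappa - 1, beta * (kappa + 1)]])
    return np.linalg.cond(P, 2)

# Scan a grid of beta values; step 0.05, so beta = 1 is on the grid.
betas = np.linspace(0.2, 5.0, 97)
conds = [cond_P(b) for b in betas]
b_min = betas[int(np.argmin(conds))]
print(b_min, min(conds))  # minimizer near 1, minimum value near kappa
```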

James
  • 611
  • Thanks for giving a good answer to this unanswered question! This seems correct to me, but not fully explained. Let $\kappa_{opt}(P) := \min_{D \text{ diagonal}} \kappa(PD)$. If I understand correctly, the key points needed here are that (1) in your example, $\kappa_{opt}(P) = \kappa(P)$, and (2) $\kappa_{opt}(P)$ is continuous in $A$. However, (1) is just mentioned briefly without proof in your answer, and (2) is not stated. – Federico Poloni Apr 08 '23 at 11:04
  • Fair points. I have addressed both in an addendum. – James Apr 10 '23 at 03:05
  • In the first paragraph of your edit, you seem to be using the fact that if $f(\kappa,\beta)$ is continuous, then $g(\kappa) = \min_\beta f(\kappa,\beta)$ is continuous, but I do not think it's true (see e.g. here). – Federico Poloni Apr 10 '23 at 08:02
  • I agree this is a gap in my proof. In this case $\beta = \operatorname{argmin}_{\beta} f(\kappa, \beta)$ is continuous, which makes $\min_{\beta} f(\kappa, \beta)$ continuous, but a full proof of this first fact feels overboard to me. I'm guessing there's a simpler argument. – James May 10 '23 at 19:37