
Given a general n-th degree linear ODE, what's the easiest way to prove that there are precisely n linearly independent solutions?

ahh
  • This is not really appropriate for MO. You might have better luck with this question at math.stackexchange.com. – Qiaochu Yuan Nov 04 '10 at 19:26
  • I don't understand why this was closed. I'd bet if someone asked for the easiest proof of some standard result in another subject (e.g. Bezout's theorem for plane algebraic curves), there would be a host of responses. Mathematicians do interest themselves in questions like these, especially when teaching. I can't help but suspect that because the subject matter is differential equations, and not something nearer and dearer to the community's heart, the benefit of the doubt is not being given, and it is being assumed that this is some kind of homework thing and not a valid question. – anon Nov 05 '10 at 05:19
  • I agree with anon - I also don't think this deserves to be closed; it is a natural and important question, and well written. Call me stupid if you like, but it doesn't look obvious or trivial to me; in fact I don't even know any answer, I need to think about it. Note that ahh doesn't assume constant coefficients (and even that case is not completely trivial, needing exponentials of matrices and Jordan normal forms or similar). – Zen Harper Nov 05 '10 at 06:01
  • I think this result is proved in most undergraduate books on ODEs. Existence and uniqueness of a solution to an initial value problem for the ODE follows e.g. from the general Picard–Lindelöf theorem. And the Liouville formula implies that there are exactly $n$ linearly independent solutions (which correspond to any $n$ linearly independent initial data points). – Andrey Rekalo Nov 05 '10 at 09:16
  • I agree with closure: even if anon and Zen make decent arguments that the answer may be interesting, that's only truly the case if the question is carefully formulated to make it compelling. There may be something to say from the teaching angle, but that's not what is being asked, and anyway the status of questions about teaching on MO is unclear at best. – Thierry Zell Nov 05 '10 at 11:52
  • OK, so it's not really so difficult, as Andrey's comments make clear; but it's not a stupid question, and it's not trivial, and it's not the crazed ravings of a demented nutter, and it's of interest to mathematicians (to me, anyway). And maybe there is another nice direct proof for this special case, which could be interesting. Remember that we don't have uniqueness even for very simple nonlinear equations, so it's conceivable that maybe a new direct proof could be generalised to certain types of nonlinear equations. – Zen Harper Nov 08 '10 at 05:44
  • Are you saying that a question has to be both difficult and highly interesting for it not to be closed? If you don't know the answer, how can you be sure of either of those things, let alone both? I didn't think about this for very long, and I'm annoyed with myself for not seeing the answer, but that's precisely why we shouldn't judge serious questions too harshly.

    Anyway, a large amount of research started from teaching, so it shouldn't be dismissed so quickly. I agree that this question has a fairly low order of nontriviality, but not sufficient to justify closure in my opinion.

    – Zen Harper Nov 08 '10 at 05:50
  • Maybe asking the same question but about the dimension of the solution space of nonlinear ODEs would make it more acceptable? – Michael Bächtold May 25 '11 at 12:32

2 Answers


I don't know of an easy/easiest way to prove it for a general $n$th degree linear ODE, but it is worth pointing out that in the constant coefficient case you can get this from elementary linear algebra. The idea is that if $N$ is a positive integer and you have complex numbers $c_0, c_1, \dots, c_N$ with $c_N \neq 0$, then the solutions to the differential equation $$ \sum_{k=0}^N c_k y^{(k)} = 0 $$ (here $y^{(k)}$ denotes the $k$th derivative of $y$, interpreted as $y$ when $k=0$) are precisely the elements of the kernel of the operator $$ T = \sum_{k=0}^N c_k D^k, $$ where $D$ is differentiation, regarded as an operator on a vector space $V$ of functions (there is some freedom in what particular space you choose here; say the set of all infinitely differentiable functions $\mathbb{R} \to \mathbb{C}$). From the fundamental theorem of algebra, you know there are complex numbers $\omega, \omega_1, \dots, \omega_N$ with the property that the polynomial $\sum_{k=0}^N c_k z^k$ factors as $\omega \prod_{k=1}^N (z - \omega_k)$; it follows that your operator $T$ also factors, in the algebra of operators on $V$, as $$ T = \omega \prod_{k=1}^N (D - \omega_k I), $$ where $I$ denotes the identity operator on $V$.
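
The factorization of $T$ mirrors the factorization of the characteristic polynomial, and both can be checked by machine. A minimal SymPy sketch, using a made-up example operator $T = (D - I)(D - 2I)(D - 3I)$ (the specific coefficients are mine, not from the answer):

```python
import sympy as sp

t, z = sp.symbols('t z')

# Hypothetical example: y''' - 6y'' + 11y' - 6y = 0, i.e. (c_0, ..., c_3) = (-6, 11, -6, 1).
coeffs = [-6, 11, -6, 1]

# The characteristic polynomial factors into linear terms (z - 1)(z - 2)(z - 3):
charpoly = sum(c * z**k for k, c in enumerate(coeffs))
print(sp.factor(charpoly))

# Correspondingly T = (D - I)(D - 2I)(D - 3I), so T kills each exponential exp(omega*t):
def T(g):
    return sum(c * sp.diff(g, t, k) for k, c in enumerate(coeffs))

for omega in (1, 2, 3):
    assert sp.simplify(T(sp.exp(omega * t))) == 0
```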

The point is that each of the operators $D - \omega_k I$ has a one-dimensional kernel by basic calculus. (For any complex number $c$, the function $f(t) = \exp(ct)$ is a solution to $y' = c y$, and if $g$ is any other, the quotient rule for derivatives shows that $(g/f)' = 0$. So by a standard argument involving the mean value theorem, $g/f$ is constant; so $\{f\}$ is a basis for the kernel of $D - cI$.)
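
This one-parameter family can be sanity-checked symbolically; a small sketch with SymPy's `dsolve` (the symbol names here are mine):

```python
import sympy as sp

t, c = sp.symbols('t c')
y = sp.Function('y')

# Every solution of y' = c*y is a constant multiple of exp(c*t),
# so the kernel of D - cI is one-dimensional.
sol = sp.dsolve(sp.Eq(y(t).diff(t), c * y(t)), y(t))
print(sol)  # typically Eq(y(t), C1*exp(c*t))

# Whatever form dsolve returns, it satisfies the ODE:
assert sp.simplify(sol.rhs.diff(t) - c * sol.rhs) == 0
```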

And it is a basic linear algebra fact that a product of $N$ operators, each with one-dimensional kernel, can have a kernel of dimension at most $N$. (This follows from the more general assertion that if $S_1: V \to V$ and $S_2: V \to V$ are any operators, the dimension of the kernel of $S_1 S_2$ is at most the dimension of the kernel of $S_1$ plus the dimension of the kernel of $S_2$. This is a very easy consequence of the rank-nullity theorem, and does not require $V$ to be finite dimensional.)
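
A finite-dimensional illustration of the nullity bound, with made-up matrices, in NumPy:

```python
import numpy as np

def nullity(A):
    """dim ker A, computed via the rank-nullity theorem."""
    return A.shape[1] - np.linalg.matrix_rank(A)

# dim ker(S1 S2) <= dim ker S1 + dim ker S2; here the bound is attained.
S1 = np.diag([1.0, 1.0, 0.0])  # nullity 1
S2 = np.diag([0.0, 1.0, 1.0])  # nullity 1
assert nullity(S1 @ S2) <= nullity(S1) + nullity(S2)
print(nullity(S1 @ S2))  # 2
```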

Why is the kernel of $T$ exactly $N$-dimensional? Well, just write down $N$ linearly independent elements in it, as they do in textbooks: when the $\omega_k$ are distinct the exponentials $e^{\omega_k t}$ will do, and when a root $\omega$ is repeated $m$ times the functions $t^j e^{\omega t}$, $0 \le j < m$, fill in the gap. (Of course, if you have the better sort of textbook, the entire argument just given is in there.)
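
For a concrete repeated-root instance (my example, not from the answer), a nonzero Wronskian certifies the independence of the textbook basis:

```python
import sympy as sp

t = sp.symbols('t')

# Hypothetical example with a repeated root: T = (D - I)^2 (D - 2I),
# i.e. y''' - 4y'' + 5y' - 2y = 0.  Textbook basis: e^t, t*e^t, e^(2t).
sols = [sp.exp(t), t * sp.exp(t), sp.exp(2 * t)]

# Wronskian matrix: row i holds the i-th derivatives of the candidate solutions.
W = sp.Matrix([[sp.diff(s, t, i) for s in sols] for i in range(3)])
print(sp.simplify(W.det()))  # nonzero, so the three functions are independent
```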

For non-constant coefficients, factoring the corresponding differential operator is no longer the way you want to approach this. But for a lot of ODEs, you can still get reasonably elementary theorems about the dimension of the kernel of the operator by applying some kind of transform (e.g. the Laplace transform) and getting into a position where it is just algebra again.
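
As a toy illustration of the transform idea (a hypothetical constant-coefficient example, where the method is classical):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Hypothetical example: y'' - 3y' + 2y = 0 with y(0) = 0, y'(0) = 1.
# Transforming turns the ODE into algebra: (s**2 - 3*s + 2) * Y(s) = 1.
Y = 1 / (s**2 - 3*s + 2)

# Inverting the transform recovers the solution, exp(2*t) - exp(t):
y = sp.inverse_laplace_transform(Y, s, t)
print(sp.simplify(y))
```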

anon
  • Your answer is not all that helpful, because if the OP didn't read too closely, they will not realize that "just write down $n$ linearly independent elements" has a trick to it when the numbers $\omega_i$ are not all distinct. – Thierry Zell Nov 05 '10 at 04:19
  • If the OP isn't reading that closely, what would a helpful answer look like? – anon Nov 05 '10 at 05:10

By the uniqueness property of the Cauchy problem, and linearity.
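
Spelled out a little (the standard argument these two ingredients give): fix a point $t_0$ in the interval and consider the linear map $$ \Phi : y \mapsto \bigl(y(t_0), y'(t_0), \dots, y^{(n-1)}(t_0)\bigr) $$ from the solution space to $\mathbb{C}^n$. Existence for the Cauchy problem (e.g. via Picard–Lindelöf, as Andrey Rekalo notes in the comments) makes $\Phi$ surjective; uniqueness makes it injective. So $\Phi$ is a linear isomorphism, and the solution space has dimension exactly $n$.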

Pietro Majer