
In mechanics, we obtain the equations of motion (Euler-Lagrange equations) via Hamilton's principle by considering stationary points of the action $$ S = \int_{t_i}^{t_f} L ~ dt $$ where we have $L=T-V$, the difference between kinetic and potential energy. The usual derivation sets the first variation to zero and integrates by parts, to yield the requirement $$ \delta S = \int_{t_i}^{t_f} \left[ \frac{\partial L}{\partial q} - \frac{d}{dt}\left( \frac{\partial L}{\partial \dot{q}} \right) \right] \delta q ~ dt + \frac{\partial L}{\partial \dot{q}}(t_f) ~ \delta q (t_f) - \frac{\partial L}{\partial \dot{q}}(t_i) ~ \delta q (t_i) = 0 $$ where $q$ denotes the generalised coordinates and $\dot{q}$ the corresponding velocities.
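The bracketed term gives the Euler-Lagrange equation, which can be checked symbolically; a minimal sketch using SymPy's `euler_equations` for an (illustrative) harmonic-oscillator Lagrangian $L = \tfrac{1}{2}m\dot{q}^2 - \tfrac{1}{2}kq^2$:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t, m, k = sp.symbols('t m k', positive=True)
q = sp.Function('q')

# Harmonic-oscillator Lagrangian L = T - V (illustrative choice)
L = sp.Rational(1, 2) * m * q(t).diff(t)**2 - sp.Rational(1, 2) * k * q(t)**2

# euler_equations applies dL/dq - d/dt(dL/dqdot) = 0 for us
eqs = euler_equations(L, [q(t)], [t])
print(eqs[0])  # equivalent to m*qddot = -k*q, i.e. Newton's second law
```
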

At this point, most textbook derivations eliminate the second and third terms by claiming $\delta q (t_i) = 0$ and $\delta q (t_f)=0$. The first of these is intuitive, because in practice we normally consider initial value problems in which the initial positions are known. But, a priori, we don't typically know $q (t_f)$ for an arbitrary time $t_f$, so why do we set $\delta q (t_f)=0$?

For some other variational principles, it is intuitive to assume the coordinates at both endpoints are known and fixed, for example in Fermat's principle, where we work out the path of a light ray between two fixed points. Is there an intuitive explanation of why the final coordinates are considered fixed when applying Hamilton's principle, or a derivation of the mechanical Euler-Lagrange equations without this assumption?

In considering the problem myself, I tried to obtain the same conditions in another way: if we instead take the final position $q (t_f)$ as free but with $t_f$ fixed, then, in addition to the Euler-Lagrange equation, we get the extra requirement for stationarity $$\frac{\partial L}{\partial \dot{q}}(t_f) = 0$$ but it seems that this does not hold in general. If we consider a harmonic oscillator, for example, this condition implies that the momentum, and hence the kinetic energy, vanishes at the (arbitrary) fixed time $t_f$. I haven't yet considered the necessary conditions if we also consider $t_f$ as free, as I'm not totally sure how to carry out the analysis without incorporating elements from optimal control theory (e.g. Pontryagin's principle or the HJB equation).
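This free-endpoint condition can be seen concretely by making a discretised action stationary with the final position left free; the stationary point then satisfies the discrete analogue of $\partial L/\partial \dot{q}(t_f) = 0$, i.e. zero final velocity. A rough numerical sketch for $L = \tfrac{1}{2}\dot{x}^2 - \tfrac{1}{2}x^2$ with $x(0) = 1$ and $t_f = 1$ (the grid size and solver are my own choices):

```python
import numpy as np

# Discretise S = sum_i [ 0.5*((x[i+1]-x[i])/h)^2 - 0.5*x[i]^2 ] * h
# with x[0] = 1 fixed and x[N] free; stationarity gives a linear system.
N = 2000
t_f = 1.0
h = t_f / N

# Unknowns x[1..N].  Interior rows: x[j-1] + (h^2 - 2)*x[j] + x[j+1] = 0,
# last row (free endpoint): x[N] - x[N-1] = 0, the discrete natural BC.
A = np.zeros((N, N))
b = np.zeros(N)
for j in range(1, N):              # stationarity w.r.t. the unknown x[j]
    row = j - 1
    if j - 1 >= 1:
        A[row, j - 2] = 1.0
    else:
        b[row] = -1.0              # move the known x[0] = 1 to the RHS
    A[row, j - 1] = h**2 - 2.0
    A[row, j] = 1.0
A[N - 1, N - 1] = 1.0              # free-endpoint row: x[N] = x[N-1]
A[N - 1, N - 2] = -1.0

x = np.linalg.solve(A, b)
x = np.concatenate(([1.0], x))     # prepend the fixed initial value

# The continuous stationary path with x(0) = 1 and xdot(t_f) = 0 is
# x(t) = cos(t) + tan(t_f)*sin(t), so x(t_f) = 1/cos(t_f).
print(x[-1], 1.0 / np.cos(t_f))    # endpoint matches the free-BC solution
print((x[-1] - x[-2]) / h)         # final velocity is (numerically) zero
```

The stationary trajectory indeed ends with zero velocity at the arbitrary time $t_f$, which is exactly why this free-endpoint formulation cannot reproduce generic motion.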


2 Answers

  1. Usually in physics, we are given a problem, e.g., an initial value problem (IVP) or a boundary value problem (BVP). These two kinds of problems should not be conflated, cf. e.g. this, this & this Phys.SE posts.

  2. In dynamical (as opposed to static) problems, a stationary action principle or a Maupertuis principle/abbreviated action principle is sometimes possible for BVPs, but never for IVPs if we require locality$^1$.

  3. For the stationary action principle, there exists some mathematical freedom in the choice of consistent boundary conditions (BCs), cf. e.g. my Math.SE answer here. However, physics often dictates which BCs are relevant.
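For illustration, in the notation of the question: the boundary term in $\delta S$ vanishes provided that, at each endpoint separately, one imposes either an essential (Dirichlet) or a natural condition, $$ \frac{\partial L}{\partial \dot{q}}(t) ~ \delta q (t) = 0 \quad \text{at } t = t_i, t_f \quad \Longleftarrow \quad \delta q (t) = 0 ~~ \text{(position fixed)} \quad \text{or} \quad \frac{\partial L}{\partial \dot{q}}(t) = 0 ~~ \text{(momentum vanishes)}. $$ The extra requirement $\frac{\partial L}{\partial \dot{q}}(t_f) = 0$ found in the question is exactly the natural condition for a free endpoint.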

--

$^1$ There exist various non-local action formulations for IVPs, cf. e.g. this & this Phys.SE posts.

Qmechanic
  • 201,751
  • Many thanks! Points 1. and 3. are quite clear, and 1. in particular made me realise that we're really considering a BVP when using Hamilton's principle, yet the equations of motion are the same for IVPs of the same system so we can still find them using Euler-Lagrange.

    Do you have a reference or some justification for 2. ? My original thinking was that perhaps a variational formulation is possible for an IVP by allowing the final time to be free or infinite.

    – JayMFleming Apr 19 '18 at 12:42
  • Perhaps I'm misunderstanding 2., but if we consider an IVP for the first order system $\dot{x} = -x$ with $x(0) = x_0$, the solution is given by the minimiser of $\int_0^\infty \left( x^2 + \dot{x}^2 \right) ~ dt$ with $x(0) = x_0$ fixed, which we can verify by introducing an 'input' variable $u = \dot{x}$ and applying the standard linear quadratic regulator (LQR) result from control theory. Isn't this a (local) minimum principle for this IVP? – JayMFleming Apr 19 '18 at 13:35
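The LQR example in the last comment can be spot-checked numerically: for $x(0) = 1$, the candidate minimiser $x(t) = e^{-t}$ (which satisfies $\dot{x} = -x$) gives a smaller value of $\int_0^\infty (x^2 + \dot{x}^2) ~ dt$ than nearby perturbed paths with the same initial condition. A rough sketch (the truncation horizon, grid, and perturbation are my own choices):

```python
import numpy as np

# Truncate the infinite horizon at T; the tail of e^{-t} is negligible there.
T, n = 30.0, 30001
t = np.linspace(0.0, T, n)
dt = t[1] - t[0]

def cost(x):
    # Trapezoidal approximation of J[x] = integral of (x^2 + xdot^2) dt
    f = x**2 + np.gradient(x, t)**2
    return dt * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

x_star = np.exp(-t)      # candidate minimiser: xdot = -x, x(0) = 1
eta = t * np.exp(-t)     # perturbation vanishing at t = 0 and as t -> inf

print(cost(x_star))                # analytically J[e^{-t}] = 1
print(cost(x_star + 0.1 * eta))    # larger than J[x_star]
print(cost(x_star - 0.1 * eta))    # larger than J[x_star]
```
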

If Fermat's principle is intuitive to you, Hamilton's principle is not much different. Both basically state that the system (or light) moves between two fixed points in such a way that the action (or travel time) is stationary, typically a minimum.

I am not sure how intuitive this is. For most people it is probably more natural to think of the time development of systems, i.e. you prepare the system in some state (or send the light beam in a certain direction) and see what happens, i.e. where it ends up.

Hamilton's/Fermat's principles are neat because they are general (which is what physicists like).

Regarding your question, why do we set $\delta q (t_f) = 0$? This is basically what Hamilton's principle states. In different words: you look at all possible paths from an initial point to a final point, and the system takes the ideal path. You are not looking at all possible paths between all kinds of points.

As a real-world example, take as a starting point your home and as destination your workplace. There are all kinds of paths you could take between them. However, you (or "nature" if you wish) will decide on a single path, which is ideal. Depending on your priorities (= action functional), this might be the path that takes the shortest time, or the path that is the cheapest, etc.

Note that to find this ideal path you are not considering the paths from your home to the swimming pool, or paths between work and the airport, etc.

user1583209
  • Thanks, your first paragraph (together with QMechanic's answer) made me realise that Hamilton's principle really refers to a boundary value problem, and that the initial value problem I was considering as more 'intuitive' just happens to have the same equation of motion, i.e. the Euler-Lagrange equations. – JayMFleming Apr 19 '18 at 13:08