Another example from theoretical high-energy physics I've encountered: sometimes physicists have an equation of motion for an arbitrary number N of particles with positions x_i, say of the form \frac{1}{N}\sum_i f(x_i) + \frac{1}{N^2}\sum_{i,j} g(x_i, x_j) = 0, and wish to know what its solutions look like for large N. One technique is to replace the variables x_i with a probability measure \mu on the space of their possible values, intended to represent the fraction of the x_i's lying in a given region in the large-N limit, and then to solve, instead of the original equation, the analogous equation in \mu: \int f(x) \,\mathrm{d}\mu(x) + \int g(x, y) \,\mathrm{d}(\mu \times \mu)(x, y) = 0. In fact it is not hard to construct a toy example where the original equation can be solved exactly for every N, the solutions "look like" a particular probability distribution in the large-N limit, and yet that limiting distribution fails to satisfy the corresponding equation in \mu; for that reason I have some doubt that this method can be made rigorous.
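The heuristic behind the replacement, at least in the favourable direction, can be sketched numerically: when the x_i are drawn from \mu, the finite-N sums converge to the corresponding integrals against \mu. This is a minimal sketch with hypothetical choices not taken from the comment above: f(x) = x², g(x, y) = xy, and \mu the standard normal, for which \int f \,\mathrm{d}\mu = 1 and \int g \,\mathrm{d}(\mu \times \mu) = 0.

```python
import numpy as np

# Hypothetical toy data: x_i drawn i.i.d. from mu = standard normal.
rng = np.random.default_rng(0)
N = 200_000
x = rng.standard_normal(N)

def f(x):
    return x**2

# (1/N) * sum_i f(x_i), which should approximate the integral of f against mu.
single_sum = f(x).mean()

# (1/N^2) * sum_{i,j} g(x_i, x_j) with g(x, y) = x*y factorises as
# (mean of x)^2, which avoids the O(N^2) double loop.
double_sum = x.mean() ** 2

print(single_sum, double_sum)
```

Of course this only checks that the empirical sums track the measure-valued expressions for a fixed \mu; it says nothing about whether the *solutions* of the finite-N equation converge to a solution of the equation in \mu, which is exactly where the toy counterexample bites.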
(I don't necessarily believe that rigour is a desirable property in physics, however...)
– Zen Harper Dec 09 '10 at 10:45