Feynman is referring to the problem of showing that Quantum Electrodynamics is mathematically consistent, which will be tricky, because it almost certainly isn't. The methods of Feynman and Schwinger showed that the perturbation theory of quantum electrodynamics is fully consistent, order by order, but the theory itself was convincingly argued to be no good by Landau. Landau's argument is that any charge is screened by virtual electrons and positrons, so that the bare charge is bigger than the charge you see. But in order to get a finite renormalized charge, the bare charge has to go to infinity at some small, nonzero, distance scale. The argument is not rigorous, but an exactly analogous thing can be seen to happen numerically in the Ising model.
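The one-loop version of this statement, for a single charged Dirac fermion, is the standard running of the coupling (higher loops and more charged species change the coefficient, not the conclusion):

$$\frac{1}{e^2(\mu)} \;=\; \frac{1}{e^2(\Lambda)} \;+\; \frac{1}{6\pi^2}\,\ln\frac{\Lambda}{\mu}$$

Holding the long-distance charge $e(\mu)$ fixed, $1/e^2(\Lambda)$ hits zero, and the bare charge blows up, at the finite scale $\Lambda \sim \mu\, e^{6\pi^2/e^2(\mu)}$. This is only the leading-order sketch of Landau's argument, but it shows why screening plus a finite renormalized charge forces an infinite bare charge at a nonzero distance.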
The methods of Kadanoff, Wilson, Fisher and others make it clear that there is a path to defining (bosonic, real-action) quantum field theory which is completely fine. This method identifies the continuum limit of a quantum field theory with a second order phase transition in the parameter space of a Euclidean lattice action: all the properties of the continuum theory are obtained by tuning the lattice parameters close enough to the transition, where the correlation length diverges in lattice units.
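As a cartoon of what "tuning close to the transition" means, here is a minimal sketch for the 2D Ising model (numpy assumed; the lattice size, coupling values, sweep count, and the crude single-configuration correlation-length estimator are illustrative choices, not part of the argument): as the coupling is tuned toward the critical point, the correlation length measured in lattice units grows, which is exactly the statement that the lattice spacing is shrinking relative to the physical scale.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_ising(L, beta, sweeps=100):
    """Metropolis simulation of the 2D Ising model on an L x L periodic lattice."""
    s = rng.choice([-1, 1], size=(L, L))
    for _ in range(sweeps * L * L):
        i, j = rng.integers(L, size=2)
        # sum of the four neighboring spins
        nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        dE = 2 * s[i, j] * nb  # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] *= -1
    return s

def xi_second_moment(s):
    """Crude second-moment correlation-length estimate from one configuration."""
    L = s.shape[0]
    S = np.abs(np.fft.fft2(s)) ** 2 / L**2   # structure factor S(k)
    k1 = 2 * np.pi / L                       # smallest nonzero lattice momentum
    ratio = max(S[0, 0] / S[0, 1] - 1.0, 0.0)
    return np.sqrt(ratio) / (2 * np.sin(k1 / 2))

beta_c = 0.4406868  # exact critical coupling of the 2D Ising model
for beta in [0.30, 0.38, 0.42, 0.435]:
    xi = xi_second_moment(metropolis_ising(L=24, beta=beta))
    print(f"beta = {beta:.3f}  (beta_c = {beta_c:.4f})   xi/a ~ {xi:.1f}")
```

The numbers from a single short run are noisy, but the trend is the point: the closer the coupling sits to the transition, the larger the correlation length in lattice units, and the continuum limit is the idealization of pushing this all the way.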
This path, however, has not been made rigorous yet, and likely requires a few new mathematical ideas to prove that the limit exists. The new ideas are being formulated now, and there is some disagreement over what they are. What follows is my own opinion.
Free fields and measures
To define free field theory is trivial--- you pick every Fourier mode of the field to be a Gaussian random variable with variance given by the propagator, the inverse of the quadratic form k^2 + m^2 in the Euclidean action. That's it. There's nothing more to it (for abelian gauge theory you need to fix a gauge, but ok, and nonabelian gauge theory is never free).
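A minimal sketch of that picking algorithm for a free massive scalar on a finite periodic lattice (numpy assumed; the lattice size, mass, and the white-noise coloring trick used to keep the field real are implementation choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_free_field(N=64, m=1.0, a=1.0):
    """Draw one configuration of a free massive scalar on an N x N periodic lattice.

    Each Fourier mode is an independent Gaussian with variance proportional to the
    lattice propagator 1 / (k_hat^2 + m^2) (the overall normalization is fixed by
    the FFT conventions); coloring real white noise by sqrt(propagator) in momentum
    space realizes this while keeping phi real.
    """
    k = 2 * np.pi * np.fft.fftfreq(N, d=a)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    # lattice momentum squared: k_hat^2 = sum_mu (2/a sin(k_mu a / 2))^2
    khat2 = (2 / a * np.sin(kx * a / 2)) ** 2 + (2 / a * np.sin(ky * a / 2)) ** 2
    propagator = 1.0 / (khat2 + m ** 2)
    white = rng.standard_normal((N, N))
    phi_k = np.fft.fft2(white) * np.sqrt(propagator)
    return np.real(np.fft.ifft2(phi_k))

# sanity check: the site variance should approximate the momentum average of the propagator
N, m = 64, 1.0
k = 2 * np.pi * np.fft.fftfreq(N)
khat2 = (2 * np.sin(k / 2)) ** 2
expected = np.mean(1.0 / (khat2[:, None] + khat2[None, :] + m ** 2))
phi = sample_free_field(N=N, m=m)
print(phi.var(), expected)
```

Every statistical question about the free field reduces to drawing configurations like this and averaging; the only subtlety in the continuum is that there are infinitely many modes, which is exactly where the measure-theoretic complaint below comes in.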
Already here there is a problem. Mathematicians do not allow random picking algorithms to define a measure on infinite dimensional spaces, because if you are allowed to pick at random inside a set, every subset has measure. This contradicts the axiom of choice. Mathematicians want to have choice, so they do not allow natural measure theory, and there's no reason to go along with this kind of muddleheadedness on their part.
The principle: If you have a stochastic algorithm which works to pick a random element of a set S, then this algorithm suffices to define a measure on S, and every subset U of S has measure, equal to the probability that the algorithm picks an element of U.
This principle fails within standard set theoretical mathematics even for the most trivial random process: flipping coins to uniformly pick the digits of a random number between 0 and 1. The probability that this number lands in a "non measurable set" is ill defined. This is nonsense, of course, there are no "non-measurable sets", and the picking process proves this. But in order to make the argument rigorous, you have to reject the axiom of choice, which constructs them. The random picking argument is called "random forcing" within mathematics, and outside of random forcing models, probability is convoluted and unnatural, because you have to deal with non-measurable sets.
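To see the picking process itself, here is a minimal sketch (numpy assumed; the interval endpoints and the 53-bit truncation are arbitrary illustrative choices): flip fair coins for the binary digits, and the empirical probability of landing in any interval approaches its length, which is the Lebesgue measure the algorithm defines. A program obviously never exhibits a non-measurable set, and that is the point.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_real(n_bits=53):
    """Pick a number in [0, 1) by flipping n_bits fair coins for its binary digits."""
    bits = rng.integers(0, 2, size=n_bits)
    return float(np.sum(bits * 0.5 ** np.arange(1, n_bits + 1)))

# empirical check: the chance of landing in an interval [a, b) approaches b - a
samples = np.array([random_real() for _ in range(200_000)])
a, b = 0.2, 0.55
print(np.mean((samples >= a) & (samples < b)), b - a)
```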
Interacting fields
For interacting fields, the required result is that there are only finitely many repelling directions (relevant operators) in the space of actions near the fixed point describing the second order transition, under rescaling (renormalization group transformations). This theorem is difficult to prove rigorously. The heuristic arguments can be turned into proofs only in certain cases.
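What the statement looks like in the crudest truncation: linearize the RG flow around the nontrivial fixed point and count the eigenvalues with positive real part. The sketch below uses a toy one-loop momentum-shell flow for a mass term r and a quartic coupling u in d = 4 - eps (numpy assumed; the coefficients in the beta functions are convention dependent and chosen only for illustration). It finds exactly one repelling direction, which is why tuning a single bare parameter, the mass or the temperature, is enough to reach the transition.

```python
import numpy as np

eps = 1.0  # d = 4 - eps, so eps = 1 is three dimensions

def beta(x):
    """Toy one-loop flow for the mass r and quartic coupling u (illustrative coefficients)."""
    r, u = x
    return np.array([2.0 * r + u / (1.0 + r),
                     eps * u - u ** 2 / (1.0 + r) ** 2])

def jacobian(x, h=1e-6):
    """Central-difference Jacobian of the flow at x."""
    J = np.zeros((2, 2))
    for j in range(2):
        dx = np.zeros(2); dx[j] = h
        J[:, j] = (beta(x + dx) - beta(x - dx)) / (2 * h)
    return J

# nontrivial fixed point of this toy flow, found by hand:
#   u* = eps (1 + r*)^2   and   2 r* + eps (1 + r*) = 0
r_star = -eps / (2.0 + eps)
u_star = eps * (1.0 + r_star) ** 2
fp = np.array([r_star, u_star])

eigs = np.linalg.eigvals(jacobian(fp))
print("fixed point (r*, u*):", fp)
print("flow eigenvalues:", eigs)
print("repelling (relevant) directions:", int(np.sum(eigs.real > 0)))
```

The hard part of the real theorem is doing this counting not in a two-parameter truncation but in the full infinite-dimensional space of lattice actions, and showing that the neglected directions really are attracting.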
The construction of interacting theories in the literature is mostly restricted to resummations of perturbation theory, and is useless. Resumming perturbation series is in no way going to work for theories with nontrivial vacua (at least not the way people do it).