Sorry if this feels a bit squishy, but I'm wondering if there is any published work trying to give a fully formal definition of the notion of a useful theorem. In mathematics we all know that some theorems seem to be quite powerful, while others seem virtually useless outside of a single application. Has anyone tried to use this idea to give a Kolmogorov-complexity-inspired definition of a useful theorem? Or demonstrated that no such definition can work?
I'm thinking something along the lines of: a theorem $\phi$ is useful for an axiomatization $A$ to the extent to which adding $\phi$ to $A$ (asymptotically) reduces the length of the proofs of the theorems of $A$. For instance, maybe look at how adding $\phi$ affects the number of characters needed to prove all theorems in $T_n$, and let $n$ go to infinity. Here $T_n$ is something like the set of theorems of $A$ with fewer than $n$ symbols, or the theorems of $A$ with proofs of length $< n$.
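To be a bit more concrete, one way this might be made precise (just a sketch, and surely not the only reasonable choice of ratio or limit) is something like
$$U_A(\phi) \;=\; \limsup_{n \to \infty} \frac{\sum_{\psi \in T_n} \lvert p_A(\psi)\rvert}{\sum_{\psi \in T_n} \lvert p_{A \cup \{\phi\}}(\psi)\rvert},$$
where $\lvert p_B(\psi)\rvert$ is the length in symbols of the shortest proof of $\psi$ from $B$. Since adding $\phi$ as an axiom can only shorten (or leave unchanged) shortest proofs, this ratio is at least $1$, and a larger value would mean $\phi$ is "more useful" for $A$. Of course the limit might not behave nicely, and one could just as well use a difference rather than a ratio.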
Anyway, I provided that just to give a flavor of the kind of thing I mean. I'm wondering whether anything has been published even vaguely in this area.
EDIT: When I say work, I mean work that validates the idea that some theorems are more useful than others in some natural sense, rather than a sense that is entirely dependent on our aesthetic judgements about which results we care about proving.