Shannon information is indeed closely related to thermodynamic entropy, by some elaborate mathematical gobbledygook we needn't get into. And I think that relation is what's responsible for your confusion, because your comment clarification "Why does mass, velocity, etc boil down to 'information'" is barking up a somewhat different tree vis-a-vis the meaning of information, as follows.
Every physical system is mathematically characterized by what's very generally called a state, which evolves in time. When you ask "Why is the term 'information' used to describe a physical system", it's the state of the system that's being described. And that description typically consists of a collection of numbers specifying position, velocity, etc, just like your comment says. "Information" then simply refers to all those numbers. Certainly, if you were driving along in your car and I referred to your location and speed as "information", you wouldn't argue with that common-sense usage of the word. And in physics, there's nothing much more esoteric or profound about it.
For your jet engine example, the system is the gases, and the information comprising its state is typically temperature, pressure, volume, and some other stuff. And in this case, it's the Navier-Stokes equations https://en.wikipedia.org/wiki/Navier%E2%80%93Stokes_equations which describe the behavior of such jet engine gases. That is, the state always contains enough information so that the system's subsequent behavior can be calculated using those equations. I notice you've got lots of stackoverflow posts, so maybe just think of it as "programmed" rather than "calculated". Then the state is all the input necessary so that a program can be written to model the flow of those gases. And we're just calling all that necessary input data "information", as the little sketch below illustrates.
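If it helps, here's a minimal sketch of that idea in Python. It's not real Navier-Stokes, just a made-up 1-D diffusion toy with invented numbers, but it shows how the "state" is nothing more than a handful of numbers that a program takes as input in order to compute the system's subsequent behavior.

```python
# A toy sketch (NOT Navier-Stokes): the "state" of a little 1-D gas column is
# just an array of temperatures, and a program steps it forward in time.

def step(temps, dt=0.1, alpha=0.5):
    """Advance the toy state one time step with a simple diffusion rule."""
    new = temps[:]
    for i in range(1, len(temps) - 1):
        new[i] = temps[i] + alpha * dt * (temps[i - 1] - 2 * temps[i] + temps[i + 1])
    return new

# The state: a handful of numbers -- that's all the "information" the program needs.
state = [300.0, 300.0, 900.0, 300.0, 300.0]   # made-up temperatures
for _ in range(100):
    state = step(state)
print(state)   # the system's subsequent behavior, computed from the state alone
```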
So again, there's nothing esoteric or profound about it here. However, when you go beyond your comment "Why does mass, velocity, etc boil down to 'information'", and maybe start talking about thermodynamics (and statistical mechanics), or maybe about black holes and other stuff, then "information" typically takes on a more elaborate mathematical meaning beyond its everyday usage. And that's the tree we don't want to be barking up here.
Edit
--------
Re Jeff's "I'll dig into your other suggestions" comment, let me try to non-mathematically (and very briefly) suggest the gist of the overall information$\sim$entropy idea, especially since that wikipedia article gets pretty mathematical almost immediately. I'll instead use algorithmic complexity https://en.wikipedia.org/wiki/Kolmogorov_complexity which can (in my opinion) be discussed less mathematically, and which is related to Shannon entropy, e.g., https://www.quora.com/What-is-the-relationship-between-Kolmogorov-complexity-and-Shannon-entropy
First, entropy measures "disorder", i.e., a low-entropy system is very ordered, whereas a high-entropy system is very disordered. Consider a wall of neatly-arranged bricks (low entropy) versus a jumbled pile of bricks (high entropy). The state (as per the above discussion) of the bricks would be a complete description of each brick's position. For the wall, a very short description suffices, since we can write one little formula that works for all the bricks. For the jumbled pile, however, there's no such brick-next-to-brick relationship, and we have to laboriously write out each individual brick's position, as sketched below.
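Just to make that "one little formula" versus "write each one out" contrast concrete, here's a minimal Python sketch (the brick dimensions and counts are made up):

```python
import random

# Ordered wall: one short rule generates every brick's position.
wall = [(0.2 * col, 0.1 * row) for row in range(10) for col in range(10)]

# Jumbled pile: no such rule -- each brick's position has to be written out
# individually (random numbers stand in for the jumble here).
pile = [(random.uniform(0.0, 2.0), random.uniform(0.0, 1.0)) for _ in range(100)]
```

Same number of bricks in both cases, but the wall's description is the one-line rule, while the pile's description is the full list of 100 coordinate pairs.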
So the state of the low-entropy ordered wall can be described much more concisely/compactly than the state of the high-entropy jumbled pile.
And that's related to algorithmic complexity as follows. Imagine a string of random characters (all from the lowercase a...z alphabet) "kduwygxostlqr..." and an equal-length easily-recognizable ordered string "abcdeabcdeabcde...". So which one has more "information"? Answer: use the zip program to compress both strings, and then the length of the resulting zip file measures the original string's information content. Clearly, the ordered string is more compressible, and hence contains less information.
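If you want to actually try that experiment, here's a minimal Python sketch using zlib (the same compression family as zip) instead of an actual zip file:

```python
import random
import string
import zlib

n = 10_000
random_str  = ''.join(random.choice(string.ascii_lowercase) for _ in range(n))
ordered_str = ('abcde' * n)[:n]

print(len(zlib.compress(random_str.encode())))   # thousands of bytes: little pattern to exploit
print(len(zlib.compress(ordered_str.encode())))  # a few dozen bytes: almost pure pattern
```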
So now the entropy$\sim$information relation should be obvious: the state that can be described more concisely is the one with lower entropy. And in both examples, it's the jumbled brick pile and the random string that are high-entropy and high-information. Moreover, we can get lots more quantitative than "low" and "high". But that's where all the math illustrated in that wikipedia page comes into play.
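(Just for reference, the quantitative version for strings is Shannon's formula $H = -\sum_i p_i \log_2 p_i$: the random string runs at about $\log_2 26 \approx 4.7$ bits per character, while the ordered "abcdeabcde..." string carries essentially zero bits per character once you've spotted the pattern.)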