Measurement (in the classical sense) always involves uncertainty and has to be done repeatedly in order to estimate the value of the measured quantity and its variance:
$$
\overline{x} = \frac{1}{N}\sum_{i=1}^Nx_i,\\
\text{var}(x) = \overline{(x -\overline{x})^2} = \frac{1}{N-1}\sum_{i=1}^N(x_i-\overline{x})^2
$$
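For concreteness, here is a minimal numerical sketch of these two estimators (the data values and variable names are mine, purely illustrative):

```python
import numpy as np

# Hypothetical repeated measurements of the same quantity (arbitrary units)
x = np.array([10.2, 9.8, 10.1, 9.9, 10.0])

N = len(x)
x_bar = x.sum() / N                           # sample mean
var_x = ((x - x_bar) ** 2).sum() / (N - 1)    # unbiased sample variance

print(x_bar, var_x)  # same results as np.mean(x) and np.var(x, ddof=1)
```

The $N-1$ denominator (Bessel's correction) is what makes the variance estimator unbiased, in agreement with the formula above.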
In classical situations we can perform measurements repeatedly on the same object. E.g., if one wants to measure the weight of an object, one weighs it several times and calculates the average. If we don't see this done in everyday life, it is because we are either using very precise instruments or do not bother about precision. But repeated measurements are routinely done in any serious technological or scientific setting. In most situations measurements done on a single object should produce the same results as measurements on different but identical objects; in the latter case we call it sample averaging or ensemble averaging (the former term is more common in statistics, the latter in physics). The toy simulation below illustrates this equivalence.
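A minimal sketch, assuming Gaussian instrument noise (all the numbers here are invented), showing that repeated weighings of one object and single weighings of many identical objects estimate the same mean:

```python
import numpy as np

rng = np.random.default_rng(0)
true_weight, noise = 100.0, 0.5  # assumed true value and instrument noise (grams)

# Repeated measurements of the same object (averaging over repetitions)
same_object = true_weight + noise * rng.standard_normal(10_000)

# One measurement each on many identical copies (sample/ensemble averaging)
ensemble = true_weight + noise * rng.standard_normal(10_000)

print(same_object.mean(), ensemble.mean())  # both converge to ~100.0
```

In this classical model the two procedures are statistically identical, which is why the distinction rarely matters classically.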
In quantum mechanics repeated measurements on the same object are impossible, since a measurement destroys the state of the object, and the subsequent measurement is not done on the same object (there are some caveats, but I stick here to the basic QM that is of immediate interest to the OP). This is why averaging in QM is always understood as ensemble averaging. QM then aims at predicting the values of the averages that one would obtain in the experiment, and these mathematical predictions are what we call expectation values.
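As an illustration (my own sketch, not part of the original question): for a qubit, the expectation value $\langle\psi|Z|\psi\rangle$ predicted by QM coincides with the ensemble average of single-shot outcomes over many identically prepared copies, each measured once and then discarded:

```python
import numpy as np

rng = np.random.default_rng(1)

# State |psi> = cos(theta/2)|0> + sin(theta/2)|1>; observable Z = diag(1, -1)
theta = np.pi / 3
psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
Z = np.diag([1.0, -1.0])

expectation = psi @ Z @ psi  # <psi|Z|psi>, the theoretical prediction

# Born rule: outcome +1 with probability |<0|psi>|^2, -1 otherwise.
# Each draw represents one measurement on a fresh, identically prepared copy.
p0 = abs(psi[0]) ** 2
outcomes = rng.choice([1.0, -1.0], size=100_000, p=[p0, 1 - p0])

print(expectation, outcomes.mean())  # ensemble average ~ expectation value
```

Note that no individual outcome equals the expectation value (each shot gives $\pm 1$); only the ensemble average does, which is also the point of the linked question below.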
Related: Why can't the Uncertainty Principle be broken for individual measurements if it is a statistical law?
Remark: Another situation where one routinely talks about ensemble averaging is statistical mechanics, where the ensemble average is contrasted with the average over the same system over time. See, e.g., Definition of Ensemble.
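A toy contrast (my sketch, using a noisy relaxation process as a stand-in for a thermal system) between the time average of one system and the ensemble average over many copies; for this ergodic example the two agree:

```python
import numpy as np

rng = np.random.default_rng(2)
dt, gamma, sigma = 0.01, 1.0, 1.0
sdt = sigma * np.sqrt(dt)

# Time average: one Ornstein-Uhlenbeck-like system followed for a long time
x, samples = 0.0, []
for _ in range(100_000):
    x += -gamma * x * dt + sdt * rng.standard_normal()
    samples.append(x)
time_avg = np.mean(samples)

# Ensemble average: many copies, one snapshot each after they have relaxed
xs = rng.standard_normal(10_000)
for _ in range(1_000):
    xs += -gamma * xs * dt + sdt * rng.standard_normal(10_000)
ensemble_avg = xs.mean()

print(time_avg, ensemble_avg)  # both ~ 0: time and ensemble averages agree
```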