That would be a much simpler question to answer back in the days when temperature reading was an analogue process - reading the mercury level on a thermometer for example. For a digital gauge it is a bit more complicated.
Let's assume that the actual temperature is steady and all of the variation is due to:

1) noise in the measurement itself (for example, noise due to current variation in a resistive thermometer circuit),
2) noise in the digital extraction (for example, the resolution of the A-to-D converter, or 1/f noise due to the acquisition time),
3) noise in the data output (for example, the round-off method used by the digital display).
Further assume that steps 1) and 2) produce a measured temperature with some mean and standard deviation, and that this is converted to the digital output display in stage 3). If the standard deviation is fairly large compared with the 0.1 degree resolution, then the temperature is well described by the average of the displayed readings +/- their calculated standard deviation. But if the standard deviation is comparable to (or smaller than) the digital resolution, then the fluctuation of the displayed value no longer describes the average temperature.
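To see the two regimes concretely, here is a minimal numpy sketch (the steady temperature, noise levels and sample counts are illustrative assumptions, not values from any particular gauge):

```python
import numpy as np

rng = np.random.default_rng(0)
true_T = 20.03       # assumed steady temperature (deg C) - illustrative value
resolution = 0.1     # display resolution (deg C)

# Compare a noise level much larger than the resolution with one much smaller.
for sigma in (0.3, 0.01):
    readings = true_T + rng.normal(0.0, sigma, 10_000)          # noisy internal measurements
    displayed = np.round(readings / resolution) * resolution    # what the gauge shows
    print(f"sigma={sigma}: displayed mean={displayed.mean():.3f}, sd={displayed.std():.3f}")

# sigma=0.3 : mean ~ 20.03, sd ~ 0.3 -> average +/- sd of the display recovers the temperature.
# sigma=0.01: almost every reading shows 20.0, so the mean is biased low by ~0.03 deg
#             and the tiny spread of the display says nothing about the true temperature.
```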
For example, if the 'internal' standard deviation were ~0.01 degrees and the digital output rounds to the nearest 0.1 degree, then as the temperature shifted from 20.00 degrees to 20.10 degrees, the 'average' temperature calculated from the displayed readings would initially lag behind the actual temperature and then jump upwards as the measured temperature rises above 20.05 degrees. See the simulated data below.
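A sketch of that kind of simulation (the step size, number of samples per point and random seed are my own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
resolution = 0.1
sigma = 0.01                               # 'internal' standard deviation from the example
temps = np.linspace(20.00, 20.10, 11)      # true temperature rising in 0.01 deg steps

for T in temps:
    readings = T + rng.normal(0.0, sigma, 5_000)               # many samples at each set point
    displayed = np.round(readings / resolution) * resolution   # rounded to nearest 0.1 deg
    print(f"true {T:.2f}  average of displayed readings {displayed.mean():.3f}")

# The average of the displayed readings sticks near 20.0 while the true temperature
# climbs, then jumps to ~20.1 over a band only a few sigma wide around 20.05.
```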
In general, the rounding process can produce a systematic error of as much as half the displayed resolution (+/-0.05 degrees), as you suggest, but noise in the input signal can actually dither that error away on averaging (as your partner suggests). Unless you know the internals of the measurement system, though, it is safer to allow for the larger value as a possible systematic error.
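A quick check of how that bias depends on the input noise, again with illustrative numbers of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(2)
resolution = 0.1
true_T = 20.04            # illustrative temperature sitting 0.04 deg above a display step

# Sweep the input-noise level from zero up to well above the display resolution.
for sigma in (0.0, 0.005, 0.05, 0.2):
    readings = true_T + rng.normal(0.0, sigma, 100_000)
    displayed = np.round(readings / resolution) * resolution
    print(f"sigma={sigma:5.3f}  bias of averaged display = {displayed.mean() - true_T:+.4f}")

# With little or no noise the averaged display carries (almost) the full rounding error
# (-0.04 deg here, up to +/-0.05 deg in general). Once sigma approaches the 0.1 deg
# resolution, the noise dithers the rounding and the bias shrinks towards zero - but you
# only know which regime you are in if you know the internal noise level.
```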
