Past a certain point of complexity, I get rather confused with physical units, so I am asking a physicist for help.
I have a code that represents temperature, with a resolution of [0.5 °C], whose value ranges from 0 to 255 as the temperature goes from -41 °C up to the maximum (+86.5 °C at code 255).
The equation is:
Temperature code [units?] = 82 + 2 * Actual temperature [°C]
i.e.:
0 = -41.0 °C
1 = -40.5 °C
2 = -40.0 °C
...
81 = -0.5 °C
82 = 0.0 °C
83 = 0.5 °C
...
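For concreteness, here is a minimal Python sketch of the conversion implied by the equation and the mapping above (the function names are my own, not from the actual code):

```python
def temperature_to_code(temp_c: float) -> int:
    """Convert an actual temperature [°C] to the raw code [counts]."""
    return round(82 + 2 * temp_c)

def code_to_temperature(code: int) -> float:
    """Convert the raw code [counts] back to a temperature [°C]."""
    return (code - 82) * 0.5

# Spot-check against the mapping listed above:
assert temperature_to_code(-41.0) == 0
assert temperature_to_code(0.0) == 82
assert code_to_temperature(83) == 0.5
assert code_to_temperature(255) == 86.5  # maximum representable temperature
```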
Are the units of the "Temperature code" in the included graph one of the following?
[0.5 - 82 °C]
[0.5 - 41 °C]
[-82 + 0.5 °C]
[-41 + 0.5 °C]
[0.5 °C - 82]
[0.5 °C - 41]
[0.5 + 82 °C]
[0.5 + 41 °C]
[82 + 0.5 °C]
[41 + 0.5 °C]
[0.5 °C + 82]
[0.5 °C + 41]
If not, what are the units?
EDIT: CLARIFICATION.
I know what the conversion is.
I know that the internal units of the code are "counts".
I know that, if there were no offset, the units of the code would be [0.5 °C].
What I don't know is how to include the effect of the offset in the units.
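One way I could document this in the code itself (a sketch of a possible convention, not an established answer to the units question) is to keep the scale and the offset as two separate constants, each with its own unambiguous unit, rather than trying to fold both into a single unit string:

```python
# Hypothetical documentation style: the scale and the offset are recorded as
# two separately-united constants instead of one combined "unit" for the code.
SCALE_C_PER_COUNT = 0.5  # [°C / count]
OFFSET_COUNTS = 82       # [count] (code value corresponding to 0 °C)

def code_to_temperature(code: int) -> float:
    """code [count] -> temperature [°C]; units: count * (°C/count) = °C."""
    return (code - OFFSET_COUNTS) * SCALE_C_PER_COUNT

assert code_to_temperature(0) == -41.0
assert code_to_temperature(82) == 0.0
```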
Why do I need to know?
- To document the code correctly
- To perform a strict units analysis of the data conversions and confirm that the code converts correctly; for that, each variable must be documented with its correct units
- To help others understand the code