
Past a certain point of complexity, I get rather confused with physical units, so I am asking a physicist for help.

I have a code that represents temperature, with a resolution of [0.5 °C], whose value ranges from 0 to 255 as the temperature goes from -41 °C to the max.

The equation is:

Temperature code [units?] = 82 + 2 * Actual temperature [°C]

i.e.:

0 = -41.0 °C
1 = -40.5 °C
2 = -40.0 °C
...
81 = -0.5 °C
82 = 0 °C
83 = 0.5 °C
...
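The mapping above can be sketched as a pair of helper functions (a sketch in Python; the function names are mine, not from any existing code):

```python
def celsius_to_code(temp_c):
    """Encode a temperature in °C as the 8-bit code: code = 82 + 2 * temp_c."""
    code = 82 + round(2 * temp_c)
    if not 0 <= code <= 255:
        raise ValueError("temperature outside the representable range")
    return code

def code_to_celsius(code):
    """Decode the 8-bit code back to °C: temp_c = (code - 82) / 2."""
    return (code - 82) / 2
```

For example, `celsius_to_code(-40.5)` gives 1 and `code_to_celsius(83)` gives 0.5, matching the table above.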

[Graph: Temperature Code (vertical axis) against Actual Temperature in °C (horizontal axis)]

Are the units of the "Temperature Code" in the included graph one of the following?

[0.5 - 82 °C]
[0.5 - 41 °C]
[-82 + 0.5 °C]
[-41 + 0.5 °C]
[0.5 °C - 82]
[0.5 °C - 41]
[0.5 + 82 °C]
[0.5 + 41 °C]
[82 + 0.5 °C]
[41 + 0.5 °C]
[0.5 °C + 82]
[0.5 °C + 41]

If not, what are the units?

EDIT: CLARIFICATION.

I know what the conversion is.

I know that the internal units of the code are "counts".

I know that, if there were no offset, the units of the code would be [0.5 °C].

What I don't know is how to include the effect of the offset in the units.

Why do I need to know?

  • To document the code correctly
  • For the sake of doing a strict units analysis in data conversions, to confirm that the code converts correctly; for that, each variable must be documented with the correct units
  • To help others understand the code

4 Answers

4

A range of 0 to 255 sounds like the values that can be contained in a single byte.

The person who created this "code" wanted to be able to represent "reasonable" temperatures with a single byte - they decided they wanted resolution better than 1°C, and they wanted to go down to "about as cold as you can get".

This means that the conversion is as follows:

From °C to code:

code = 2*(C + 41)

From code to °C:

C = 0.5*(code - 82)

This makes the maximum temperature that can be represented 86.5 °C, the value you get when code = 255.
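A quick sanity check of those endpoints (a sketch; the function name is mine):

```python
def code_to_c(code):
    # The inverse calibration from above: C = 0.5 * (code - 82)
    return 0.5 * (code - 82)

print(code_to_c(0))    # lowest representable temperature: -41.0
print(code_to_c(255))  # highest representable temperature: 86.5
```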

Usually, instruments do not measure physical quantities directly in physical units: somewhere "under the covers" they measure something else, something you can then translate into the units you want. A spring balance measures displacement; you then convert displacement to newtons.

In this case, you might have a thermistor with some circuitry and finally an ADC - measuring some voltage and expressing it as a single byte. You have a "calibration curve" which is the formula I gave above. If you actually measured the response of the device carefully you might find that the factors are slightly different.

In either case, the "units" are really contained in the calibration factors. So in my first equation, the number 2 has units "ADC/°C". And the unit on your vertical axis should just be "ADC units", "byte units", "device units", or whatever you feel comfortable calling it.

Floris
2

The units are probably degrees Celsius.

Whenever you add physical quantities together, they must have the same units. You can't add meters to kilograms (although you can multiply them or divide them). The result of such a thing would be nonsensical. However, you can add meters to meters.

In your case, you are multiplying degrees by 2. If '2' is unitless, this is simple. The result is in degrees Celsius. You are then adding a number to this quantity, so this quantity must also be in degrees Celsius. The result is, therefore, in degrees Celsius.

However, if the '2' is not unitless, you have a problem. It may be a single unit (kilograms) or a combination of multiple units (kilograms times seconds divided by meters). In this case, the answer will be in this strange combination of units multiplied by degrees Celsius.
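For the case where the '2' does carry units, a tiny unit-tagged sketch makes the bookkeeping explicit (the Quantity class is purely illustrative, and it implements only the one unit product this example needs):

```python
class Quantity:
    """A value tagged with a unit string; addition requires matching units."""
    def __init__(self, value, unit):
        self.value, self.unit = value, unit

    def __add__(self, other):
        if self.unit != other.unit:
            raise TypeError(f"cannot add [{self.unit}] to [{other.unit}]")
        return Quantity(self.value + other.value, self.unit)

    def __mul__(self, other):
        # Only the cancellation this example needs: counts/degC * degC -> counts
        if self.unit == "counts/degC" and other.unit == "degC":
            return Quantity(self.value * other.value, "counts")
        raise TypeError("unit product not implemented in this sketch")

scale  = Quantity(2, "counts/degC")   # the '2' carries units of counts per °C
offset = Quantity(82, "counts")       # the '82' is then in counts, not °C
temp   = Quantity(-40.5, "degC")

code = scale * temp + offset          # degC cancels; the sum is counts + counts
print(code.value, code.unit)          # 1.0 counts -- matches the table: code 1 = -40.5 °C
```

Trying to add the offset in counts to a temperature in °C raises a TypeError, which is exactly the "can't add meters to kilograms" rule in executable form.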


The question has been edited since I wrote up my answer quite a few hours ago. My answer properly addresses the original question (to the best of my knowledge) but fails to address the current one, because it's a bit unclear to me what exactly the OP is asking. As it stands, it appears to be more of a programming question than a physics question. I'm also not a physicist (yet!), but I hope that doesn't invalidate my contribution.

HDE 226868
    For more on mathematical operations on units, see this post. – Kyle Kanos Nov 28 '14 at 17:19
  • No, of course not: the units are not degrees Celsius. If that were the case, the red line would be 45 degrees and go through the 0.0 origin. I am pleased for you that your post is being up-voted, but I will reserve my vote for a valid answer. – Davide Andrea Nov 28 '14 at 17:42
  • @DavideAndrea It makes complete sense. Since when is a line of the form $y=2x+b$ at a 45 degree angle, or going through the origin? – HDE 226868 Nov 28 '14 at 22:55
2

For the sake of doing a strict units analysis in data conversions, to confirm that the code converts correctly; for that, each variable must be documented with the correct units.

Then you're straight outa luck (or some cruder version of SOL). This unsigned eight bit integer contains a value that represents a temperature in a non-standard unit. The value might be

  • A custom representation dreamed up a long time ago to represent temperature when every byte of storage on a computer was precious, or

  • A custom representation dreamed up recently to represent temperature in a transmitted data stream where every single byte of transmitted data oftentimes still is precious, or

  • The output of an analog temperature sensor that has been digitized using an eight-bit analog-to-digital converter (ADC) with a digital step of 0.5 °C and a zero value of -41 °C.

Whatever the case may be, your unsigned eight bit temperature value is not in any standard temperature scale. This means you cannot meet your organization's artificial requirement to document each variable with the correct units. Some variables just are not represented in a standard SI or customary unit. This problem with custom representational units is a rather common occurrence when one works at the low level of processing data from a sensor, from an archaic archive, or from a transmission stream.

Just because you have to deal with that custom representation of temperature does not mean that you have to inflict that pain on everyone else. Data encapsulation is a 40+ year old concept. Use it! Hide that non-standard representation from the users of your code.
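That encapsulation might look like the following sketch (the class and attribute names are mine, not from the OP's code):

```python
class TemperatureSensor:
    """Hides the 8-bit 'counts' representation behind a °C interface."""
    _SCALE = 0.5     # °C per count
    _OFFSET = -41.0  # °C corresponding to count 0

    def __init__(self, raw_counts):
        if not 0 <= raw_counts <= 255:
            raise ValueError("raw reading must fit in one byte")
        self._raw = raw_counts  # private: users never see counts

    @property
    def celsius(self):
        """The reading in °C; the only thing users of the class see."""
        return self._OFFSET + self._SCALE * self._raw

reading = TemperatureSensor(83)
print(reading.celsius)  # 0.5
```

Users of the class work entirely in °C; the counts representation can change without touching any of their code.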

David Hammen
  • The value might be... -- It's #2 in your list. – Davide Andrea Nov 28 '14 at 21:04
  • In that case, I personally would (a) document the incoming value as having units of "counts", because that's exactly what you are getting; and (b) not expose that internal representation to the users of the data. Your users want a value that makes sense. So do that. The external interface should provide a value that has units that makes sense. That way the users of the data won't get in trouble. – David Hammen Nov 28 '14 at 23:44
  • Re "document counts": if I did that, then EVERY variable in the program (there are literally thousands of them) would have units of "counts", and that would be rather useless. Re "the external interface": indeed, the user sees °C, with a resolution of 0.5 °C. – Davide Andrea Nov 29 '14 at 00:46
  • @DavideAndrea - If you want the users of your system to see °C (or °F, or kelvins), provide them with a variable or an accessor function that gives them just that. Do not make them have to do a conversion. Expecting your users to perform a conversion is the path to making a spacecraft crash into Mars. – David Hammen Nov 29 '14 at 02:06
  • With regard to "counts", that's exactly what you have. Your formula of $\text{count}/2 - 41$ that supposedly converts counts to degrees Celsius is almost certainly incorrect. Along with random noise, sensors (and telemetered data from sensors) exhibit biases, scale factor errors, and non-linearities. Most of your users don't care. Give them a way to access data that is a step above the raw data. Some of your users care a lot about those sensor issues. You need to give them a way to access the raw data -- and to change the mechanism by which you convert counts to meaningful values. – David Hammen Nov 29 '14 at 02:19
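The separation suggested in the last comment, raw counts plus a replaceable conversion, could be sketched like this (the names and the default linear calibration are mine, purely illustrative):

```python
def linear_calibration(counts, scale=0.5, offset=-41.0):
    """Nominal conversion: 0.5 °C per count, -41 °C zero point."""
    return offset + scale * counts

class Channel:
    """Exposes calibrated values while keeping the calibration swappable."""
    def __init__(self, convert=linear_calibration):
        self._convert = convert

    def value(self, raw_counts):
        return self._convert(raw_counts)

nominal = Channel()
print(nominal.value(82))  # 0.0 with the nominal calibration

# A user who has characterized the sensor's bias and scale-factor error
# can substitute a corrected curve without touching the raw data path:
corrected = Channel(lambda counts: -41.2 + 0.501 * counts)
```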
1

While it is not a general answer, there is an engineering convention under which thermometer readings on non-absolute scales and temperature differences on those same scales get both different written notation and different spoken readings.

The prescribed convention is:

  • Thermometer readings get the degree symbol before the unit as in $$0 ^\circ\mathrm{C} = 32^\circ\mathrm{F}$$ which is read

    "zero degrees Celsius equals thirty-two degrees Fahrenheit".

  • Temperature differences reverse the order of the degree marker and the unit, so that, when discussing the relative size of the scale increments, one writes $$1 \,\mathrm{C}^\circ = 9/5 \,\mathrm{F}^\circ$$ and reads

    "one Celsius degree equals nine-fifths of a Fahrenheit degree".

I've only ever seen this in textbooks in the physics world, but I'm told there are engineering disciplines where it is expected.