So typically you would write $(1.427 \pm 0.150) \cdot 10^3 \text{ (units)}$ or so; technically you need the parentheses so that the $\cdot\,10^3$ applies to both numbers, but of course that’s up to you. Actually you would probably want to round to $(1.43 \pm 0.15) \cdot 10^3 \text{ (units)}$, since a shift of 0.02 standard deviations is pure pedantry that is not going to help anyone.
Atomic and particle physicists in particular got a little tired of typing the $\pm$ symbol and started writing this instead as $1.427(150)\cdot 10^3$ or $1.43(15)\cdot 10^3$, the idea being that the digits in parentheses give the uncertainty in the same number of final digits of the value. So for example if I go to the Wikipedia page for protons, I find that their mass is listed as
$$
m_\text{proton} = 938.27208816(29)\ \text{MeV}/c^2
$$
This unit, “mega-electron-volts per $c^2$,” refers to the amount of mass which, if half of it were matter and half antimatter and the two were allowed to annihilate, would release the energy needed to accelerate an electron through one million volts. The $c^2$ refers to the $E = mc^2$ mass-energy equivalence; the electron-volt is just a common unit of energy in particle physics (where people “just know” that the thermal energy $k_B T$ at the standard state temperature, 25°C, is 25.7 meV, that electrons have a mass of 511 keV, and that protons and neutrons have masses of about 0.94 GeV).
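To make the unit concrete, you can run the $E = mc^2$ conversion explicitly, using $1\ \text{eV} = 1.602 \times 10^{-19}\ \text{J}$ and $c = 2.998 \times 10^8\ \text{m/s}$:
$$
1\ \text{MeV}/c^2 = \frac{10^6 \times 1.602\times 10^{-19}\ \text{J}}{\left(2.998\times 10^8\ \text{m/s}\right)^2} \approx 1.783\times 10^{-30}\ \text{kg},
$$
so the proton’s $938.27\ \text{MeV}/c^2$ works out to the familiar $1.673\times 10^{-27}\ \text{kg}$.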
The above expression is actually shorthand for saying that there is a 68% chance that the mass lies in the range $$938.27208787\ \text{MeV}/c^2 < m_\text{proton} < 938.27208845\ \text{MeV}/c^2.$$ That is, we subtract $29$ from the last two digits of the number, and add $29$ to the last two digits, to get the $1\sigma$ confidence interval; if we want more confidence we can double that number to get the $2\sigma$ interval (95% confidence), or triple it to get the $3\sigma$ interval (99.7% confidence).
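If you want to automate that expansion, here is a minimal Python sketch; the function name and the regular expression are my own (not any standard library’s), and it only handles the simple `digits.digits(digits)` form:

```python
import re

def parse_parens(s):
    """Turn e.g. '938.27208816(29)' into (value, sigma).

    The digits in parentheses are the 1-sigma uncertainty in the
    last digits of the value, so here sigma = 29e-8.
    """
    m = re.fullmatch(r"(\d+)\.(\d+)\((\d+)\)", s)
    if m is None:
        raise ValueError(f"cannot parse {s!r}")
    whole, frac, err = m.groups()
    value = float(f"{whole}.{frac}")
    sigma = int(err) * 10.0 ** (-len(frac))
    return value, sigma

value, sigma = parse_parens("938.27208816(29)")
for n, pct in [(1, 68), (2, 95), (3, 99.7)]:
    lo, hi = value - n * sigma, value + n * sigma
    print(f"{n} sigma ({pct}%): {lo:.8f} .. {hi:.8f}")
```

Running it prints exactly the interval above for $1\sigma$, then the doubled and tripled versions.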
The only other thing I will say here is that I find a lot of questionable treatments of uncertainty aimed at undergraduates, and while there are exact mathematical formulas for combining standard deviations under many operations, you can usually much more easily program your formula into an Excel spreadsheet, put your measured values into one column, use the Box-Muller transform to generate randomized values for all the parameters that you know, and read off approximate results. Here is a Google Sheet showing the technique, if you wish to use it.
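The same Monte Carlo idea works outside a spreadsheet, too. Here is a sketch in Python, where the propagated formula and the measured values are invented purely for illustration:

```python
import math
import random

def box_muller():
    """One standard-normal sample via the Box-Muller transform."""
    u1 = 1.0 - random.random()  # shift into (0, 1] so log() is safe
    u2 = random.random()
    return math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)

def noisy(mean, sigma):
    """A randomized draw for a parameter known as mean +/- sigma."""
    return mean + sigma * box_muller()

# Illustrative formula f = a / b**2, with made-up measurements
# a = 9.81 +/- 0.05 and b = 2.00 +/- 0.03: redraw every uncertain
# parameter, recompute, and look at the spread of the results.
samples = [noisy(9.81, 0.05) / noisy(2.00, 0.03) ** 2
           for _ in range(100_000)]

mean = sum(samples) / len(samples)
std = math.sqrt(sum((x - mean) ** 2 for x in samples) / (len(samples) - 1))
print(f"f = {mean:.3f} +/- {std:.3f}")  # roughly 2.453 +/- 0.075
```

(In NumPy this collapses to a single `numpy.random.normal` call; Box-Muller is spelled out here only because it is what you would type into a spreadsheet cell.)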
Funny anecdote: at one point my wife wanted to know how weight correlated with jean sizes at various companies, “what would I need to weigh to fit into size-X jeans at such-and-such a place?” I asked her for some historical data: how much she weighed when she fit into various sizes, how those sizes were listed as measuring on the retailers’ web sites, and so forth. Then I built a physical model and did some polynomial interpolation to solve it. The calculation produced a reasonable-looking answer, but I dug a little deeper and used this Monte Carlo technique to see how sensitive the answer was to the exact numbers I was assuming: “maybe these retailers misreport their jean sizes by 0.25 inches or so, maybe my wife misremembers her weight by a kilo or two,” and so on. The result was that something like 35% of my simulations generated a result that was obvious garbage. (Like, she would have to weigh half of what a supermodel weighs to reach a reasonably-achievable waist size, or the predicted weight was one she had already been at while wearing a different size.) So I could tell my wife confidently, “Here is the number it gives, but I have to warn you that apparently your retailers’ sizing is so inconsistent that, given the parameters you have given me, I do not have a consistent way to model this; tiny variations in the assumptions produce huge changes in the polynomial fit and huge changes in its estimate of the weight. So this is a guess, but I would say I have absolutely no confidence in it, because we are comparing all of these different brands.”