When 'proper' scientists use temperature data measured to one decimal place in a series of calculations (say, for example, in numerical weather prediction, NWP), do they round to one decimal place at every step of the calculation, or do they carry more precision throughout and only round the final result to one decimal place? My intuition is that if you rounded to one decimal place at each step, every step would add to the margin of error in your final result due to cumulative rounding errors. Can anyone stop my brain from hurting and confirm how this is actually done?
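
To make concrete what I'm worried about, here's a toy Python sketch (purely made-up numbers, nothing like real NWP code) comparing "round at every step" against "round only at the end":

```python
# Toy illustration of cumulative rounding error.
# Start from a temperature measured to one decimal place and apply many
# small per-step adjustments (the 0.04 per step is an arbitrary example).

start = 15.0      # measured value, one decimal place
step = 0.04       # hypothetical small adjustment applied each step
n_steps = 1000

# Method A: round the intermediate result to one decimal place every step.
t_rounded = start
for _ in range(n_steps):
    t_rounded = round(t_rounded + step, 1)  # 15.04 rounds back down to 15.0

# Method B: carry full precision, round only the final answer.
t_full = round(start + n_steps * step, 1)

print(f"rounded each step: {t_rounded:.1f}")  # 15.0 -- the change vanishes
print(f"rounded at end:    {t_full:.1f}")     # 55.0
```

In this (admittedly extreme) example, rounding at every step throws away the 0.04 adjustment entirely, so after 1000 steps the two methods disagree by 40 degrees. Is that basically why intermediate rounding is avoided?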