The history of thermometry highlights the basic challenges involved in recording and using accurate temperature measurements. Adopting a universal scale (whether Fahrenheit, Celsius, Kelvin, Rankine, or another more obscure scale) makes the establishment of scientific standards possible, as well as the direct comparison of temperature data from place to place and instrument to instrument. It also hints at the importance of "reproducibility" in thermometry.
A thermometer measuring the ice point of water should read 0°C (32°F) consistently, not 0°C (32°F) one time, 1°C (33.8°F) the next, and −1°C (30.2°F) the time after that. Such drift would render any adopted universal scale meaningless for comparing the relative temperatures of dissimilar materials and environments.
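For readers who want to double-check the scale comparisons above, the standard conversion formula between Celsius and Fahrenheit (F = C × 9/5 + 32) can be sketched in a few lines of Python. The function names here are illustrative, not from any particular library:

```python
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit: F = C * 9/5 + 32."""
    return celsius * 9 / 5 + 32

def f_to_c(fahrenheit):
    """Convert degrees Fahrenheit to degrees Celsius: C = (F - 32) * 5/9."""
    return (fahrenheit - 32) * 5 / 9

# The ice point of water on both scales:
print(c_to_f(0))   # 32.0
print(f_to_c(32))  # 0.0
```

A reading that wanders by a degree on one scale wanders by 1.8 degrees on the other, which is why a shared, consistent scale only helps if the instrument itself repeats its readings.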
Reproducibility, accuracy, and resolution are the foundations on which all good thermometer technology is built. Some expensive thermometers on the market today are quite precise, and fairly accurate on occasion, but are not reliably reproducible. That means you may or may not get accurate temperature data, depending on how the thermometer performs at the particular moment you take your measurement.
One common challenge to the reproducibility of thermometers is a phenomenon known as hysteresis. With hysteresis, the physical properties of an instrument, such as a thermometer probe, are temporarily changed by the process of taking a measurement. Thermometers exhibiting hysteresis will display different temperatures in the same material, say, an ice bath, over a short period of time and are therefore not reproducible.
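As a rough illustration of what an ice-bath reproducibility check looks like, the Python sketch below compares the spread of repeated readings against a tolerance. The readings and the 0.5°C tolerance are hypothetical, chosen only to show the idea, not drawn from any real instrument:

```python
def is_reproducible(readings_c, tolerance_c=0.5):
    """Return True if repeated readings of the same reference point
    (e.g. an ice bath, nominally 0 deg C) stay within the allowed spread.
    The default 0.5 deg C tolerance is an assumption for illustration."""
    spread = max(readings_c) - min(readings_c)
    return spread <= tolerance_c

# Hypothetical readings from two thermometers left in the same ice bath:
stable_probe = [0.1, 0.0, -0.1, 0.1]    # spread 0.2 deg C -> reproducible
drifting_dial = [0.0, 1.0, -1.0, 0.5]   # spread 2.0 deg C -> not reproducible

print(is_reproducible(stable_probe))    # True
print(is_reproducible(drifting_dial))   # False
```

The point of the check is that accuracy of any single reading is not enough; it is the spread across repeated measurements that reveals hysteresis.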
This problem is common with mechanical thermometers like bimetal dial thermometers, but it can also affect electronic thermometers like instant-read digitals. An extended resting period, allowing the instrument's physical properties to return to normal, can sometimes restore accuracy, but often only temporarily. Dial thermometers, in particular, may need to be recalibrated regularly.