Before relying on a new experimental device, a scientist first establishes its accuracy. A new detector is calibrated by observing its responses to known input signals, and the results of this calibration are compared against the expected response.
Simulations and analysis using software should be held to the same standards!
Adapted from Testing and Continuous Integration with Python, created by Kathryn Huff
Further motivation for testing:
In software tests, expected results are compared with observed results in order to establish accuracy:
```python
def fahrenheit_to_celsius(temp_f):
    """Converts temperature in Fahrenheit to Celsius."""
    temp_c = (temp_f - 32.0) * (5.0 / 9.0)
    return temp_c

def test_fahrenheit_to_celsius():
    temp_c = fahrenheit_to_celsius(temp_f=100.0)
    expected_result = 37.777777
    assert abs(temp_c - expected_result) < 1.0e-6
```
Why do we not compare all digits of the result directly with the expected value?
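Because floating-point arithmetic is inexact: 5.0/9.0 cannot be represented exactly in binary, so results carry small rounding errors and exact equality checks fail. A minimal sketch (the tolerance 1e-6 matches the test above; `math.isclose` is one standard way to express it):

```python
import math

def fahrenheit_to_celsius(temp_f):
    """Converts temperature in Fahrenheit to Celsius."""
    return (temp_f - 32.0) * (5.0 / 9.0)

# Exact comparison fails: the computed value is 37.77777777..., not 37.777777.
print(fahrenheit_to_celsius(100.0) == 37.777777)  # False

# Comparison within a tolerance succeeds.
print(math.isclose(fahrenheit_to_celsius(100.0), 37.777777, abs_tol=1.0e-6))  # True
```

Testing frameworks offer the same idea ready-made, e.g. `pytest.approx`.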
Suiting up to modify untested code:
```python
def fahrenheit_to_celsius(temp_f):
    temp_c = (temp_f - 32.0) * (5.0 / 9.0)
    return temp_c

temp_c = fahrenheit_to_celsius(temp_f=100.0)
print(temp_c)
```
```python
f_to_c_offset = 32.0
f_to_c_factor = 0.555555555
temp_c = 0.0

def fahrenheit_to_celsius_bad(temp_f):
    global temp_c
    temp_c = (temp_f - f_to_c_offset) * f_to_c_factor

fahrenheit_to_celsius_bad(temp_f=100.0)
print(temp_c)
```
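The second version is harder to test: it returns nothing, so a test must inspect the mutated global, and the truncated constant 0.555555555 is a latent precision bug. A sketch of how a test would expose it (the input 18032.0 is chosen here purely for illustration, since the error grows with temperature):

```python
f_to_c_offset = 32.0
f_to_c_factor = 0.555555555  # truncated 5/9 -- a latent precision bug
temp_c = 0.0

def fahrenheit_to_celsius_bad(temp_f):
    global temp_c
    temp_c = (temp_f - f_to_c_offset) * f_to_c_factor

# The test cannot check a return value; it must reach into the global
# that the call mutates as a side effect.
fahrenheit_to_celsius_bad(temp_f=18032.0)
print(abs(temp_c - 10000.0) < 1.0e-6)  # False: the error here is about 1e-5
```

With the side-effect-free version above, the same check is a one-line assertion on the return value.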
Would you trust a code …
- … when its tests do not pass?
- … if there are no tests at all?
- … if the tests are never run?
Tests make sure that expected functionality is preserved
Tests help both users and developers of your code
Tests help manage complexity