Testing locally
Questions
How hard is it to set up a test suite for a first unit test?
Exercise
In this exercise we will write a simple function and use one of the language-specific test frameworks to test it.
This approach works for almost any project and doesn't rely on any other servers or services.
The downside is that you have to remember to run the tests yourself.
Local-1: Create a minimal example (15 min)
In this exercise, we will create a minimal example using pytest, run the test, and show what happens when a test breaks.
Create a new directory and change into it:
$ mkdir local-testing-example
$ cd local-testing-example
Create an example file and paste the following code into it:
Create example.py with content:
def add(a, b):
    return a + b

def test_add():
    assert add(2, 3) == 5
    assert add('space', 'ship') == 'spaceship'
This code contains one genuine function and a test function.
pytest finds any functions beginning with test_ and treats them as tests.
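To illustrate the naming convention, here is a small sketch (the helper name below is made up for this example): pytest collects only functions whose names begin with test_, so a differently named check function is silently skipped during collection.

```python
def add(a, b):
    return a + b

def check_add():      # NOT collected by pytest: name lacks the "test_" prefix
    assert add(1, 1) == 2

def test_add_zero():  # collected and run, because the name starts with "test_"
    assert add(0, 5) == 5
```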
Create example.R with content:
if (!require(testthat)) install.packages("testthat")
add <- function(a, b) {
  return(a + b)
}
test_that("Adding integers works", {
  res <- add(2, 3)
  # Test that the result has the correct value
  expect_identical(res, 5)
  # Test that the result is numeric
  expect_true(is.numeric(res))
})
A test with testthat is created by calling test_that() with a test name and code as arguments.
Create example.jl with content:
function myadd(a, b)
    return a + b
end
using Test
@testset "myadd" begin
    @test myadd(2, 3) == 5
end
The package Test.jl handles all testing. A test set is added with @testset and a test itself with @test.
Run the test
$ pytest -v example.py
============================================================ test session starts =================================
platform linux -- Python 3.7.2, pytest-4.3.1, py-1.8.0, pluggy-0.9.0 -- /home/user/pytest-example/venv/bin/python3
cachedir: .pytest_cache
rootdir: /home/user/pytest-example, inifile:
collected 1 item
example.py::test_add PASSED
========================================================= 1 passed in 0.01 seconds ===============================
Yay! The test passed!
Hint for participants trying this inside Spyder or IPython: try !pytest -v example.py.
$ Rscript example.R
Loading required package: testthat
Test passed 🎉
Yay! The test passed!
Note that the emoji is random and might be different for you.
$ julia example.jl
Test Summary: | Pass Total Time
myadd | 1 1 0.0s
Yay! The test passed!
Let us break the test!
Introduce a code change which breaks the code and check whether our test detects the change:
$ pytest -v example.py
============================================================ test session starts =================================
platform linux -- Python 3.7.2, pytest-4.3.1, py-1.8.0, pluggy-0.9.0 -- /home/user/pytest-example/venv/bin/python3
cachedir: .pytest_cache
rootdir: /home/user/pytest-example, inifile:
collected 1 item
example.py::test_add FAILED
================================================================= FAILURES =======================================
_________________________________________________________________ test_add _______________________________________
    def test_add():
>       assert add(2, 3) == 5
E       assert -1 == 5
E        +  where -1 = add(2, 3)

example.py:6: AssertionError
========================================================= 1 failed in 0.05 seconds ==============
Notice how pytest is smart and includes context: lines that failed, values of the relevant variables.
$ Rscript example.R
── Failure: Adding integers works ──────────────────────────────
`res` not identical to 5.
1/1 mismatches
[1] -1 - 5 == -6
Error: Test failed
Execution halted
testthat tells us exactly which test failed and how, but does not include more context.
$ julia example.jl
myadd: Test Failed at /home/user/local-testing-example/example.jl:7
Expression: myadd(2, 3) == 5
Evaluated: -1 == 5
Stacktrace:
[1] macro expansion
@ ~/opt/julia-1.10.0/share/julia/stdlib/v1.10/Test/src/Test.jl:672 [inlined]
[2] macro expansion
@ ~/local-testing-example/example.jl:7 [inlined]
[3] macro expansion
@ ~/opt/julia-1.10.0/share/julia/stdlib/v1.10/Test/src/Test.jl:1577 [inlined]
[4] top-level scope
@ ~/local-testing-example/example.jl:7
Test Summary: | Fail Total Time
myadd | 1 1 0.6s
ERROR: LoadError: Some tests did not pass: 0 passed, 1 failed, 0 errored, 0 broken.
in expression starting at /home/user/local-testing-example/example.jl:6
Notice how Test.jl is smart and includes context: the lines that failed, plus the evaluated and expected results.
(optional) Local-2: Create a test that considers numerical tolerance (10 min)
Let's see an example where the test has to be more clever in order to avoid a false negative.
In the above exercise we compared integers. In this optional exercise we want to learn how to compare floating point numbers, since they are more tricky (see also "What Every Programmer Should Know About Floating-Point Arithmetic").
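A quick way to see the problem (Python shown here; the same holds for R and Julia, which also use IEEE 754 double precision):

```python
# 0.1 and 0.2 have no exact binary representation,
# so their sum is not exactly 0.3
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
```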
The following test will fail and this might be surprising. Try it out:
def add(a, b):
    return a + b

def test_add():
    assert add(0.1, 0.2) == 0.3
add <- function(a, b) {
  return(a + b)
}
test_that("Adding floats works", {
  expect_identical(add(0.1, 0.2), 0.3)
})
function myadd(a, b)
    return a + b
end
using Test
@testset "myadd" begin
    @test myadd(0.1, 0.2) == 0.3
end
Your goal: find a more robust way to test this addition.
Solution: Local-2
One solution is to use pytest.approx:
from pytest import approx

def add(a, b):
    return a + b

def test_add():
    assert add(0.1, 0.2) == approx(0.3)
But maybe you didn't know about pytest.approx and did this instead:
def test_add():
    result = add(0.1, 0.2)
    assert abs(result - 0.3) < 1.0e-7
This is OK, but the 1.0e-7 can be a bit arbitrary.
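An alternative from the Python standard library (not part of pytest) is math.isclose, which by default uses a relative tolerance of 1e-9 instead of a hand-picked absolute cutoff:

```python
import math

def add(a, b):
    return a + b

def test_add():
    # math.isclose compares with a relative tolerance (default rel_tol=1e-09),
    # which scales with the magnitude of the operands
    assert math.isclose(add(0.1, 0.2), 0.3)
```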
One solution is to use expect_equal, which allows for roundoff errors:
test_that("Adding floats works with equal", {
  res <- add(0.1, 0.2)
  expect_equal(res, 0.3)
  expect_true(is.numeric(res))
})
But maybe you didn't know about it and used the "less than" comparison of expect_lt instead:
test_that("Adding floats works with lt", {
  res <- add(0.1, 0.2)
  expect_lt(abs(res - 0.3), 1.0e-7)
  expect_true(is.numeric(res))
})
This is OK, but the 1.0e-7 can be a bit arbitrary.
One solution is to use the ≈ operator (isapprox):
@testset "Add floats with approx" begin
    @test myadd(0.1, 0.2) ≈ 0.3
    # Variant specifying an absolute tolerance
    @test myadd(0.1, 0.2) ≈ 0.3 atol=1.0e-7
end
But maybe you didn't know about ≈ and did this instead:
@test abs(myadd(0.1, 0.2) - 0.3) < 1.0e-7
This is OK, but the 1.0e-7 can be a bit arbitrary.
Keypoints
Each test framework has its own way of collecting and running all test functions, e.g. functions beginning with test_ for pytest.
Python, Julia and C/C++ have better tooling for automated tests than Fortran, and you can use those also for Fortran projects (via iso_c_binding).