What programming language(s) are you normally working with (add an o for your answer):
What does testing your code mean for you? How do you test?
Material: https://coderefinery.github.io/testing/
Material: https://coderefinery.github.io/testing/motivation/
Your questions and comments here:
How do you (approximately) measure code coverage? If you have a function and a single test invoking that function, do you count it as +1 function tested?
After running pytest, I also got two directories, one called __pycache__ and another called .pytest_cache. Could you comment a bit on their use and purpose?
__pycache__ is a directory that contains files that Python (not pytest in particular) produces during execution and might reuse later when Python runs again, saving time (it can contain, e.g., bytecode-compiled Python code and things like that). This is the general idea of a "cache".
.pytest_cache is a cache directory specific to pytest.
Material: https://coderefinery.github.io/testing/locally/
:::info
Instructions: https://coderefinery.github.io/testing/locally/#exercise
Progress report... are you (mark with an 'o' or any letter):
If you have problems and/or want to talk, you can join the Zoom help room (information on how to join it has been shared in earlier emails).
:::
In the exercise, when we change the function to use "-", is there a way for pytest to check all asserts, i.e. not stop at the first one that fails?
Can this kind of testing significantly increase the complexity of the code, e.g. when dealing with fancy ML/DL models? Is it computationally demanding?
How does the testing work? Does it simply run all functions even when they aren't called?
pytest looks for functions whose names start with test_ or end with _test and calls them for you. If you tell pytest to "test a directory", it will look for all files whose names start or end with "test". In this sense, "registering" a test just requires you to name the function (and file) appropriately; see the sketch below.
How does it differ from "try"?
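To make the naming convention concrete, here is a minimal made-up sketch (the file and function names are just illustrations, not part of the lesson material):

```python
# test_example.py -- pytest collects this file because its name starts with "test_"

def add(a, b):
    # toy function under test; in a real project you would import it instead
    return a + b

def test_add():
    # collected and run automatically because the function name starts with "test_"
    assert add(1, 2) == 3
```

Running `pytest` in the directory containing this file finds and executes test_add, even though nothing calls it explicitly.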
Material: https://coderefinery.github.io/testing/continuous-integration/
Will be done as a walkthrough: https://coderefinery.github.io/testing/continuous-integration/#continuous-integration
Please post your questions and comments here:
7. Bonus question: why is the job on Johan's GitHub Actions failing?
8. Would it make sense to split the GitHub Actions job between the parts that succeeded and the parts that failed? They do different things, right?
    - It can, but since creating the coverage report requires running the tests, you would be running some things multiple times.
Material: https://coderefinery.github.io/testing/test-design/
:::info
Have a look at the list of exercises in this episode: https://coderefinery.github.io/testing/test-design/
Suggestion: start with the initial ones, which are simpler (and fundamental).
Progress report... are you (mark with an 'o' or any letter):
:::
Questions, continued:
The Design 3 solution throws a PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\Users\M\AppData\Local\Temp\tmpd6relgrg'.
How would I write a test to make sure the correct error is raised when an incorrect input is passed?
try:
    your_code()
    # reaching this line means no exception was raised
    assert False, "Exception not raised"
except ExceptionYoureExpecting:
    pass
except Exception:
    assert False, "Wrong exception raised"
But, for example, pytest provides a context manager called pytest.raises() (used in a with statement), which does that job in a better way than reimplementing it yourself; see the sketch below.
:::info
We hope you got an impression of what automated testing can be useful for and how it can be implemented for different programming languages. The lesson materials will continue to be available for reference.
Join https://coderefinery.zulipchat.com/ to ask further questions and meet the instructors and the rest of the team.
It will tie the whole workshop together: we will revisit almost all of the topics of the full workshop and apply them.
- Great, looking forward to it.
:::
Today was (vote for all that apply):
One good thing about today:
One thing to improve for next time:
Any other feedback?