List of exercises

Full list

This is a list of all exercises and solutions in this lesson, mainly as a reference for helpers and instructors. It is automatically generated from all of the other pages in the lesson. Any single teaching event will probably cover only a subset of these, depending on the audience's interests.

Testing locally

In locally.md:

Local-1: Create a minimal example (15 min)

In this exercise, we will create a minimal example using pytest, run the test, and show what happens when a test breaks.

  1. Create a new directory and change into it:

    $ mkdir local-testing-example
    $ cd local-testing-example
    
  2. Create an example file and paste the following code into it:

Create example.py with content

def add(a, b):
    return a + b

def test_add():
    assert add(2, 3) == 5
    assert add('space', 'ship') == 'spaceship'

This code contains one genuine function and a test function. pytest finds any functions beginning with test_ and treats them as tests.

  3. Run the test:

$ pytest -v example.py

============================================================ test session starts =================================
platform linux -- Python 3.7.2, pytest-4.3.1, py-1.8.0, pluggy-0.9.0 -- /home/user/pytest-example/venv/bin/python3
cachedir: .pytest_cache
rootdir: /home/user/pytest-example, inifile:
collected 1 item

example.py::test_add PASSED

========================================================= 1 passed in 0.01 seconds ===============================

Yay! The test passed!

Hint for participants trying this inside Spyder or IPython: try !pytest -v example.py.

  4. Let us break the test!

Introduce a code change which breaks the code and check whether our test detects the change:
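
For example, we could turn the addition into a subtraction (just one possible change; it is what produces the -1 in the output below):

def add(a, b):
    return a - b  # deliberately broken: should be a + b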

$ pytest -v example.py

============================================================ test session starts =================================
platform linux -- Python 3.7.2, pytest-4.3.1, py-1.8.0, pluggy-0.9.0 -- /home/user/pytest-example/venv/bin/python3
cachedir: .pytest_cache
rootdir: /home/user/pytest-example, inifile:
collected 1 item

example.py::test_add FAILED

================================================================= FAILURES =======================================
_________________________________________________________________ test_add _______________________________________

    def test_add():
>       assert add(2, 3) == 5
E       assert -1 == 5
E         --1
E         +5

example.py:6: AssertionError
========================================================= 1 failed in 0.05 seconds ==============

Notice how pytest is smart and includes context: the line that failed and the values of the relevant variables.

In locally.md:

(optional) Local-2: Create a test that considers numerical tolerance (10 min)

Let’s see an example where the test has to be more clever in order to avoid a false negative.

In the above exercise we compared integers. In this optional exercise we want to learn how to compare floating point numbers, since they are trickier (see also “What Every Programmer Should Know About Floating-Point Arithmetic”).

The following test will fail, which might be surprising. Try it out:

def add(a, b):
    return a + b

def test_add():
    assert add(0.1, 0.2) == 0.3

Your goal: find a more robust way to test this addition.
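
One common solution is pytest.approx, which compares floating point numbers within a tolerance (a minimal sketch):

import pytest

def test_add():
    # pytest.approx compares within a small relative tolerance instead
    # of requiring exact equality of floating point numbers
    assert add(0.1, 0.2) == pytest.approx(0.3)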

In locally.md:

Automated testing

In continuous-integration.md:

Exercise CI-1: Create and use a continuous integration workflow on GitHub or GitLab

In this exercise, we will:

  • A. Create and add code to a repository on GitHub/GitLab (or, alternatively, fork and clone an existing example repository)

  • B. Set up tests with GitHub Actions/GitLab CI

  • C. Find a bug in our repository and open an issue to report it

  • D. Fix the bug on a bugfix branch and open a pull request (GitHub)/merge request (GitLab)

  • E. Merge the pull/merge request and see how the issue is automatically closed.

  • F. Create a test to increase our code coverage.

Test design

In test-design.md:

Design-1: Design a test for a function that receives a number and returns a number

def factorial(n):
    """
    Computes the factorial of n.
    """
    if n < 0:
        raise ValueError('received negative input')
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

Discussion point: The factorial grows very rapidly. What happens if you pass a large number as argument to the function?
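
One way such a test could look, as a sketch (note how pytest.raises checks that the error is actually raised):

import pytest

def test_factorial():
    assert factorial(0) == 1  # edge case: empty product
    assert factorial(1) == 1
    assert factorial(5) == 120
    # negative input should raise ValueError
    with pytest.raises(ValueError):
        factorial(-1)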

In test-design.md:

Design-2: Design a test for a function that receives two strings and returns a number

def count_word_occurrence_in_string(text, word):
    """
    Counts how often word appears in text.
    Example: if text is "one two one two three four"
             and word is "one", then this function returns 2
    """
    words = text.split()
    return words.count(word)
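
A possible test, as a sketch (the example values come from the docstring):

def test_count_word_occurrence_in_string():
    text = "one two one two three four"
    assert count_word_occurrence_in_string(text, "one") == 2
    assert count_word_occurrence_in_string(text, "three") == 1
    assert count_word_occurrence_in_string(text, "five") == 0
    # empty text contains no words at all
    assert count_word_occurrence_in_string("", "one") == 0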

In test-design.md:

Design-3: Design a test for a function which reads a file and returns a number

def count_word_occurrence_in_file(file_name, word):
    """
    Counts how often word appears in file file_name.
    Example: if file contains "one two one two three four"
             and word is "one", then this function returns 2
    """
    count = 0
    with open(file_name, 'r') as f:
        for line in f:
            words = line.split()
            count += words.count(word)
    return count
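
One approach is to let the test create its own input file, for example with pytest's tmp_path fixture (a sketch):

def test_count_word_occurrence_in_file(tmp_path):
    # tmp_path is a pytest fixture providing a temporary directory,
    # so the test does not depend on files already in the repository
    test_file = tmp_path / "sample.txt"
    test_file.write_text("one two one two three four\none more line\n")
    assert count_word_occurrence_in_file(str(test_file), "one") == 3
    assert count_word_occurrence_in_file(str(test_file), "five") == 0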

In test-design.md:

Design-4: Design a test for a function with an external dependency

This one is not easy to test because the function has an external dependency.

def check_reactor_temperature(temperature_celsius):
    """
    Checks whether temperature is above max_temperature
    and returns a status.
    """
    from reactor import max_temperature
    if temperature_celsius > max_temperature:
        status = 1
    else:
        status = 0
    return status
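
One way around the dependency is pytest's monkeypatch fixture, which can replace the attribute for the duration of the test (a sketch; it assumes the reactor module is importable in the test environment):

def test_check_reactor_temperature(monkeypatch):
    import reactor

    # replace reactor.max_temperature while this test runs
    monkeypatch.setattr(reactor, "max_temperature", 100)
    assert check_reactor_temperature(99) == 0   # below the threshold
    assert check_reactor_temperature(100) == 0  # at the threshold: not above
    assert check_reactor_temperature(101) == 1  # above the threshold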

In test-design.md:

Design-5: Design a test for a method of a mutable class

class Pet:
    def __init__(self, name):
        self.name = name
        self.hunger = 0
    def go_for_a_walk(self):  # <-- how would you test this function?
        self.hunger += 1
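
A sketch of a test: create an object, call the method, and check the state change:

def test_go_for_a_walk():
    felix = Pet("Felix")
    assert felix.hunger == 0
    felix.go_for_a_walk()
    # the walk should have increased the hunger by one
    assert felix.hunger == 1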

In test-design.md:

Design-6: Experience test-driven development

Write a test before writing the function! You can decide for yourself what your unwritten function should do, but as a suggestion it can be based on FizzBuzz - i.e. a function that:

  • takes an integer argument

  • for arguments that are multiples of three, returns “Fizz”

  • for arguments that are multiples of five, returns “Buzz”

  • for arguments that are multiples of both three and five, returns “FizzBuzz”

  • fails for non-integer arguments, and for integer arguments that are zero or negative

  • otherwise returns the integer itself

When writing the tests, consider the different ways that the function could and should fail.

After you have written the tests, implement the function and run the tests until they pass.
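
As a starting point, the tests written before the implementation might look like this sketch (the function name fizzbuzz and the exception types are our own choices, not given by the exercise):

import pytest

def test_fizzbuzz():
    assert fizzbuzz(1) == 1
    assert fizzbuzz(2) == 2
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    # zero and negative integers are invalid input
    with pytest.raises(ValueError):
        fizzbuzz(0)
    with pytest.raises(ValueError):
        fizzbuzz(-3)
    # non-integer arguments are invalid input
    with pytest.raises(TypeError):
        fizzbuzz("three")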

In test-design.md:

Design-7: Write two different types of tests for randomness

Consider the code below, which simulates playing Yahtzee using random numbers. How would you go about testing it?

Try to write two types of tests (a sketch of both follows the code below):

  • a unit test for the roll_dice function. Since it uses random numbers, you will need to set the random seed, pre-calculate what sequence of dice throws you get with that seed, and use that in your test.

  • a test of the yahtzee function which considers the statistical probability of obtaining a “Yahtzee” (5 dice with the same value after three throws), which is around 4.6%. This test will be an integration test since it tests multiple functions including the random number generator itself.

import random
from collections import Counter


def roll_dice(num_dice):
    return [random.choice([1, 2, 3, 4, 5, 6]) for _ in range(num_dice)]


def yahtzee():
    """
    Play yahtzee with 5 6-sided dice and 3 throws.
    Collect as many of the same dice side as possible.
    Returns the number of same sides.
    """

    # first throw
    result = roll_dice(5)
    most_common_side, how_often = Counter(result).most_common(1)[0]

    # we keep the most common side
    target_side = most_common_side
    num_same_sides = how_often
    if num_same_sides == 5:
        return 5

    # second and third throw
    for _ in [2, 3]:
        throw = roll_dice(5 - num_same_sides)
        num_same_sides += Counter(throw)[target_side]
        if num_same_sides == 5:
            return 5

    return num_same_sides


if __name__ == "__main__":
    num_games = 100

    winning_games = list(
        filter(
            lambda x: x == 5,
            [yahtzee() for _ in range(num_games)],
        )
    )

    print(f"out of the {num_games} games, {len(winning_games)} got a yahtzee!")

In test-design.md:

Design-8: Design (but not write) an end-to-end test for the uniq program

To have a tangible example, let us consider the uniq command. This command can read a file or an input stream and remove consecutive repeated lines. The program behind uniq was written by somebody else and probably contains some functions, but we will not look inside; we regard it as a “black box”.

If we have a file called repetitive-text.txt containing:

(all together now) all together now
(all together now) all together now
(all together now) all together now
(all together now) all together now
(all together now) all together now
another line
another line
another line
another line
intermission
more repetition
more repetition
more repetition
more repetition
more repetition
(all together now) all together now
(all together now) all together now

… then feeding this input file to uniq like this:

$ uniq < repetitive-text.txt

… will produce the following output with repetitions removed:

(all together now) all together now
another line
intermission
more repetition
(all together now) all together now

How would you write an end-to-end test for uniq?
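
The exercise only asks for a design, but for reference, one shape such a test could take in Python (a sketch; it assumes uniq is on the PATH and repetitive-text.txt exists as shown above):

import subprocess

def test_uniq_removes_consecutive_repetition():
    # feed the known input file to uniq and compare against the
    # expected output shown above
    with open("repetitive-text.txt") as f:
        result = subprocess.run(
            ["uniq"], stdin=f, capture_output=True, text=True, check=True
        )
    expected = "\n".join([
        "(all together now) all together now",
        "another line",
        "intermission",
        "more repetition",
        "(all together now) all together now",
    ]) + "\n"
    assert result.stdout == expected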

In test-design.md:

Design-9: More end-to-end testing

  • Now imagine a program which reads numbers and produces some (floating point) numbers as output. How would you test that? (see the sketch after this list)

  • How would you test a program end-to-end which produces images?
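
For the floating point case, one common approach is to compare against stored reference output within a tolerance, for example with numpy (a sketch; the file names here are hypothetical):

import numpy as np

def test_numeric_output():
    # compare computed numbers against stored reference numbers with a
    # tolerance instead of exact equality
    computed = np.loadtxt("computed_output.txt")
    reference = np.loadtxt("reference_output.txt")
    assert np.allclose(computed, reference, rtol=1e-6)

For images, a similar idea applies: compare pixel data against a reference image and allow small differences, rather than comparing the files byte by byte.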

In test-design.md:

Design-10: Create an actual end-to-end test

Often, you can include tests that run your whole workflow or program. For example, you might include sample data and check the output against what you expect. (Including sample data is a great idea anyway, so this helps a lot!)

We’ll use the word-count example repository https://github.com/coderefinery/word-count.

As a reminder, you can run the script like this to get some output, which prints to standard output (the terminal):

$ python3 code/count.py data/abyss.txt

Your goal is to make a test that can run this and let you know whether it succeeds. You could use Python, or you could use shell scripting. You can test whether these two lines are in the output: “the 4044” and “and 2807”.

Python hint: subprocess.check_output will run a command and return its output as a string.

Bash hint: COMMAND | grep "PATTERN" (“pipe to grep”) will be true if the pattern is in the command’s output.
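
Following the Python hint, a minimal end-to-end test could look like this sketch (it assumes the test is run from the root of the word-count repository):

import subprocess

def test_count_output():
    # run the whole program on the sample data and check that the
    # expected lines appear in its output
    output = subprocess.check_output(
        ["python3", "code/count.py", "data/abyss.txt"], text=True
    )
    assert "the 4044" in output
    assert "and 2807" in output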

(Optional) Full-cycle collaborative workflow

In full-cycle-ci.md:

FullCI-1: Create and use a continuous integration workflow on GitHub or GitLab with pull requests and issues

This is an expanded version of the automated testing demonstration. The exercise is performed in a collaborative circle within the exercise group (breakout room).

The exercise takes 20-30 minutes.

In this exercise, everybody will:

  • A. Create a repository on GitHub/GitLab (everybody should use a different repository name for their repository)

  • B. Commit code to the repository and set up tests with GitHub Actions/GitLab CI

  • C. Everybody will find a bug in their repository and open an issue in their repository

  • D. Then each one will clone the repository of one of their exercise partners, fix the bug, and open a pull request (GitHub)/merge request (GitLab)

  • E. Everybody then merges their co-worker’s change

In full-cycle-ci.md:

(optional) FullCI-2: Add a license file to the previous exercise’s repository

In the Social coding and open software lesson we learn how important it is to add a LICENSE file.

Your goal:

  • You discover that your coworker’s repository does not have a LICENSE file.

  • Open an issue and suggest a LICENSE.

  • Then add a LICENSE via a pull/merge request, referencing the issue number.