List of exercises

Full list

This is a list of all exercises and solutions in this lesson, mainly as a reference for helpers and instructors. This list is automatically generated from all of the other pages in the lesson. Any single teaching event will probably cover only a subset of these, depending on the participants' interests.

Organizing your projects

In organizing-projects.md:

Recording computational steps

In workflow-management.md:

Workflow-1: Workflow solution using Snakemake

How Snakemake works

Somebody wrote a Snakemake solution in the Snakefile:

# a list of all the books we are analyzing
DATA = glob_wildcards('data/{book}.txt').book

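# the first rule is the default target: it requests all statistics files and all plots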
rule all:
    input:
        expand('statistics/{book}.data', book=DATA),
        expand('plot/{book}.png', book=DATA)

# count words in one of our books
rule count_words:
    input:
        script='code/count.py',
        book='data/{file}.txt'
    output: 'statistics/{file}.data'
    shell: 'python {input.script} {input.book} > {output}'

# create a plot for each book
rule make_plot:
    input:
        script='code/plot.py',
        book='statistics/{file}.data'
    output: 'plot/{file}.png'
    shell: 'python {input.script} --data-file {input.book} --plot-file {output}'

We can see that Snakemake uses a declarative style: Snakefiles contain rules that relate targets (output) to dependencies (input) and commands (shell).
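
For example, assuming the data directory contained a hypothetical book data/dracula.txt, Snakemake would fill in the wildcards and run these two commands, counting before plotting:

python code/count.py data/dracula.txt > statistics/dracula.data
python code/plot.py --data-file statistics/dracula.data --plot-file plot/dracula.png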

Steps:

  1. Clone the example to your computer: $ git clone https://github.com/coderefinery/word-count.git

  2. Study the Snakefile. How does it know what to do first and what to do next?

  3. Try to run it. Since Snakemake version 5.11, one needs to specify the number of cores (or jobs) using -j, --jobs, or --cores:

    $ snakemake --delete-all-output -j 1
    $ snakemake -j 1
    

    The --delete-all-output part makes sure that we remove all generated files before we start.

  4. Try running snakemake again; observe that it refuses to rerun all steps, and discuss why:

    $ snakemake -j 1
    
    Building DAG of jobs...
    Nothing to be done (all requested files are present and up to date).
    
  5. Make a tiny modification to the plot.py script, run $ snakemake -j 1 again, and observe how it only re-runs the plot steps (a dry-run sketch for previewing this is shown after this list).

  6. Make a tiny modification to one of the books and run $ snakemake -j 1 again and observe how it only regenerates files for this book.

  7. Discuss possible advantages compared to a scripted solution.

  8. Question for R developers: Imagine you want to rewrite the two Python scripts and use R instead. Which lines in the Snakefile would you have to modify so that it uses your R code?

  9. If you make changes to the Snakefile, validate it using $ snakemake --lint.
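
A dry run can help with steps 4-6: it previews which jobs Snakemake considers out of date, without executing anything (a minimal sketch using the standard -n/--dry-run flag):

$ snakemake -n -j 1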

Recording dependencies

In dependencies.md:

Dependencies-1: Time-capsule of dependencies

Situation: 5 students (A, B, C, D, E) wrote code that depends on a couple of libraries. They uploaded their projects to GitHub. We now travel 3 years into the future, find their GitHub repositories, and try to re-run their code before adapting it.

Answer in the collaborative document:

  • Which version do you expect to be easiest to re-run? Why?

  • What problems do you anticipate in each solution?

    A: You find a couple of library imports across the code but that’s it.

    B: The README file lists which libraries were used but does not mention any versions.

    C: You find an environment.yml file with:

    name: student-project
    channels:
      - conda-forge
    dependencies:
      - scipy
      - numpy
      - sympy
      - click
      - python
      - pip
      - pip:
        - git+https://github.com/someuser/someproject.git@master
        - git+https://github.com/anotheruser/anotherproject.git@master
    

    D: You find an environment.yml file with:

    name: student-project
    channels:
      - conda-forge
    dependencies:
      - scipy=1.3.1
      - numpy=1.16.4
      - sympy=1.4
      - click=7.0
      - python=3.8
      - pip
      - pip:
        - git+https://github.com/someuser/someproject.git@d7b2c7e
        - git+https://github.com/anotheruser/anotherproject.git@sometag
    

    E: You find an environment.yml file with:

    name: student-project
    channels:
      - conda-forge
    dependencies:
      - scipy=1.3.1
      - numpy=1.16.4
      - sympy=1.4
      - click=7.0
      - python=3.8
      - someproject=1.2.3
      - anotherproject=2.3.4
    

In dependencies.md:

Dependencies-2: Create a time-capsule for the future

Now we will demonstrate creating our own time-capsule and sharing it with the future world. If we asked you now which dependencies your project is using, what would you answer? How would you find out? And how would you communicate this information?

We start from an existing conda environment. Try this either with your own project or inside the “coderefinery” conda environment. For demonstration purposes, you can also create an environment with:

$ conda env create -f myenv.yml

where the file myenv.yml could contain some Python libraries with unspecified versions:

name: myenv
channels:
  - conda-forge
  - defaults
dependencies:
  - python=3.10
  - numpy
  - pandas
  - seaborn

After creating the environment, we can activate it with:

$ conda activate myenv

Now we can freeze the environment into a new YAML file with:

$ conda env export > environment.yml

Have a look at the generated file and discuss what you see.
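
To see what freezing added, one can compare the hand-written file with the exported one (a minimal sketch, assuming both files are in the current directory):

$ diff myenv.yml environment.yml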

In the future — or on a different computer — we can re-create this environment with:

$ conda env create -f environment.yml

What happens instead when you run the following command?

$ conda env export --from-history > environment_fromhistory.yml

More information: https://docs.conda.io/en/latest/

See also: https://github.com/mamba-org/mamba

Recording environments

In environments.md:

Containers-1: Time travel

Scenario: A researcher has written and published their research code, which requires a number of libraries and system dependencies. They ran their code on a Linux computer (Ubuntu). One very nice thing they did was to also publish a container image with all dependencies included, as well as the definition file (below) used to create the container image.

Now we travel 3 years into the future and want to reuse their work and adapt it for our data. However, the container registry where they uploaded the container image no longer exists. But luckily we still have the definition file (below)! From this we should be able to create a new container image.

  • Can you anticipate problems using the definition file 3 years after its creation? Which possible problems can you point out?

  • Discuss possible take-aways for creating more reusable containers.

Bootstrap: docker
From: ubuntu:latest

%post
    # Set environment variables
    export VIRTUAL_ENV=/app/venv

    # Install system dependencies and Python 3
    apt-get update && \
    apt-get install -y --no-install-recommends \
        gcc \
        libgomp1 \
        python3 \
        python3-venv \
        python3-distutils \
        python3-pip && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

    # Set up the virtual environment
    python3 -m venv $VIRTUAL_ENV
    . $VIRTUAL_ENV/bin/activate

    # Install Python libraries
    pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir -r /app/requirements.txt

%files
    # Copy project files
    ./requirements.txt /app/requirements.txt
    ./app.py /app/app.py
    # Copy data
    /home/myself/data /app/data
    # Workaround to fix dependency on fancylib
    /home/myself/fancylib /usr/lib/fancylib

%environment
    # Set the environment variables
    export LANG=C.UTF-8 LC_ALL=C.UTF-8
    export VIRTUAL_ENV=/app/venv

%runscript
    # Activate the virtual environment
    . $VIRTUAL_ENV/bin/activate
    # Run the application
    python /app/app.py

In environments.md:

(optional) Containers-2: Installing the impossible

When you are missing privileges for installing certain software tools, containers can come in handy. Here we build a Singularity/Apptainer container that installs the cowsay and lolcat Linux programs.

  1. Make sure you have apptainer installed:

    $ apptainer --version
    
  2. Make sure you set the apptainer cache and temporary folders.

    $ mkdir ./cache/
    $ mkdir ./temp/
    $ export APPTAINER_CACHEDIR="./cache/"
    $ export APPTAINER_TMPDIR="./temp/"
    
  3. Build the container from the cowsay.def definition file:

    $ apptainer build cowsay.sif cowsay.def
    
  4. Let’s test the container by opening a shell inside it (an alternative using apptainer run is sketched after this list):

    $ apptainer shell cowsay.sif
    
  5. We can verify the installation:

    $ cowsay "Hello world!" | lolcat
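
As a follow-up, the container can also be run directly instead of opening a shell (a sketch, assuming the cowsay.def definition file provides a %runscript):

$ apptainer run cowsay.sif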
    

In environments.md:

(optional) Containers-3: Explore two really useful Docker images

You can try the examples below if you have Docker installed. If you have Singularity/Apptainer and not Docker, the goal of the exercise can instead be to run the Docker images through Singularity/Apptainer (a sketch of this is shown below).
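
A minimal sketch of that route, assuming Apptainer is installed (running the RStudio server itself may need extra options beyond what is shown here):

$ apptainer pull rstudio.sif docker://rocker/rstudio
$ apptainer shell rstudio.sif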

  1. Run a specific version of RStudio:

    $ docker run --rm -p 8787:8787 -e PASSWORD=yourpasswordhere rocker/rstudio
    

    Then open your browser at http://localhost:8787 and log in with the username rstudio and the password “yourpasswordhere” set in the previous command.

    If you want to try an older version you can check the tags at https://hub.docker.com/r/rocker/rstudio/tags and run for example:

    $ docker run --rm -p 8787:8787 -e PASSWORD=yourpasswordhere rocker/rstudio:3.3
    
  2. Run a specific version of Anaconda3 from https://hub.docker.com/r/continuumio/anaconda3:

    $ docker run -i -t continuumio/anaconda3 /bin/bash
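
Once inside the Anaconda3 container from step 2, a couple of optional checks confirm which Python and package versions the image provides (a sketch; these are ordinary commands available in the image):

python --version
conda list | head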