Rescuing Python Module Code From Cluttered Jupyter Notebooks

One of the ways I use Jupyter notebooks is to write stream-of-consciousness code.

Whilst the code doesn’t include formal tests (I never got into the swing of test-driven development, partly because I’m forever changing my mind about what I need a particular function to do!), the notebooks do contain a lot of implicit testing as I build up a function (see Programming in Jupyter Notebooks, via the Heavy Metal Umlaut for an example of some of the ways I use notebooks to iterate the development of code, either across several cells or within a single cell).

The resulting notebooks tend to be messy in a way that makes it hard to reuse the code they contain. In particular, there are lots of parameter-setting cells and code-fragment cells where I test specific things out, and then there are cells containing functions that pull those separate pieces together to perform a particular task.

So for example, in the fragment below, there is a cell where I’m trying something out, a cell where I put that thing into a function, and a cell where I test the function:

My notebooks also tend to include lots of markdown cells where I try to explain what I want to achieve, or things I still need to do. Some of these usefully document completed functions; others are rougher notes that relate to the development of an idea or act as notes-to-self.

As the notebooks get more cluttered, it gets harder to use them to perform a particular task. I can’t load the notebook into another notebook as a module because, as well as loading the functions in, all the directly run code cells will be loaded in and then executed.

Jupytext comes partly to the rescue here. As described in Exploring Jupytext – Creating Simple Python Modules Via a Notebook UI, we can tag a cell as active-ipynb to tell Jupytext that the cell should be active (that is, executable) only in the notebook:

In the case of the active-ipynb tag, if we generate a Python file from a notebook using Jupytext, the active-ipynb tagged code cells will be commented out. But that can still make for quite a messy Python file.
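For example, a code cell tagged active-ipynb comes through in the Jupytext py:percent representation along roughly these lines (the cell contents here are illustrative):

# %% tags=["active-ipynb"]
# # the tagged cell body arrives commented out in the .py file
# df = grab_timing_data()
# df.head()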

Via Marc Wouts comes this alternative solution for using an nbconvert template to strip out code cells tagged active-ipynb; I’ve also tweaked the template to omit cell count numbers and to include only markdown cells that are tagged docs.


echo """{%- extends 'python.tpl' -%}

{% block in_prompt %}
{% endblock in_prompt %}

{% block markdowncell scoped %}
{%- if \"docs\" in cell.metadata.tags -%}
{{ super() }}
{%- else -%}
{%- endif -%}
{% endblock markdowncell %}

{% block input_group -%}
{%- if \"active-ipynb\" in cell.metadata.tags  -%}
{%- else -%}
{{ super() }}
{%- endif -%}
{% endblock input_group %}""" > clean_py_file.tpl

Running nbconvert using this template over a notebook:

jupyter nbconvert "My Messy Notebook.ipynb" --to script --template clean_py_file.tpl

generates a My Messy Notebook.py file that includes code from cells not tagged active-ipynb, along with commented-out markdown from docs-tagged markdown cells. The result is a much cleaner Python module file.

With this workflow, I can revisit my old messy notebooks, tag the cells appropriately, and recover useful module code from them.

If I only ever generate (and never edit by hand) the module/Python files, then I can maintain the code from the messy notebook, as long as I remember to generate the Python file from the notebook via the clean_py_file.tpl template. Ideally, this would be done via a Jupyter content manager hook so that whenever the notebook was saved, the clean Python/module file would be automatically regenerated from it, as per Jupytext paired files.
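A minimal sketch of such a hook, added to jupyter_notebook_config.py (and assuming the classic notebook server’s post-save hook API), might look something like this:

import subprocess

def post_save(model, os_path, contents_manager, **kwargs):
    # Regenerate the clean .py module file whenever a notebook is saved
    if model['type'] != 'notebook':
        return  # ignore saves of plain files and directories
    subprocess.check_call(['jupyter', 'nbconvert', os_path,
                           '--to', 'script',
                           '--template', 'clean_py_file.tpl'])

c.FileContentsManager.post_save_hook = post_save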

Just by the by, we can load Python files whose filenames contain spaces as modules into another Python file or notebook using the formulation:

tsl = __import__('TSL Timing Screen Screenshot and Data Grabber')

and then call functions via tsl.myFunction() in the normal way. If we’re running in a Jupyter notebook setting (which may be a notebook UI loaded from a .py file under Jupytext) where the notebook includes the magics:

%load_ext autoreload
%autoreload 2

then whenever code is executed, any loaded modules that have changed since they were last imported are automatically reloaded first.
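Equivalently, the importlib module offers a slightly more conventional way of loading a module whose filename contains spaces:

import importlib

tsl = importlib.import_module('TSL Timing Screen Screenshot and Data Grabber')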

PS thinks… it’d be quite handy to have a simple script that would autotag all notebook cells as active-ipynb; or perhaps just have another template that ignores all code cells not tagged with something like active-py or module-py. That would probably make tag gardening in old notebooks a bit easier…
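A quick sketch of that autotagging idea using nbformat might look something like the following (the notebook filename is just a placeholder):

import nbformat

nb = nbformat.read('My Messy Notebook.ipynb', as_version=4)
for cell in nb.cells:
    if cell.cell_type == 'code':
        tags = cell.metadata.get('tags', [])
        if 'active-ipynb' not in tags:
            tags.append('active-ipynb')
        cell.metadata['tags'] = tags
nbformat.write(nb, 'My Messy Notebook.ipynb')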

Reporting in a Repeatable, Parameterised, Transparent Way

Earlier this week, I spent a day chatting to folk from the House of Commons Library as a part of a temporary day-a-week-or-so bit of work I’m doing with the Parliamentary Digital Service.

During one of the conversations on matters loosely geodata-related with Carl Baker, Carl mentioned an NHS Digital dataset describing the number of people on a GP practice list who live within a particular LSOA (Lower Layer Super Output Area). There are possible GP practice closures on the Island at the moment, so I thought this might be an interesting dataset to play with in that respect.

Another thing Carl is involved with is producing a regularly updated briefing on Accident and Emergency Statistics. Excel and QGIS templates do much of the work in producing the updated documents, so a large part of the data wrangling side of report generation is already automated using those tools. Supporting the regular updating of briefings, as well as answering specific ad hoc questions from MPs and producing debate briefings and other current topic briefings, seems to be an important Library activity.

As I’ve been looking for opportunities to compare different automation routes using things like Jupyter notebooks and RMarkdown, I thought I’d have a play with the GP list/LSOA data, showing how we might use each of those two routes to generate maps showing the geographical distribution, across LSOAs at least, of the patients registered with GP practices on the Isle of Wight. This demonstrates several things, including: data ingest; filtering according to practice codes accessed from another dataset; importing a geoJSON shapefile; and generating a choropleth map using the shapefile matched to the GP list LSOA codes.

The first thing I tried was using a Python/pandas Jupyter notebook to create a choropleth map for a particular practice using the folium library. This didn’t take long to do at all – I’ve previously built an NHS admin database that lets me find practice codes associated with a particular CCG, such as the Isle of Wight CCG, as well as a notebook that generates a choropleth over LSOA boundaries, so it was simply a case of copying and pasting old bits of code and adding in the new dataset. You can see a rendered example of the notebook here (download).
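The core of the recipe looks something like the following sketch (the file names, column names and practice code are all hypothetical placeholders, and the choropleth call assumes a recent folium API):

import pandas as pd
import folium

# GP list size by LSOA - a hypothetical CSV export of the NHS Digital dataset
df = pd.read_csv('gp_practice_lsoa_counts.csv')
practice = df[df['PRACTICE_CODE'] == 'J84004']  # hypothetical practice code

# Centre the map on the Isle of Wight
m = folium.Map(location=[50.67, -1.3], zoom_start=11)
folium.Choropleth(
    geo_data='iow_lsoa_boundaries.geojson',  # LSOA boundaries as geoJSON
    data=practice,
    columns=['LSOA_CODE', 'NUMBER_OF_PATIENTS'],
    key_on='feature.properties.LSOA11CD',
    fill_color='YlGnBu',
).add_to(m)
m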

One thing you might notice from the rendered notebook is that I actually “widgetised” it, allowing users of the live notebook to select a particular practice and render the associated map.
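The widgetisation itself can be as simple as wrapping the map-drawing code with ipywidgets’ interact, along these lines (the practice lookup and the map renderer are hypothetical stand-ins):

from ipywidgets import interact

# Map practice names to practice codes - placeholder values
practices = {'Example Practice A': 'J84004', 'Example Practice B': 'J84005'}

@interact(practice=practices)
def show_map(practice):
    # practice receives the selected code; a real version would build
    # and return the folium choropleth for that practice
    print('Rendering map for practice {}'.format(practice))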

Whilst I find Jupyter notebooks provide a really friendly and accommodating environment for pulling together a recipe such as this, the report generation workflows are arguably still somewhat behind those supported by RStudio, and in particular the knitr tools.

So what does an RStudio workflow have to offer? Using Rmarkdown (Rmd) we can combine text, code and code outputs in much the same way as we can in a Jupyter notebook, but with slightly more control over the presentation of the output.

For example, from a single Rmd file we can knit an output HTML file that incorporates an interactive leaflet map, or a static PDF document.

It’s also possible to use a parameterised report generation workflow to generate separate reports for each practice. For example, applying this parameterised report generation script to a generic base template report will generate one PDF report for each practice on the Isle of Wight.

The bookdown package, which I haven’t played with yet, also looks promising for its ability to generate a single output document from a set of source documents. (I have a question in about the extent to which bookdown supports partially parameterised compound document creation).

Having started thinking about comparisons between Excel, Jupyter and RStudio workflows, possible next steps are:

  • to look for sensible ways of comparing the workflows associated with each;
  • to identify the ramp-up skills required, and the blockers (including cultural blockers, as well as administrative/organisational ones, h/t @dasbarrett), associated with getting started with new tools such as Jupyter or RStudio; and
  • to explore the various ways in which each tool/workflow supports: transparency; maintainability; extendibility; correctness; reuse; integration with other tools; ease and speed of use.

It would also be interesting to explore how much time and effort would actually be involved in trying to port a legacy Excel report-generating template to Rmd or ipynb, what sorts of issues would be likely to arise, and what benefits Excel offers compared to Jupyter and RStudio workflows.