
Creating a Jupyter Bundler Extension to Download Zipped Notebook and HTML Files

In the first version of the TM351 VM, we had a simple toolbar extension that would download a zipped ipynb file, along with an HTML version of the notebook, so it could be uploaded and previewed in the OU Open Design Studio. (Yes, I know, it would have been much better to have an nbviewer handler as an ODS plugin, but we don’t do that sort of tech innovation, apparently…)

Looking at updating the extension today for the latest version of Jupyter notebooks, I noticed the availability of custom bundler extensions that allow you to add additional tools to support notebook downloads and deployment (I’m not sure what deployment relates to?). A new download option registered this way appears in the notebook File -> Download as menu:

The extension is created as a python package:

# odszip/setup.py
from setuptools import setup

setup(name='odszip',
      version='0.0.1',
      description='Save Jupyter notebook and HTML in zip file with .nbk suffix',
      author='',
      author_email='',
      license='MIT',
      packages=['odszip'],
      zip_safe=False)
#odszip/odszip/download.py

# Copyright (c) The Open University, 2017
# Copyright (c) Jupyter Development Team.

# Distributed under the terms of the Modified BSD License.
# Based on: https://github.com/jupyter-incubator/dashboards_bundlers/

import os
import shutil
import tempfile

#THIS IS A REQUIRED FUNCTION
def _jupyter_bundlerextension_paths():
    '''API for notebook bundler installation on notebook 5.0+'''
    return [{
                'name': 'odszip_download',
                'label': 'ODSzip (.nbk)',
                'module_name': 'odszip.download',
                'group': 'download'
            }]


def make_download_bundle(abs_nb_path, staging_dir):
	'''
	Assembles the notebook and resources it needs, returning the path to a
	zip file bundling the notebook and its requirements if there are any,
	the notebook's path otherwise.
	:param abs_nb_path: The path to the notebook
	:param staging_dir: Temporary work directory, created and removed by the caller
	'''
    
	# Clean up bundle dir if it exists
	shutil.rmtree(staging_dir, True)
	os.makedirs(staging_dir)
	
	# Get name of notebook from filename
	notebook_basename = os.path.basename(abs_nb_path)
	notebook_name = os.path.splitext(notebook_basename)[0]
	
	# Add the notebook
	shutil.copy2(abs_nb_path, os.path.join(staging_dir, notebook_basename))
	
	# Include HTML version of file
	cmd='jupyter nbconvert --to html "{abs_nb_path}" --output-dir "{staging_dir}"'.format(abs_nb_path=abs_nb_path,staging_dir=staging_dir)
	os.system(cmd)

	zip_file = shutil.make_archive(staging_dir, format='zip', root_dir=staging_dir, base_dir='.')
	return zip_file

#THIS IS A REQUIRED FUNCTION       
def bundle(handler, model):
	'''
	Downloads a notebook as an HTML file and zips it with the notebook
	'''
	
	# Based on https://github.com/jupyter-incubator/dashboards_bundlers
	
	abs_nb_path = os.path.join(
		handler.settings['contents_manager'].root_dir,
		model['path']
	)
		
	notebook_basename = os.path.basename(abs_nb_path)
	notebook_name = os.path.splitext(notebook_basename)[0]
	
	tmp_dir = tempfile.mkdtemp()

	output_dir = os.path.join(tmp_dir, notebook_name)
	bundle_path = make_download_bundle(abs_nb_path, output_dir)
		
	handler.set_header('Content-Disposition', 'attachment; filename="%s"' % (notebook_name + '.nbk'))
	
	handler.set_header('Content-Type', 'application/zip')
	
	with open(bundle_path, 'rb') as bundle_file:
		handler.write(bundle_file.read())

	handler.finish()


	# We read and send synchronously, so we can clean up safely after finish
	shutil.rmtree(tmp_dir, True)

We can then create the Python package and install the extension, remembering to restart the Jupyter server for the extension to take effect.

#Install the ODSzip extension package
pip3 install --upgrade --force-reinstall ./odszip

#Enable the ODSzip extension
jupyter bundlerextension enable --py odszip.download  --sys-prefix

Experimenting With Sankey Diagrams in R and Python

A couple of days ago, I spotted a post by Oli Hawkins on Visualising migration between the countries of the UK which linked to a Sankey diagram demo of Internal migration flows in the UK.

One of the things that interests me about the Jupyter and RStudio centred reproducible research ecosystems is their support for libraries that generate interactive HTML/javascript outputs (charts, maps, etc) from a computational data analysis context such as R, or python/pandas, so it was only natural (?!) that I thought I should see how easy it would be to generate something similar from a code context.

In an R context, there are several libraries available that support the generation of Sankey diagrams, including googleVis (which wraps Google Chart tools), and a couple of packages that wrap d3.js – an original rCharts Sankey diagram demo by @timelyportfolio, and a more recent HTMLWidgets demo (sankeyD3).

Here’s an example of the evolution of my Sankey diagram in R using googleVis – the Rmd code is here and a version of the knitted HTML output is here.

The original data comprised a matrix describing population flows between English regions, Wales, Scotland and Northern Ireland. The simplest rendering of the data using the googleVis Sankey diagram generator produces an output that uses default colours to label the nodes.

Using the country code indicator at the start of each region/country identifier, we can generate a mapping from country to a country colour that can then be used to identify the country associated with each node.

One of the settings for the diagram allows the source (or target) node colour to determine the edge colour. We can also play with the values we use as node labels:

If we exclude edges relating to flow between regions of the same country, we get a diagram that is more reminiscent of Oli’s original (country level) demo. Note also that the charts that are generated are interactive – in this case, we see a popup that describes the flow along one particular edge.

If we associate a country with each region, we can group the data and sum the flow values to produce country level flows. Charting this produces a chart similar to the original inspiration.
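
For what it’s worth, here’s roughly what that grouping step looks like in pandas, assuming a flows dataframe with source, target and value columns and ONS style area codes whose first letter identifies the country; the column names and values are illustrative rather than taken from the actual dataset:

import pandas as pd

# Illustrative flows data: ONS-style area codes, where the first letter gives the country
flows = pd.DataFrame([
    ('E12000001', 'W92000004', 1000),
    ('E12000002', 'S92000003', 1500),
    ('S92000003', 'E12000007', 2000),
], columns=['source', 'target', 'value'])

country = {'E': 'England', 'W': 'Wales', 'S': 'Scotland', 'N': 'Northern Ireland'}

# Map each region to its country, then sum the flows at country level
flows['source_country'] = flows['source'].str[0].map(country)
flows['target_country'] = flows['target'].str[0].map(country)
country_flows = (flows.groupby(['source_country', 'target_country'])['value']
                      .sum().reset_index())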

As well as providing the code for generating each of the above Sankey diagrams, the Rmd file linked above also includes demonstrations for generating basic Sankey diagrams for the original dataset using the rCharts and htmlwidgets R libraries.

In order to provide a point of comparison, I also generated a python/pandas workflow using Jupyter notebooks and the ipysankeywidget package. (In fact, I generated the full workflow through the different chart versions first in pandas – I find it an easier language to think in than R! – and then used that workflow as a crib for the R version…)
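
By way of a flavour of the Python route, here’s a minimal sketch of rendering country level links with ipysankeywidget. Note that the constructor has changed between versions of the package (recent versions accept a list of link dicts directly), so treat this as indicative rather than a copy of the notebook code:

from ipysankeywidget import SankeyWidget

# Each link is a dict of source, target and flow value; illustrative country level flows
links = [
    {'source': 'England',  'target': 'Wales',    'value': 1000},
    {'source': 'England',  'target': 'Scotland', 'value': 1500},
    {'source': 'Scotland', 'target': 'England',  'value': 2000},
]

# Display the Sankey widget in a notebook cell
SankeyWidget(links=links)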

The original notebook is here and an example of the HTML version of it here. Note that I tried to save a rasterisation of the widgets but they don’t seem to have turned out that well…

The original (default) diagram looks like this:

and the final version, after a bit of data wrangling, looks like this:

Once again, all the code is provided in the notebook.

One of the nice things about all these packages is that they produce outputs that can be reused/embedded elsewhere, or that can be used as a first automatically produced draft of code that can be tweaked by hand. I’ll have more to say about that in a future post…

Reporting in a Repeatable, Parameterised, Transparent Way

Earlier this week, I spent a day chatting to folk from the House of Commons Library as a part of a temporary day-a-week-or-so bit of work I’m doing with the Parliamentary Digital Service.

During one of the conversations on matters loosely geodata-related with Carl Baker, Carl mentioned an NHS Digital data set describing the number of people on a GP Practice list who live within a particular LSOA (Lower Super Output Area). There are possible GP practice closures on the Island at the moment, so I thought this might be an interesting dataset to play with in that respect.

Another thing Carl is involved with is producing a regularly updated briefing on Accident and Emergency Statistics. Excel and QGIS templates do much of the work in producing the updated documents, so much of the data wrangling side of the report generation is automated using those tools. Supporting regular updating of briefings, as well as answering specific, ad hoc questions from MPs, producing debate briefings and other current topic briefings, seems to be an important Library activity.

As I’ve been looking for opportunities to compare different automation routes using things like Jupyter notebooks and RMarkdown, I thought I’d have a play with the GP list/LSOA data, showing how we might be able to use each of those two routes to generate maps showing the geographical distribution, across LSOAs at least, for GP practices on the Isle of Wight. This demonstrates several things, including: data ingest; filtering according to practice codes accessed from another dataset; importing a geoJSON shapefile; generating a choropleth map using the shapefile matched to the GP list LSOA codes.

The first thing I tried was using a python/pandas Jupyter notebook to create a choropleth map for a particular practice using the folium library. This didn’t take long to do at all – I’ve previously built an NHS admin database that lets me find practice codes associated with a particular CCG, such as the Isle of Wight CCG, as well as a notebook that generates a choropleth over LSOA boundaries, so it was simply a case of copying and pasting old bits of code and adding in the new dataset. You can see a rendered example of the notebook here (download).
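
As a flavour of the approach (rather than the actual notebook code), here’s a minimal sketch using the current folium.Choropleth API (older versions of folium used a choropleth() method on the Map object); the file, column and geoJSON property names are all assumptions:

import folium
import pandas as pd

# Hypothetical inputs: GP list sizes by LSOA (with a practice code column) and a
# geoJSON file of LSOA boundaries; all names here are assumptions for illustration
lsoa_counts = pd.read_csv('gp_practice_lsoa_counts.csv')  # PRACTICE_CODE, LSOA_CODE, PATIENTS

def practice_choropleth(practice_code):
    '''Choropleth of registered patients by LSOA for a single GP practice.'''
    df = lsoa_counts[lsoa_counts['PRACTICE_CODE'] == practice_code]
    m = folium.Map(location=[50.67, -1.3], zoom_start=11)  # centred on the Isle of Wight
    folium.Choropleth(
        geo_data='iow_lsoa_boundaries.geojson',
        data=df,
        columns=['LSOA_CODE', 'PATIENTS'],
        key_on='feature.properties.LSOA11CD',  # boundary property name (assumed)
        fill_color='PuBu',
    ).add_to(m)
    return m

practice_choropleth('A_PRACTICE_CODE')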

One thing you might notice from the rendered notebook is that I actually “widgetised” it, allowing users of the live notebook to select a particular practice and render the associated map.
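
With the map generation wrapped up in a function, as in the sketch above, widgetising it is more or less a one liner (again, a sketch rather than the actual notebook code):

from ipywidgets import interact

# Reuse the practice_choropleth() function from the sketch above; interact() builds
# a dropdown from the list of practice codes and redraws the map on selection
interact(practice_choropleth,
         practice_code=sorted(lsoa_counts['PRACTICE_CODE'].unique()));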

Whilst I find Jupyter notebooks provide a really friendly and accommodating environment for pulling together a recipe such as this, the report generation workflows are arguably still somewhat behind those supported by RStudio and in particular the knitr tools.

So what does an RStudio workflow have to offer? Using Rmarkdown (Rmd) we can combine text, code and code outputs in much the same way as we can in a Jupyter notebook, but with slightly more control over the presentation of the output.

[Image: Rmd report source in RStudio (dropbox/parlidata/rdemos)]

For example, from a single Rmd file we can knit an output HTML file that incorporates an interactive leaflet map, or a static PDF document.

It’s also possible to use a parameterised report generation workflow to generate separate reports for each practice. For example, applying this parameterised report generation script to a generic base template report will generate a set of PDF reports on a per practice basis for each practice on the Isle of Wight.

The bookdown package, which I haven’t played with yet, also looks promising for its ability to generate a single output document from a set of source documents. (I have a question in about the extent to which bookdown supports partially parameterised compound document creation).

Having started thinking about comparisons between Excel, Jupyter and RStudio workflows, possible next steps are:

  • to look for sensible ways of comparing the workflow associated with each,
  • the ramp-up skills required, and the blockers (cultural, as well as administrative / organisational – h/t @dasbarrett) associated with getting started with new tools such as Jupyter or RStudio, and
  • the various ways in which each tool/workflow supports: transparency; maintainability; extendibility; correctness; reuse; integration with other tools; ease and speed of use.

It would also be interesting to explore how much time and effort would actually be involved in trying to port a legacy Excel report generating template to Rmd or ipynb, what sorts of issues would be likely to arise, and what benefits Excel offers compared to Jupyter and RStudio workflows.

Python Code Stepper / Debugger / Tutor for Jupyter Notebooks – nbtutor

Whilst reviewing / scoping* possible programming editor environments for the new level 1 courses, one of the things I was encouraged to look at was Philip Guo’s interactive Python Tutor.

According to the original writeup (Philip J. Guo. Online Python Tutor: Embeddable Web-Based Program Visualization for CS Education. In Proceedings of the ACM Technical Symposium on Computer Science Education (SIGCSE), March 2013), the application has an HTML front end that calls on a backend debugger: “the Online Python Tutor backend takes the source code of a Python program as input and produces an execution trace as output. The backend executes the input program under supervision of the standard Python debugger module (bdb), which stops execution after every executed line and records the program’s run-time state.”

The current version of the online tutor supports a wider range of languages – Python, Java, JavaScript, TypeScript, Ruby, C, and C++ – which presumably have their own backend interpreter and use a common trace response format?

The tutor itself allows you to step through code snippets a line at a time, displaying a trace of the current variable values.

Another nice feature of the Online Python Tutor, though it was a bit ropey when I first tried it out a few months ago, is the shared session support, whereby a learner and a tutor can view the same session via a shared link, with an additional chat box allowing them to chat about the shared experience in real time.

The Online Python Tutor also allows URLs to saved programs (“tutorials”) to be generated and shared – for example, a link to the demo shown in the movie above. The code is actually passed via the URL.

One of the problems with the Online Python Tutor is that it requires a network connection so that the code can be passed to the interpreter back end, executed to generate the code trace, and then passed back to the browser. It didn’t take long for folk to start embedding the tutor in an iframe to give a pseudo-traceability experience in the notebook context, but now the Online Python Tutor-inspired nbtutor extension makes cell based tracing against the local python kernel possible**.

The nbtutor extension provides cell by cell tracing: when running a cell, all the code in the cell is executed, the trace returned, and then made available for visualising. Note that all variables in scope are displayed in the trace, even if they have been set in other cells outside of the nbtutor magic. (I’m not sure if there’s a setting that allows you to display just the variables that are referenced within the cell?) It is also possible to clear all variables in the global scope via a magic parameter, with a prompt to confirm that you really do want to clear out all those variable values.
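
For reference, nbtutor is driven by a cell magic; a minimal sketch of the sort of usage involved (the flag names may differ between versions, so check the nbtutor docs):

# In one cell: load the extension (after pip installing and enabling the nbextension)
%load_ext nbtutor

# In a following cell: trace the cell's execution line by line
%%nbtutor -r -f
# -r asks nbtutor to reset (clear) the global variables it traces;
# -f suppresses the confirmation prompt (flag names may vary between versions)
counter = 0
for step in range(3):
    counter = counter + step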

I’m not sure what the best way would be to go about framing nbtutor exercises in a Jupyter notebook context, but I note that the notebooks used to support the MPR213 (Programming and Information Technology) course from the Department of Mechanical and Aeronautical Engineering in the Faculty of Engineering, Built Environment and Information Technology at the University of Pretoria now include nbtutor examples.

Footnotes:

* A cynic might say scoping in the sense of not seriously considering anything other than the environments that had already been decided on before the course production process had really started… ;-) I also preferred BlockPy over Scratch, for example. My feeling was that if the OU was going to put developer effort in (the original claim was we wouldn’t have to put effort into Scratch, though of course we are because Scratch wasn’t quite right…) we could add more value to the OU and the community by getting involved with BlockPy, rather than a programming environment developed for primary school kids. Looking again at the “friendly” error messages that the BlockPy environment offers, I’m starting to wonder if elements of that could be reused for some IPython notebook magic…

** Again, I’m of the mind that were it 20 years ago, porting the Online Python Tutor to the Jupyter notebook context might have been something we’d have considered doing in the OU…

Displaying Differences in Jupyter Notebooks – nbdime / nbdiff

One of the challenges of working with Jupyter notebooks to date has been the question of diffing, spotting the differences between two versions of the same notebook. This made collaborative authoring and reviewing of notebooks a bit tricky. It also acted as a brake on using notebooks for student assessment. It’s easy enough to set an exercise using a templated notebook and then get students to work through it, but marking the completed notebook in return can be a bit fiddly. (The nbgrader system addresses this in part, but at the expense of having to use additional nbgrader formatting and markup.)

However, there’s ongoing effort now around nbdime (docs). Given past success in getting Jupyter notebook previews displayed in Github, it wouldn’t be unreasonable to think that the diff view might make it into that environment at some point too…

At the moment, nbdime works from the command line. It can produce a text diff in the console, or launch a notebook viewer in the browser that shows differences between two notebooks.

The differ works on a cell by cell basis and highlights changes and additions. (Extra emphasis on the changed text in a markdown cell doesn’t seem to work at the moment?)

[Image: nbdime web diff view – changed markdown cell]

If you change the contents of a code cell, or the outputs of a code cell have changed, those differences are identified too. (Note the extra emphasis in the code cell on the changed text, but not in the output.)

[Image: nbdime web diff view – changed code cell and output]

To improve readability, you can collapse the display of changed code cell output.

[Image: nbdime web diff view – collapsed code cell output]

Where cell outputs include graphical objects, differences to these are highlighted too.

[Image: nbdime web diff view – differences in graphical cell outputs]

(Whilst I note that Github has various tools for exploring the differences between two versions of the same image, I suspect that sort of comparison will be difficult to achieve inline in the notebook differencer.)

I suspect one common way of using nbdime will be to compare the current state of a notebook with a checkpointed version. (Jupyter notebooks autosave the current state of the notebook quite regularly. If you force a save, the current state is saved but a “checkpoint” version of the notebook is also saved to a hidden folder. If things go really wrong with your current notebook, you can restore it to the checkpointed version.)

If you’ve saved a checkpoint of a notebook, and want to compare the current (autosaved) version with it, you need to point to the checkpointed file in the checkpoint folder: nbdiff-web .ipynb_checkpoints/MY_FILE-checkpoint.ipynb MY_FILE.ipynb. It’d be nice if a switch could handle this automatically, eg nbdiff-web --compare-checkpoint MY_FILE.ipynb. (It would also be nice if the nbdiff command could force the notebook to autosave before a diff is run, but I’m not sure how that could be achieved?)
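
In the meantime, a throwaway helper is easy enough to sketch, assuming the default checkpoint naming convention used by the notebook server and that nbdiff-web is on the path:

import os
import subprocess

def nbdiff_checkpoint(nb_path):
    '''Compare a notebook with its checkpoint using nbdiff-web.

    Assumes the default checkpoint naming convention
    (.ipynb_checkpoints/NAME-checkpoint.ipynb).
    '''
    dirname, basename = os.path.split(nb_path)
    name, ext = os.path.splitext(basename)
    checkpoint = os.path.join(dirname, '.ipynb_checkpoints', name + '-checkpoint' + ext)
    subprocess.call(['nbdiff-web', checkpoint, nb_path])

nbdiff_checkpoint('MY_FILE.ipynb')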

It also strikes me that when restoring from a checkpoint, it might be possible to combine the restoration action with the differencer view so that you can decide which bits of the current notebook you might want to keep (i.e. essentially treat the differences between the current and checkpointed version as conflicts that need to be resolved?)

This is probably pushing things a bit far, but I also wonder if lightweight, inline, cell level differencing would be possible, given that each cell in a running notebook has an undo feature that goes back multiple steps?

Finally, a note about using the differencer to support marking. The differencer view is an HTML file, so whilst you can compare a student’s notebook with the original, you can’t edit their notebook directly in the differencer to add marks or feedback. (I really do need to have another play with nbgrader, I think…)

PS It’s also worth noting that SageMathCloud has a history slider that lets you run over different autosaved versions of a notebook, although differences are not highlighted.

PPS Thinks: what I’d like is a differencer that generates a new notebook with addition/deletion cells highlighted and colour styled so that I could retain – or delete – the cell and add cells of my own… Something akin to track changes, for example. That way I could run different cells, add annotations, etc etc (related issue).

A Recipe for Automatically Going From Data to Text to Reveal.js Slides

Over the last few years, I’ve experimented on and off with various recipes for creating text reports from tabular data sets (spreadsheet plugins are also starting to appear with a similar aim in mind). There are several issues associated with this, including:

  • identifying what data or insight you want to report from your dataset;
  • (automatically deriving the insights);
  • constructing appropriate sentences from the data;
  • organising the sentences into some sort of narrative structure;
  • making the sentences read well together.

Another approach to humanising the reporting of tabular data is to generate templated webpages that review and report on the contents of a dataset; this has certain similarities to dashboard style reporting, mixing tables and charts, although some simple templated text may also be generated to populate the page.
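
At its simplest, the templated text approach is little more than string formatting applied over the rows of a dataframe; a trivial sketch, with made-up column names and values purely for illustration:

import pandas as pd

# Made-up summary data purely for illustration
spend = pd.DataFrame([('Adult Social Care', 1234567), ("Children's Services", 987654)],
                     columns=['directorate', 'total'])

template = 'The {directorate} directorate accounted for £{total:,.0f} of spend this month.'
for _, row in spend.iterrows():
    # Fill the sentence template from each row of the dataframe
    print(template.format(directorate=row['directorate'], total=row['total']))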

In a business context, reporting often happens via Powerpoint presentations. Slides within the presentation deck may include content pulled from a templated spreadsheet, which itself may automatically generate tables and charts for such reuse from a new dataset. In this case, the recipe may look something like:

[Image: Excel data-to-Powerpoint slide workflow diagram (blockdiag source below)]

#render via: http://blockdiag.com/en/blockdiag/demo.html
{
  X1[label='macro']
  X2[label='macro']

  Y1[label='Powerpoint slide']
  Y2[label='Powerpoint slide']

   data -> Excel -> Chart -> X1 -> Y1;
   Excel -> Table -> X2 -> Y2 ;
}

In the previous couple of posts, the observant amongst you may have noticed I’ve been exploring a couple of components for a recipe that can be used to generate reveal.js browser based presentations from the 20% that account for the 80%.

The dataset I’ve been tinkering with is a set of monthly transparency spending data from the Isle of Wight Council. Recent releases have the form:

[Image: sample of Isle of Wight transparency spending data]

So as hinted at previously, it’s possible to use the following sort of process to automatically generate reveal.js slideshows from a Jupyter notebook with appropriately configured slide cells (actually, normal cells with an appropriate metadata element set) used as an intermediate representation.

[Image: Jupyter notebook data-to-text-to-slides workflow diagram (blockdiag source below)]

{
  X1[label="text"]
  X2[label="Jupyter notebook\n(slide mode)"]
  X3[label="reveal.js\npresentation"]

  Y1[label="text"]
  Y2[label="text"]
  Y3[label="text"]

  data -> "pandas dataframe" -> X1  -> X2 ->X3
  "pandas dataframe" -> Y1,Y2,Y3  -> X2 ->X3

  Y2 [shape = "dots"];
}
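
The notebook-as-intermediate-representation step of the process described above is straightforward to sketch with nbformat: generate the cells programmatically (for example, from the templated text), tag each one with the appropriate slideshow metadata, write the notebook out, and hand it to nbconvert. A toy sketch, rather than the actual generation code:

import nbformat
from nbformat.v4 import new_notebook, new_markdown_cell

cells = []

# Templated slide title (here just hard coded for illustration)
title = new_markdown_cell('## Spend by directorate')
title.metadata['slideshow'] = {'slide_type': 'slide'}      # also: 'subslide', 'fragment'
cells.append(title)

# Bullet revealed one at a time as a fragment
bullet = new_markdown_cell('* Adult Social Care: £1.2m')
bullet.metadata['slideshow'] = {'slide_type': 'fragment'}
cells.append(bullet)

nbformat.write(new_notebook(cells=cells), 'report_slides.ipynb')

# Then convert the intermediate notebook to a reveal.js presentation:
#   jupyter nbconvert --to slides report_slides.ipynb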

There’s an example slideshow based on October 2016 data here. Note that some slides have “subslides”, that is, slides underneath them, so watch the arrow indicators bottom left to keep track of when they’re available. Note also that the scrolling is a bit hit and miss – ideally, a new slide would always be scrolled to the top, and for fragments inserted into a slide one at a time the slide should scroll down to follow them.

The structure of the presentation is broadly as follows:

[Image: presentation structure sketch (blockdiag)]

For example, here’s a summary slide of the spends by directorate – note that we can embed charts easily enough. (The charts are styled using seaborn, so a range of alternative themes are trivially available). The separate directorate items are brought in one at a time as fragments.

[Image: example slide – spend by directorate summary]

The next slide reviews the capital versus revenue spend for a particular directorate, broken down by expenses type (corresponding slides are generated for all other directorates). (I also did a breakdown for each directorate by service area.)

The items listed are ordered by value, and taken together account for at least 80% of the spend in the corresponding area. Any further items contributing more than 5%(?) of the corresponding spend are also listed.
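
The selection heuristic itself is easy enough to sketch in pandas – something along the lines of the following (indicative rather than the actual code used), with the 80% cumulative threshold and the 5% individual-contribution rule as parameters:

import pandas as pd

def pareto_items(df, value_col='value', threshold=0.8, always_include=0.05):
    '''Rows accounting for at least `threshold` of the total, ordered by value,
    plus any further rows individually contributing more than `always_include`.'''
    df = df.sort_values(value_col, ascending=False).copy()
    share = df[value_col] / df[value_col].sum()
    cum = share.cumsum()
    # Keep rows until the cumulative share first reaches the threshold...
    keep = cum.shift(1).fillna(0) < threshold
    # ...plus any remaining rows that are individually significant
    return df[keep | (share > always_include)]

# e.g. pareto_items(spend_by_supplier, value_col='Amount')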

[Image: example slide – capital vs revenue spend by expenses type]

Notice that subslides are available going down from this slide, rather than across the main slides in the deck. This 1.5D structure means we can put an element of flexible narrative design into the presentation, giving the reader an opportunity to explore the data, but in a constrained way.

In this case, I generated subslides for each major contributing expenses type to the capital and revenue pots, and then added a breakdown of the major suppliers for that spending area.

[Image: example subslide – major suppliers for a spending area]

This just represents a first pass at generating a 1.5D slide deck from a tabular dataset. A Pareto (80/20) heuristic is used to try to prioritise the information displayed in order to account for 80% of spend in different areas, or other significant contributions.

Applying this principle repeatedly allows us to identify major spending areas, and then major suppliers within those spending areas.

The next step is to look at other ways of segmenting and structuring the data in order to produce reports that might actually be useful…

If you have any ideas, please let me know via the comments, or get in touch directly…

PS FWIW, it should be easy enough to run any other dataset that looks broadly like the example at the top through the same code with only a couple of minor tweaks…

An Alternative Way of Motivating the Use of Functions?

At the end of the first of the Curriculum Development Hackathon on Reproducible Research using Jupyter Notebooks held at BIDS in Berkeley, yesterday, discussion turned on whether we should include a short how-to on the use of interactive IPython widgets to support exploratory data analysis. This would provide workshop participants with an example of how to rapidly prototype a simple exploratory data analysis application such as an interactive chart, enabling them to explore a range of parameter values associated with the data being plotted in a convenient way.

In summarising how the ipywidgets interact() function works, Fernando Perez made a comment that made me wonder whether we could use the idea of creating simple interactive chart explorers as a way of motivating the use of functions.

More specifically, interact() takes a function and the set of parameters passed into that function, and creates a set of appropriate widgets for setting those parameters. Changing a widget setting runs the function with the currently selected parameter values. If the function returns a chart object, then the function essentially defines an interactive chart explorer application.

So one reason for creating a function is that you may be able to automatically convert it into an interactive application using interact().
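
As a minimal example of the pattern (not the notebook linked below), a simple plotting function becomes an interactive chart explorer just by passing it to interact():

from ipywidgets import interact
import matplotlib.pyplot as plt
import numpy as np

# A function that plots a chart for a given parameter setting...
# (assumes %matplotlib inline in a notebook)
def plot_power(exponent=2):
    x = np.linspace(0, 10, 100)
    plt.plot(x, x ** exponent)
    plt.show()

# ...becomes a simple interactive chart explorer, with a slider for the parameter
interact(plot_power, exponent=(1, 5))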

Here’s a quick first sketch notebook that tries to set up a motivating example: An Alternative Way of Motivating Functions?

PS To embed an image of a rendered widget in the notebook, select the Save notebook with snapshots option from the Widgets menu:

[Image: Widgets menu – Save notebook with snapshots option]

See also: Simple Interactive View Controls for pandas DataFrames Using IPython Widgets in Jupyter Notebooks