Fragment: Keeping an Eye on What’s Trackable, Where, and When — Tools for Data Protection Officers as well as the Rest of Us?

Way back when, in the early days of FOI and then “open data”, I naively believed that open data and FOI contact points in organisations would act as advocates for those of us outside the organisation trying to get access to information from inside it. The reality seems to be that, as appointees and employees of the organisation, those individuals instead become gatekeepers, often looking for ways of defending the organisation against such requests rather than trying to open the organisation up to them.

When it comes to those appointed to oversee data protection and data privacy issues, I would like to think that whoever is appointed to such a role sees it as that of an advocate for those who work for, or come into contact with, the organisation, as well as an opportunity to aggressively defend the rights of those outside the organisation against the unnecessary and disproportionate collection, processing and sharing of data about them by the organisation. That said, I suspect in many cases the role is more about trying to make sure the company doesn’t get sued under GDPR.

Whilst it would also be nice to think that the data protection person is a geek w/ skillz who can hack their way around an organisation’s systems and websites, poking around to find things that shouldn’t be there and demonstrating how other things can be potentially misused, I suspect they aren’t.

So do we need tools for such officers to keep tabs on their organisation, or perhaps tools to help privacy advocates provide oversight of them?

Poking around traffic generated as I visited the OU VLE a week or two ago, I saw a couple of requests I thought were unnecessary and raised an internal query about them. But it also got me thinking…

The requests appear to be made from tags loaded into the web page using the Google Tag Manager. The Google Tag Manager code appears to be delivered via a gtm.js script with the structure:

{
  "resource": {
    "version": "XXX",
    "macros": [ {} ],
    "tags": [ {} ],
    "predicates": [{}],
    "rules": [ {} ]
  },
  "runtime": [ [], [] ]
}

followed by a chunk of JavaScript code.

The gtm.js file includes rules of the form [["if",1,31],["unless",34,35],["add",51]] that appear to index into the predicates list in the conditional parts (logically OR’d tests?) and then add a particular tag, which may reference a macro, when the condition is met.

Predicates take the form:

      "function":"_re",
      "arg0":["macro",0],
      "arg1":"^http(s)?:\\\/\\\/(www\\.)?open.ac.uk\\\/?(index.html)?($|\\?)",
      "ignore_case":true
    }

Tags can take a variety of forms, including:

      {
      "function":"__html",
      "once_per_event":true,
      "vtp_html":"\n\u003Cscript type=\"text\/gtmscript\"\u003E!function(b,e,f,g,a,c,d){b.fbq||(a=b.fbq=function(){a.callMethod?a.callMethod.apply(a,arguments):a.queue.push(arguments)},b._fbq||(b._fbq=a),a.push=a,a.loaded=!0,a.version=\"2.0\",a.queue=[],c=e.createElement(f),c.async=!0,c.src=g,d=e.getElementsByTagName(f)[0],d.parentNode.insertBefore(c,d))}(window,document,\"script\",\"https:\/\/connect.facebook.net\/en_US\/fbevents.js\");fbq(\"init\",\"870490019710405\");fbq(\"track\",\"PageView\");\u003C\/script\u003E\n\u003Cnoscript\u003E\n\u003Cimg height=\"1\" width=\"1\" src=\"https:\/\/www.facebook.com\/tr?id=870490019710405\u0026amp;ev=PageView\n\u0026amp;noscript=1\"\u003E\n\u003C\/noscript\u003E\n\n\n",
      "vtp_supportDocumentWrite":false,
      "vtp_enableIframeMode":false,
      "vtp_enableEditJsMacroBehavior":false,
      "tag_id":51
    }

And macros take the form:

{
      "function":"__gas",
      "vtp_cookieDomain":"auto",
      "vtp_doubleClick":false,
      "vtp_setTrackerName":false,
      "vtp_useDebugVersion":false,
      "vtp_useHashAutoLink":false,
      "vtp_decorateFormsAutoLink":false,
      "vtp_enableLinkId":false,
      "vtp_enableEcommerce":false,
      "vtp_trackingId":"UA-4391747-17",
      "vtp_enableRecaptchaOption":false,
      "vtp_enableUaRlsa":false,
      "vtp_enableUseInternalVersion":false
    }

So what I’m wondering is: is there an offline, static analyser for gtm.js scripts that would allow someone to point to a website from which a gtm.js script can be downloaded and then lets them generate human readable reports that:

  • identify in general which trackers are loaded by which rules on which events with what arguments; and
  • identify which trackers are loaded by which rules on which events with what arguments for a specific URL.

This would then allow a university data protection officer, for example, or a student, to provide a URL, such as a URL into the VLE, and get a simple, statically generated report back that shows what trackers are loaded when visiting that environment.
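A minimal sketch of the sort of report generator I have in mind might look something like the following. It assumes the resource object has been copied out of a downloaded gtm.js file into a local gtm.json file (the filename is just a placeholder), and that the numbers in each add clause index into the tags list (they may instead need matching against tag_id values):

import json

# Load the "resource" object copied out of a downloaded gtm.js file
# (the filename is just a placeholder for wherever you saved it)
with open('gtm.json') as f:
    resource = json.load(f)['resource']

predicates = resource.get('predicates', [])
tags = resource.get('tags', [])
rules = resource.get('rules', [])

def describe_predicate(ix):
    # Render a predicate as a crude, human readable condition string
    p = predicates[ix]
    return '{} {} {}'.format(p.get('arg0'), p.get('function'), p.get('arg1'))

# For each rule, report the conditions and the tags it adds
for i, rule in enumerate(rules):
    print('Rule {}:'.format(i))
    for clause in rule:
        kind, ixs = clause[0], clause[1:]
        if kind in ('if', 'unless'):
            for ix in ixs:
                print('  {}: {}'.format(kind, describe_predicate(ix)))
        elif kind == 'add':
            for ix in ixs:
                # Assumption: "add" numbers index into the tags list
                print('  adds tag {} ({})'.format(ix, tags[ix].get('function')))
    print()

Extending that to the second, URL-specific report would presumably mean evaluating the _re predicates against a supplied URL rather than just printing them.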

That would be simpler than running Ghostery, or opening the developer tools in a wide-open-by-default browser like Chrome (rather than the rather more privacy-defending Firefox, for example) and searching the network logs for incriminating evidence.

Google Tag Manager has been around for some time, and I’m assuming that organisational web folk have read each line of code in the gtm.js they load into users’ browsers to make sure that it’s not doing anything untoward. (That everyone else uses it is no excuse, unless perhaps it meets some sort of international software quality standard that means folk can just embed it without looking at it.)

So I’m wondering:

  • is there a line by line annotated version of the code at the bottom of the gtm.js script anywhere?
  • are there line by line examples out there of a simple gtm.js script and how to read it / analyse it (so eg walking through: this rule says this, which adds that tag, which is then parsed this way?)
  • are there static gtm.js analysers out there that generate the static reports suggested above and that allow folk to analyse arbitrary gtm.js scripts that are loaded into their browser in many of the sites they visit?

Intercepting JSON HTTP Responses to Web Browser Page Requests Using MITMProxy

Coming back from a week or so away, the car let us down with a ruptured water hose, which sent my confidence / mental state tanking, albeit with the AA managing a quick fix with some new-to-me water-activated tape along the lines of this. (It’s bad enough requiring a call out, but the stress is multiplied when you live on an island!)

My reboot strategy was to have a quick play with data from the weekend’s WRC rally, but when my datagrabber failed, it tanked my mood further and resulted in an 18 hour not-moving / not eating manic coding stretch that I’m still bleary eyed from.

The problem stemmed from a couple of things that interacted enough to confuse me. One was a pandas update that changed the behaviour of the json_normalize function I was using to unpack JSON values, and the other was the behaviour of the WRC server I pull data from (probably in breach of terms and conditions) which erratically kept giving a NULL/404 response to valid requests.

I’m not sure if the server behaviour was a defensive measure against scraping on the part of the publisher, or if it’s an issue with the cache service I’m pulling from (certainly, hitting the same URL could give a valid data response, then nothing for the next few hits, then a response again). I tweaked my Python requests scraper code by adding some header info to spoof a browser user-agent, as well as tweaking the request period to make it play a bit nicer, but still the erratic 404s appeared at an ever greater rate.
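For what it’s worth, the user-agent / rate tweak amounts to no more than something like the following sketch (the user-agent string, URL and retry numbers are illustrative placeholders rather than what I actually used):

import time
import requests

# Spoof a browser user-agent; the string here is just an illustrative placeholder
HEADERS = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14) AppleWebKit/537.36'}

def get_json(url, retries=5, delay=2):
    # Fetch a JSON resource, backing off and retrying if the server
    # returns one of those erratic empty / 404 responses
    for i in range(retries):
        r = requests.get(url, headers=HEADERS)
        if r.status_code == 200 and r.text:
            return r.json()
        time.sleep(delay * (i + 1))
    return None

# wrc_json_url stands in for one of the JSON URLs requested by the live timing pages
# data = get_json(wrc_json_url)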

Loading pages via a browser works okay, with the JSON requests being handled correctly, so I could just scrape the HTML tables that I think the JSON is used to populate (else: why load it?), although I have noted in the past that the JSON data structures have more data fields than are displayed in the WRC live timing HTML tables.

Then I started wondering about how to automate the collection of those requests using a browser automation route, with Selenium handling page selections and something grabbing the data, perhaps from the browser devtools HAR archive (right click on a recorded entry in the devtools network listing to save all of them to a HAR archive).

The HAR archive itself is a JSON file, so that’s quite easy to work with, but the Chrome export (I think) includes everything, not just filtered requests. Firefox seems to let you filter the network items and export just the filtered ones to a HAR file, which you can then open as a JSON file, filtering on the URL to identify the request(s) of interest.
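For example, something like the following sketch pulls matching entries out of an exported HAR file (the filename and URL fragment are placeholders):

import json
import base64

# Load a HAR archive exported from the browser devtools network panel
# (the filename is a placeholder)
with open('wrc.har') as f:
    har = json.load(f)

# Keep just the entries whose request URL matches a pattern of interest
entries = [e for e in har['log']['entries'] if 'sasCache' in e['request']['url']]

for e in entries:
    content = e['response']['content']
    text = content.get('text', '')
    # HAR files may store binary response bodies base64-encoded
    if content.get('encoding') == 'base64':
        text = base64.b64decode(text).decode('utf-8', errors='replace')
    print(e['request']['url'], text[:100])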

In passing, I also note that the har-extractor tool makes previewing life easier in the way it unpacks requests from a har archive into discrete files in a (nested) directory structure.

I also notice just now that the Firefox developer tools seem to have a websocket sniffer, which could be handy… [UPDATE: I think that may be an extension I installed…]  FWIW, I also started trying to get my head around web sockets in a generic Python context when I was trying to come up with a simple MyBinder client (see here): A Minimal Python Client for MyBinder.

Whilst the Firefox route looked promising, I wasn’t sure how automatable it would be: selenium-py would let me script lots of link clicking in the WRC site, but I’m not sure it provides an API to the browser dev tools?

I did find one tool I thought looked interesting, selenium-wire, but it seems to only capture headers, not payloads, of requests made from the selenium scripted browser:

#!pip3 install selenium-wire
from seleniumwire import webdriver 

# Create a new instance of the Firefox driver
driver = webdriver.Firefox()

# Go to a WRC live timing page
driver.get('https://www.wrc.com/en/wrc/livetiming/page/4175----.html')

# Access requests via the `requests` attribute
for request in driver.requests:
    if request.response:
        if 'sasCache' in request.path:
            print(
                request.path,
                request.response.status_code,
                request.response.headers,
                request.body,
                '\n'
            )

I can see how that might be handy for capturing the addresses of resources loaded by a page, but I want the actual gzipped JSON data that forms the response content for the requests I’m interested in…

UPDATE: it seems like selenium-wire is totally up to the job after all.

Poking around further, it seems the best approach is to use a proxy that can grab traffic as required. There are lots of partial clues as to what to use out there, many of them referring to browsermob and the BrowserMob Proxy Python client, but no full recipes.

Another proxy that looked a bit easier to use, with a Python base and more powerful in the way you can script it, is mitmproxy (“man-in-the-middle proxy”).

Again, the docs and recipes seem to be a bit scattered, so here’s a complete recipe that worked for me…

Start off by installing the proxy and getting it running. You can do this on the command line with:

pip3 install mitmproxy

mitmdump -w test1
#This will dump intercepted requests into
#    the file: test1

#close with: ctrl-c

On my local install, but not MyBinder?, I can actually run this as a background job from a notebook code cell using cell block magic:

%%script bash --bg
mitmdump -w test1

To stop the background process, we could look up the process number from a code cell and then kill that process by process ID (kill PROCESSID):

#Process numbers
! ps -e | grep 'mitmdump' | awk '{print $1 " " $4}'

or let the (Linux) machine do it…

#Or to kill eg
!kill $(ps -e | grep 'mitmdump' | awk '{print $1}' )

In a notebook, we can then launch an (optionally headless) selenium-controlled browser:

from selenium import webdriver

PROXY = "localhost:8080" # IP:PORT or HOST:PORT

chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--proxy-server=%s' % PROXY)
chrome_options.add_argument("--headless") 

chrome = webdriver.Chrome(chrome_options=chrome_options)
chrome.get("https://www.wrc.com/en/wrc/livetiming/page/4175----.html")

chrome.close()

We can view the result using the mitmweb browser app (from the command line: mitmweb). (Note that I think this also runs the proxy…? Again, ctrl-c to kill it.)

We can load the file we collected within the web app and then filter the requests to ones of interest:

Selecting a request shows us the contents of the request response, which is to say: the JSON data I’m after…

So this is starting to look promising…

…even more so when we realise we can filter a collected set of resources using the mitmdump command, for example with a construction of the form:

mitmdump -nr test1 -w test4 "~u .*sasCacheApi.*"

which will filter the archived collection in test1 using any desired filters (eg. "~u .*sasCacheApi.*") to create the filtered set in test4.

We can also add filters when running mitmdump to collect requests. For example:

mitmdump -w test5 "~u .*sasCacheApi.*"

will only capture and dump intercepted requests from locations with the desired address pattern into the file test5.

I’ve now got a pattern that could be used to scrape lots of JSON files:

  • set up the mitmproxy with appropriate filters to collect just the files I want,
  • script selenium to load the desired web page and click through various bits of it to make sure all the resources I want are loaded *(not addressed here; I need to do a post on scraping with Selenium-py; for now, here’s an example of using it to [do some repetitive work](https://blog.ouseful.info/2019/01/21/bulk-notebook-uploads-to-nbgallery-using-selenium/)…)*,
  • and then… then what? Parse the resource collection, that’s what…

Here’s an initial fragment for how to do that.

First, we can preview the headers for intercepted resources:

from mitmproxy import io
from mitmproxy.net.http.http1.assemble import assemble_request

def response(flow):
    # Print the assembled (raw) HTTP request for each intercepted flow
    print(assemble_request(flow.request).decode('utf-8'))

with open('test4', "rb") as logfile:
    freader = io.FlowReader(logfile)
    for f in freader.stream():
        response(f)

We can inspect what has been captured by getting the state of a flow object:

f.get_state()

This actually returns a python dict, the keys for which we can easily preview: f.get_state().keys()

Of particular interest is the response:

f.get_state()['response']

We note that the content is compressed / gzipped, so we can uncompress that…

import gzip
text = gzip.decompress(f.get_state()['response']['content'])
text

All that remains now is to tweak the response previewer (the response(flow) function defined above) to unzip each response and save it to a file. For example, something like:

import gzip

def response2(flow):
    # Generate a filename from the request path, tidying up URL-encoded characters
    fn = flow.get_state()['request']['path'].decode()
    fn = fn.split('=')[1].replace('%2F','_').replace('%3F','_').replace('%3D','_')
    print('Saving file: {}'.format(fn))
    # Uncompress the gzipped response content and save it out as a JSON file
    with open('{}.json'.format(fn),'wb') as outfile:
        outfile.write( gzip.decompress(flow.get_state()['response']['content']) )

with open('test4', "rb") as logfile:
    freader = io.FlowReader(logfile)
    for f in freader.stream():
        response2(f)

I’m still not feeling right happy / in control, though… F****g Boris…

PS as to why scrape the data? For generating things like these Stage Charts for WRC Rally Sweden.

Neo4J Graph Database Running in MyBinder

Earlier today, I spotted this rather handy Global Witness repo that includes data ingest and analysis around the UK Persons of Significant Control register using Neo4J.

In part it reminded me of my own early explorations around the PSC register, as well as previous attempts at linking Jupyter notebook and RStudio environments to Neo4j, such as Getting Started With the Neo4j Graph Database – Linking Neo4j and Jupyter SciPy Docker Containers Using Docker Compose and this one on Accessing a Neo4j Graph Database Server from RStudio and Jupyter R Notebooks Using Docker Containers.

I’ve also previously explored how to run Postgres server in a Binderised / MyBinder environment — Running a PostgreSQL Server in a MyBinder Container — so it seemed only natural to see if I could launch neo4J in a MyBinder environment too.

Setting things up to work with a Python client is easy enough — see this templated, Binderised repo — binder-neo4j — although at the moment I can’t seem to get the neo4j web UI to work with jupyter-server-proxy.
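For what it’s worth, the Python client route amounts to little more than something like the following sketch, using the official neo4j driver; the bolt URL and credentials here are assumptions that would need to match whatever the Binderised repo actually configures:

from neo4j import GraphDatabase

# Connection details are assumptions: adjust to whatever the Binderised
# repo's startup script configures
driver = GraphDatabase.driver('bolt://localhost:7687', auth=('neo4j', 'neo4j'))

with driver.session() as session:
    # Create a couple of nodes and a relationship, then read them back
    session.run('CREATE (a:Person {name: $name})-[:KNOWS]->(b:Person {name: $other})',
                name='Alice', other='Bob')
    result = session.run('MATCH (a:Person)-[:KNOWS]->(b) RETURN a.name, b.name')
    for record in result:
        print(record['a.name'], 'knows', record['b.name'])

driver.close()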

Now I’m wondering whether I should try to put together a Binderised repo that uses Sam Leon’s Global Witness repo scripts to ingest the PSC data into a Binderised repo to make running graph queries over the PSC data possible in a Mybinder environment…

View demo in MyBinder: https://mybinder.org/v2/gh/psychemedia/binder-neo4j/master?filepath=py%2Fneo4j-demo.py

Live By Machine – CircleCI and Docker Hub AutoBuilds

Some time ago I put together a recipe for creating a simple data analysis workbench around the ergast F1 data using Chris Newell’s ergast API: Setting up a Containerised Desktop API server (MySQL + Apache / PHP 5) for the ergast Motor Racing Data API.

All the ingredients are in this repo.

The ergast Docker container is built using an automated Docker build whenever the Github repo is updated.

Following on from Simon Willison’s recipe for generating a commit log for San Francisco’s official list of trees, a scraper hosted on Github that does a daily scrape using CircleCI and then commits updates back to the repo, I’ve also set CircleCI to run against my repo using a daily cron job that copies the latest version of the ergast MySQL db file from the ergast website and then commits it to the repo.

The .circleci/config.yml file is pretty much a straight rip-off of Simon’s:

version: 2
jobs:
  fetch_and_commit:
    docker:
      - image: circleci/python:3.6.4
    steps:
      - checkout
      - run:
          command: |
            cp ergastdb/data/f1db.sql.gz ergastdb/data/f1db-old.sql.gz
            curl -o ergastdb/data/f1db.sql.gz "http://ergast.com/downloads/f1db.sql.gz"
            git add ergastdb/data/f1db.sql.gz
            git config --global user.email "ergastbot@example.com"
            git config --global user.name "ergastbot"
            git commit -m "Daily update..." && \
              git push -q https://${GITHUB_PERSONAL_TOKEN}@github.com/psychemedia/ergast-f1-api.git master \
              || true
workflows:
  version: 2
  build:
    jobs:
      - fetch_and_commit
  nightly:
    triggers:
      - schedule:
          cron: "0 0 * * *"
          filters:
            branches:
              only:
                - master
    jobs:
      - fetch_and_commit

The Github Personal Access Token was set up just with permissions to access my public repos:

I then used the value of the token for the GITHUB_PERSONAL_TOKEN environmental variable in the appropriate CircleCI project:

What this means is that the Ergast API container should be regularly rebuilt, automatically, using a regularly updated copy of the ergast database.

Public CircleCI build logs can be found here:

https://circleci.com/api/v1.1/project/github/psychemedia/ergast-f1-api/

Add an optional build count (an integer) at the end of the URL for specific build details.

Fragment – Jupyter Kernels / MyBinder as a Remote Code Execution Sandbox for Moodle

Although I don’t know for sure, I suspect that administrators of computing infrastructure in educational establishments are wary of requests from academics for compute services that allow students to run arbitrary code.

One of the main reasons why an educator would want to support this is that setting up an environment can be hard: if you want a student to focus on writing code that makes use of particular packages, you probably don’t want them engaging in arcane sys admin practices and spending all their time trying to install those packages in the first place.

For the IT department, the thought of running arbitrary code that could be produced either by novices or deliberately malicious users is likely to raise several well-founded concerns: how do we stop users using the code environment to attack the server or network the code is running on; how do we stop folk from running code on our servers that could be used to attack external sites; and how do we control the resource requirements (storage, compute, network) when mistakes happen and folk try to repeatedly download the internet to our server?

One way of making hosted compute available to students is to execute code within isolated sandboxed environments that you can park in a safe area of the network and monitor closely.

In our Moodle VLE, the Moodle CodeRunner environment is used to allow students to run small fragments of code within just such an environment when completing interactive quiz questions. (I provide a quick review of the Moodle CodeRunner plugin in post [A] Quick First Look At Moodle CodeRunner.)

Presumably, someone somewhere has done a security audit and decided that the sandboxed code execution environment is a safe one and signed off on its use.

Another approach, described in this fragment on Jupyter Notebooks and Moodle, the SageCell filter for Moodle, allows you to run code against an external (stateless) SageCell server:

<?php
/**
 * SageCell filter for Moodle 3.4+
 *
 *  This filter will replace any Sage code in [sage]...[/sage]
 *  with a Ajax code from http://sagecell.sagemath.org
 *
 * @package    filter_sagecell
 * @copyright  2015-2018 Eugene Modlo, Sergey Semerikov
 * @license    http://www.gnu.org/copyleft/gpl.html GNU GPL v3 or later
 */

defined('MOODLE_INTERNAL') || die();

/**
 * Automatic SageCell embedding filter class.
 *
 * @package    filter_sagecell
 * @copyright  2015-2016 Eugene Modlo, Sergey Semerikov
 * @license    http://www.gnu.org/copyleft/gpl.html GNU GPL v3 or later
 */
class filter_sagecell extends moodle_text_filter {

    /**
     * Check text for Sage code in [sage]...[/sage].
     *
     * @param string $text
     * @param array $options
     * @return string
     */
    public function filter($text, array $options = array()) {

        if (!is_string($text) or empty($text)) {
            // Non string data can not be filtered anyway.
            return $text;
        }

        if (strpos($text, '[sage]') === false) {
            // Performance shortcut - if there is no [sage] tag, nothing can match.
            return $text;
        }

        $newtext = $text; // Fullclone is slow and not needed here.

        $search = '/\[sage](.+?)\[\/sage]/is';
        $newtext = preg_replace_callback($search, 'filter_sagecell_callback', $newtext);

        if (is_null($newtext) or $newtext === $text) {
            // Error or not filtered.
            return $text;
        }

        return $newtext;
    }

}

/**
 * Replace Sage code with embedded SageCell, if possible.
 *
 * @param array $sagecode
 * @return string
 */
function filter_sagecell_callback($sagecode) {

    // SageCell code from [sage]...[/sage].
    $output = $sagecode[1];

    // Normalise the HTML line-break / paragraph markup introduced by the
    // Moodle editor into plain newlines.
    $output = str_ireplace("<p>", "\n", $output);
    $output = str_ireplace("</p>", "\n", $output);
    $output = str_ireplace("<br>", "\n", $output);
    $output = str_ireplace("<br/>", "\n", $output);
    $output = str_ireplace("<br />", "\n", $output);
    $output = str_ireplace("&nbsp;", "\x20", $output);
    $output = str_ireplace("\xc2\xa0", "\x20", $output);
    $output = clean_text($output);
    $output = str_ireplace("&lt;", "<", $output);

    $id = uniqid("");

    // Rewrite the code block as a SageCell: a script element that turns the
    // div below into an auto-evaluating cell, plus the div holding the code.
    $output = "<script type=\"text/javascript\">" .
        "sagecell.makeSagecell({inputLocation: \"#" . $id . "\"," .
        "evalButtonText: \"Evaluate\"," .
        "autoeval: true," .
        "hide: [\"evalButton\", \"editor\", \"messages\", \"permalink\", \"language\"] }" .
        ");" .
        "</script>" .
        "<div id=\"" . $id . "\">" . $output . "</div>";

    return $output;
}

This looks to me like the SageCell Moodle filter essentially rewrites a [sage]...[/sage] delimited code block within a Moodle environment as a Javascript backed SageCell form and then lets users run the code embedded in the form against the remote server. This sort of thing could presumably be used to support interactive, executable code activities within a Moodle hosted web page, for example.

As I remarked previously, it’s not hard to imagine doing something similar to provide a [mybinder repository="..."]...[/mybinder] filter that could use a Javascript library such as ThebeLab or Juniper to provide a similar style of interaction backed by a MyBinder launched repository, though minor tweaks may be required around those packages to handle stateless rather than stateful transactions if repeated calls are made to the server.

Going back to the CodeRunner plugin (as described here):

[i]nternally CodeRunner is designed to support multiple sandboxes, implemented as subclasses of the abstract class qtype_coderunner_sandbox – see sandbox.php. Sandboxes are essentially plugins to CodeRunner. Several different ones have been used over the years but the only current ones are the jobe sandbox (file jobesandbox.php) and the ideone sandbox. The latter interfaces to the Sphere On-line judge server but is now more-or-less defunct. Both of those sandboxes run as services. CodeRunner can support multiple sandboxes at the same time and questions can be configured to select a particular sandbox (if desired). By default the first available sandbox that supports the language required by the question is used.

So could we use a MyBinder launched Jupyter server to provide sandboxed code execution?

One advantage of this would be that we could define a Jupyter environment that students could use on their own machines, or that we could host via a hosted notebook server, and that same environment could be used for CodeRunner style assessment.

Another advantage would be that if we want to run student created arbitrary code for teaching activities as well as CodeRunner based assessment activities, we’d only need to sign off on one sandboxed code execution environment rather than several.

So what’s required?

It’s years since I last used PHP, but I thought I’d have a go at creating a simple Python client that would let me:

  • start a MyBinder server against a specified Github repo;
  • start a kernel;
  • run a small code sample in the kernel and get a code execution response back.

Cribbing heavily from juniper.js and this rather handy sagecell-client.py, I came up with a hacky recipe that works as a minimal proof of concept here: mybinder_py_client-ipynb.
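The rough shape of the recipe looks something like the following sketch, pieced together from the public BinderHub build API and the Jupyter notebook server REST / websocket APIs; the example repo, the demo code string and the rather naive message handling are illustrative simplifications rather than the actual client:

import json, uuid
import requests
import websocket  # pip install websocket-client

def launch_binder(repo='psychemedia/binder-neo4j', ref='master'):
    # Request a MyBinder build and wait for the launched server's URL and token;
    # the build endpoint returns an event stream of JSON status messages
    stream = requests.get('https://mybinder.org/build/gh/{}/{}'.format(repo, ref), stream=True)
    for line in stream.iter_lines():
        if line.startswith(b'data:'):
            msg = json.loads(line.decode().split(':', 1)[1])
            if msg.get('phase') == 'ready':
                return msg['url'], msg['token']

def start_kernel(url, token):
    # Start a kernel on the launched Jupyter server via its REST API
    r = requests.post('{}api/kernels'.format(url),
                      headers={'Authorization': 'token {}'.format(token)})
    return r.json()['id']

def run_code(url, token, kernel_id, code):
    # Send an execute_request over the kernel's channels websocket
    # and return the first stream output or the execute_reply content
    ws_url = url.replace('https://', 'wss://', 1)
    ws = websocket.create_connection(
        '{}api/kernels/{}/channels?token={}'.format(ws_url, kernel_id, token))
    msg = {'header': {'msg_id': uuid.uuid4().hex, 'username': '', 'session': uuid.uuid4().hex,
                      'msg_type': 'execute_request', 'version': '5.2'},
           'parent_header': {}, 'metadata': {},
           'content': {'code': code, 'silent': False},
           'channel': 'shell'}
    ws.send(json.dumps(msg))
    while True:
        reply = json.loads(ws.recv())
        if reply.get('msg_type') == 'stream':
            return reply['content']['text']
        if reply.get('msg_type') == 'execute_reply':
            return reply['content']

url, token = launch_binder()
kernel_id = start_kernel(url, token)
print(run_code(url, token, kernel_id, 'print(1+1)'))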

I think this is stateful, in that we execute several code blocks one after the other and exploit state from previous calls to the same kernel. It would probably also make sense to have a call that forces a new kernel for each code execution call, as well as providing a recipe for killing a kernel.
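Killing a kernel should just be a matter of the corresponding REST call; for example, something along these lines, reusing the url, token and kernel_id values from the sketch above:

import requests

def kill_kernel(url, token, kernel_id):
    # Shut down a running kernel via the Jupyter REST API (DELETE /api/kernels/<id>)
    requests.delete('{}api/kernels/{}'.format(url, kernel_id),
                    headers={'Authorization': 'token {}'.format(token)})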

The next step in trying to use this approach for a CodeRunner sandbox would presumably be to try to create a simple PHP based MyBinder client; then the next step would be to use that in a CodeRunner sandbox subclass.

But that’s out of scope for me atm…

Please let me know in the comments if you have a go at this… or know of any other Moodle / Jupyter integrations…

Fragment: Teaching Coding By Example, a Line of Code at a Time

One of the things I try to do in many of my demo Jupyter notebooks is explain what’s going on so that readers who aren’t (yet) Python programmers can hopefully form some understanding of what the code is doing.

This Simple demo notebook originally started out as a really quick notebook containing little more than code blocks that showed how to download and review some WEC (World Endurance Championship) laptime data; but then I started iterating it, adding in more explanatory code steps, prefaced by markdown text that tried to explain what the following line of code was going to do.

One of the ongoing debates we have in our TM351 Data Management and Analysis course is whether students need to know how to programme in Python to do the course, i.e. whether the module should have a Python programming course prerequisite, or at least a programming skill prerequisite (I argue in favour of no prerequisites).

Certainly, explaining each step of the code adds more words and makes each notebook a much longer read; but a lot of effective distance teaching does involve repetition and rehearsal.  The line by line, “explain what you want to do and how you’re going to do it; do it; preview the output” approach also “unpacks” each line of code in a problem solving / goal directed context (“I want to do this, which requires that I have previously done that“).

Exploring Jupytext – Creating Simple Python Modules Via a Notebook UI

Although I spend a lot of my coding time in Jupyter notebooks, there are several practical problems associated with working in that environment.

One problem is that under version control, it can be hard to tell what’s changed. For one thing, the notebook .ipynb format, which saves as a serialised JSON object, is hard to read cleanly:

The .ipynb format also records changes to cell execution state, including cell execution count numbers and changes to cell outputs (which may take the form of large encoded strings when a cell output is an image or chart, for example):

Another issue arises when trying to write modules in a notebook that can be loaded into other notebooks.

One workaround for this is to use the notebook loading hack described in the official docs: Importing notebooks. This requires loading in a notebook loading module that then allows you to import other modules. Once the notebook loader module is installed, you can run things like:

  • import mycode as mc to load mycode.ipynb
  • `moc = __import__("My Other Code")` to load code in from `My Other Code.ipynb`

If you want to include code that can run in the notebook, but that is not executed when the notebook is loaded as a module, you can guard items in the notebook:
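For example, something like the following in a notebook code cell (the function here is just a dummy for illustration):

def double(x):
    # Reusable code we want to be able to import from the notebook-as-module
    return 2 * x

if __name__ == '__main__':
    # Demo / test code: this runs when the cell is executed in the notebook UI,
    # but not when the notebook is loaded as a module
    print(double(21))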

In this case, the if __name__=='__main__': guard will run the code in the code cell when run in the notebook UI, but will not run it when the notebook is loaded as a module.

Guarding code can get very messy very quickly, so is there an easier way?

And is there an easier way of using notebooks more generally as an environment for creating code+documentation files that better meet the needs of a variety of users? For example, I note this quote from Daniele Procida recently shared by Simon Willison:

Documentation needs to include and be structured around its four different functions: tutorials, how-to guides, explanation and technical reference. Each of them requires a distinct mode of writing. People working with software need these four different kinds of documentation at different times, in different circumstances—so software usually needs them all.

This suggests a range of different documentation styles for different purposes, although I wonder if that is strictly necessary?

When I am hacking code together, I find that I start out by writing things a line at a time, checking the output for each line, then grouping lines in a single cell and checking the output, then wrapping things in a function (for example of this in practice, see Programming in Jupyter Notebooks, via the Heavy Metal Umlaut). I also try to write markdown notes that set up what I intend to do (and why) in the following code cells. This means my development notebooks tell a story (of a sort) of the development of the functions that hopefully do what I actually want them to by the end of the notebook.

If truth be told, the notebooks often end up as an unholy mess, particularly if they are full of guard statements that try to separate out development and testing code from useful code blocks that I might want to import elsewhere.

Although I’ve been watching it for months, I’ve only started exploring how to use Jupytext in practice quite recently, and already it’s starting to change how I use notebooks.

If you install jupytext, you will find that if you click on a link to a markdown (.md) or Python (.py) file, or a whole range of other text document types (.R, .r, .Rmd, .jl, .cpp, .ss, .clj, .scm, .sh, .q, .m, .pro, .js, .ts, .scala), you will open the file in a notebook environment.

You can also open the file as a .py file, from the notebook listing menu by selecting the notebook:

and then using the Edit button to open it:

at which point you are presented with the “normal” text file editor:

One thing to note about the notebook editor view is that you can also include markdown cells, as you might in any other notebook, and run code cells to preview their output inline within the notebook view.

However, whilst the markdown code will be saved into the Python file (as commented out code), the code outputs will not be saved into the Python file.

If you do want to be able to save notebook views with any associated code output, you can configure Jupytext to “pair” .py and .ipynb files (and other combinations, such as .py, .ipynb and .md files) such that when you save an open .py or .ipynb file from the notebook editing environment, a “paired” .ipynb or .py version of the file is also saved at the same time.

This means I could click to open my .py file in the notebook UI, run it, then when I save it, a “simple” .py file containing just code and commented out markdown is saved along with a notebook .ipynb file that also contains the code cell outputs.

You can configure Jupytext so that the pairing only works in particular directories. I’ve started trying to explore various settings in the branches of this repo: ouseful-template-repos/jupytext-md. You can also convert files on the command line; for example, jupytext --to py Required\ Pace.ipynb will convert a notebook file to a Python file.

The ability to edit Python / .py files, or code containing markdown / .md files in a notebook UI, is really handy, but there’s more…

Remember the guards?

Using the notebook UI (from the notebook View menu, select Cell Toolbar and then Tags), I can tag a code cell with a tag of the form active-ipynb:

See the Jupytext docs: importing Jupyter notebooks as modules for more…

The tags are saved as metadata in all document types. For example, in an .md version of the notebook, the metadata is passed in an attribute-value pair when defining the language type of a code block:

In a .py version of the notebook, however, the tagged code cell is not rendered as a code cell, it is commented out:

What this means is that I can tag cells in the notebook editor to include them — or not — as executable code in particular document types.

For example, if I pair .ipynb and .py files, whenever I edit either an .ipynb or .py file in the notebook UI, it also gets saved as the paired document type. Within the notebook UI, I can execute all the code cells, but through using tagged cells, I can define some cells as executable in one saved document type (.ipynb for example) but not in another (a .py file, perhaps).

What that in turn means is that when I am hacking around with the document in the notebook UI I can create documents that include all manner of scraggy developmental test code, but only save certain cells as executable code into the associated .py module file.

The module workflow is now:

  • install Jupytext;
  • edit Python files in a notebook environment;
  • run all cells when running in the notebook UI;
  • mark development code as active-ipynb, which is to say, it is *not active* in a .py file;
  • load the .py file in as a module into other modules or notebooks, leaving out the commented-out development code; if I use `%load_ext autoreload` and `%autoreload 2` magic in the document that’s loading the modules, it will [automatically reload them](https://stackoverflow.com/a/5399339/454773) when I call functions imported from them if I’ve made changes to the associated module file;
  • optionally pair the .py file with an .ipynb file, in which case the .ipynb file will be saved: a) with *all* cells run; b) including cell outputs.

Referring back to Daniele Procida’s insights about documentation, this ability to have code in a single document (for example, a .py file) that is executable in one environment (the notebook editing / development environment, for example) but not another (when loaded as a .py module) means we can start to write richer source code files.

I also wonder if this provides us with a way of bundling test code as part of the code development narrative? (I don’t use tests so don’t really know how the workflow goes…)

More general is the insight that we can use Jupytext to automatically generate distinct versions of a document from a single source document. The generated documents:

  • can include code outputs;
  • can *exclude* code outputs;
  • can have tagged code commented out in some document formats and not others.

I’m not sure if we can also use it in combination with other notebook extensions to hide particular cells, for example, when viewing documents in the notebook editor or generating export document formats from an executed notebook form of it. A good example to try out might be the hide_code extension, which provides a range of toolbar options that can be used to customise the display of a document in the notebook editor or in HTML / PDF documents generated from it.

It could also be useful to have a very simple extension that lets you click a toolbar button to set an active- state tag and style or highlight that cell in the notebook UI to mark it out as having limited execution status. A simple fork of, or extension to, the freeze extension would probably do that. (I note that Jupytext responds to the “frozen” freeze setting but that presumably locks out executing the cell in the notebook UI too?)

PS a few weeks ago, Jupytext creator Marc Wouts posted this handy recipe for *rewriting* notebook commits made to a git branch against markdown formatted documents rather than the original ipynb change commits: `git filter-branch --tree-filter 'jupytext --to md */*.ipynb && rm -f */*.ipynb' HEAD`. This means that if you have a legacy project with commits made to notebook files, you can rewrite it as a series of changes made to markdown or Python document versions of the notebooks…