Fragment: Code Complexity in Notebooks — I’m Obviously Not Wily Enough

Following on from Thinking About Things That Might Be Autogradeable or Useful for Automated Marking Support, via Chris Holdgraf comes something else that might be worth considering, both for profiling notebooks and for assessing code.

The response came following an idle tweet I’d posted wondering “If folk can read 600wpm (so 10wps), what’s a reasonable estimate for reading/understanding code blocks eg in jupyter notebook?”; if you’re trying to make sense of a code chunk in a notebook, I’m minded to assume that the number of lines may have an effect, as well as the line length.

Context for this: I’ve started mulling over a simple tool to profile / audit our course notebooks to try to get a baseline for how long it might reasonably take for a student to work through them. We could instrument the notebooks (eg using the nbgoogleanalytics or jupyter-analytics extensions to inject Google Analytics tracking codes into notebooks) and collect data on how long it actually takes, but we don’t. And whilst our course compute environment is on my watch, we won’t (at least, not using a commercial analytics company, even if their service is “free”, even though it would be really interesting…). If we were to explore logging, it might be interesting to add an open source analytics engine like Matomo (Piwik, as was) to the VM and let students log their own activity… Or maybe explore jupyter/telemetry collection with a local log analyser that students could look at…
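Logging aside, a crude static profile is easy enough to make a start on. For example, here’s a minimal sketch that counts markdown words and code lines in a notebook using nbformat (the notebook path and the 250wpm reading rate are placeholder assumptions on my part):

    # Crude notebook "effort" profile: prose word count vs. code line count.
    import nbformat

    WPM = 250  # assumed prose reading speed; code reading is slower and line-dependent

    nb = nbformat.read("notebook.ipynb", as_version=4)

    md_words = sum(len(c.source.split())
                   for c in nb.cells if c.cell_type == "markdown")
    code_lines = sum(len(c.source.splitlines())
                     for c in nb.cells if c.cell_type == "code")

    print(f"~{md_words / WPM:.1f} min of prose, plus {code_lines} lines of code to make sense of")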

So, Chris’ suggestion pointed me towards wily, “an application for tracking, reporting on timing and complexity in Python code”. Out of the can, wily can be used to analyse and report on the code complexity of a git repo over a period of time. It also looks like it can cope with notebooks: “Wily will detect and scan all Python code in .ipynb files automatically”. There also seems to be the ability to “disable reporting on individual cells”, so maybe I can get reports on a per notebook or per cell basis?

My requirement is much simpler than tracking the evolution of code complexity over time, however: I just want to run the code complexity tools over a single set of files, at one point in time, and generate reports on that. (Thinks: letting students plot the complexity of their code over time might be interesting, eg in a mini-project setting?) However, from the briefest of skims of the wily docs, I can’t fathom out how to do that (there is support for analysing across the current filesystem rather than a git repo, but that doesn’t seem to do anything for me… Is it looking to build a cache and search for diffs? I DON’T WANT A DIFF!) ;-)

There is an associated blog post that builds up the rationale for wily here — Refactoring Python Applications for Simplicity — so maybe by reading through that and perhaps poking through the wily repo I will be able to find an easy way of using wily, somehow, to profile my notebooks…

But the coffee break I gave myself to look at this and give it a spin has run out, so it’s consigned back to the back of the queue I’ve started for this side-project…

PS From a skim of the associated blog post, wily’s not the tool I need: radon is, “a Python tool which computes various code metrics, including raw metrics (SLOC (source lines of code), comment lines, blank lines, etc.), Cyclomatic Complexity (i.e. McCabe’s Complexity), Halstead metrics (all of them), the Maintainability Index (a Visual Studio metric)”. So I’ll be bumping that to the head of the queue…
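As a crude first pass over a single notebook, something like the following sketch against radon’s Python API might do; the notebook path is a placeholder, and cells containing IPython magics aren’t valid Python, so they’re just skipped here:

    # Radon metrics over the code cells of a single notebook.
    import nbformat
    from radon.complexity import cc_visit
    from radon.metrics import mi_visit
    from radon.raw import analyze

    nb = nbformat.read("notebook.ipynb", as_version=4)

    for i, cell in enumerate(c for c in nb.cells if c.cell_type == "code"):
        try:
            raw = analyze(cell.source)        # SLOC, comment lines, blank lines, etc.
            blocks = cc_visit(cell.source)    # cyclomatic complexity per function/class
            mi = mi_visit(cell.source, True)  # maintainability index
        except SyntaxError:
            print(f"cell {i}: skipped (not plain Python?)")
            continue
        worst = max((b.complexity for b in blocks), default=0)
        print(f"cell {i}: sloc={raw.sloc}, comments={raw.comments}, max CC={worst}, MI={mi:.1f}")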

Thinking About Things That Might Be Autogradeable or Useful for Automated Marking Support

One of the ideas we keep floating but never progressing is how we might make use of nbgrader. My feeling is we could start to make use of it now on an optional, individual tutor-marker basis. The current workflow is such that students submit assessments centrally and the work is then sent to assigned markers; markers mark the work and then return it centrally, whence it is dispatched back to students.

Whilst there has been a recent procurement exercise looking at replacing the central assignment handling system, I doubt that nbgrader even featured as a side note; although it can be used to release work to students, collect it from them, manage its allocation to markers, etc, I suspect the chance is vanishingly small of the institution tolerating more than one assignment handling system, and I very much doubt that nbgrader would be that system.

Despite that, individual working is still a possibility, and it requires the smallest of tweaks. Our data course currently distributes continuous assessment assignments as Jupyter notebooks, and students have been encouraged to return their work as completed notebooks, although they may also return notebooks converted to Word docs, for example. So if we just marked up the notebook with each question cell set as a manually graded answer, or a manually graded task, markers could individually decide to use the nbgrader tools to support their marking and feedback.

(We could also use the nbgrader system to generate the released-to-student notebooks and make sure we have stripped the answers out of them… Erm…)

When it comes to automated grading, lots of the questions we ask are not ideally suited to autograding, although with a few tweaks we could make them testable.

The nbgrader docs provide some good advice on writing good test cases, including examples of using mocking to help test whether functions were called or not called, as well as grading charts / plots using plotchecker.
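By way of illustration, here’s a hand-rolled version of the kind of mocking check the docs describe, testing whether a (hypothetical) student-defined greet() function actually calls print():

    # Autograder-style test cell sketch: did the student's function call print()?
    from unittest.mock import patch

    def greet(name):  # stand-in for the student's submitted code
        print(f"Hello, {name}")

    with patch("builtins.print") as mock_print:
        greet("Alice")

    assert mock_print.called, "Expected greet() to print a greeting"
    mock_print.assert_called_with("Hello, Alice")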

As someone who doesn’t write tests, I started to explore for myself examples of things we can test for autograding and auto-feedback. Note the auto-feedback reference there: one of the things that started to interest me was not so much the extent to which we could use automated tests to generate a mark per se, as how we could use tests to provide more general and informative forms of feedback.

True, a score is a form of feedback, but quite a blunt one, and one that may suffer from false positives or, more likely, false negatives. So could we instead explore how tests can be used to provide more constructive feedback; cf the use of linters in this respect (for example, Nudging Student Coders into Conforming with the PEP8 Python Style Guide Using Jupyter Notebooks, flake8 and pycodestyle_magic Linters). And rather than using autograders as a be-all and end-all, could we use them as feedback generators and as a support tool for markers, making mark suggestions rather than awarding official scores?

Once you start thinking about an autograder as a marker support tool, rather than a marker in its own right, it reduces the need for the autograder to be right… being right can be left to the judgement of the human marker. All that we would require is that the autograder is mostly useful/helpful, or at least, more helpful/useful than it is a hindrance.
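A minimal sketch of what I mean by a feedback generator, with the checks, the student answer and the specimen all made-up stand-ins; each test contributes a suggestion for the marker rather than a pass/fail score:

    # Tests as feedback generators: suggestions, not scores.
    def feedback(checks):
        """Run (condition, ok_msg, fix_msg) checks, returning messages not marks."""
        return [ok if cond else fix for cond, ok, fix in checks]

    student_answer = [1, 2, 2, 3]  # hypothetical submitted result
    specimen = [1, 2, 3]           # hypothetical specimen answer

    for msg in feedback([
        (isinstance(student_answer, list),
         "Answer is a list, as required.",
         "The answer should be returned as a list."),
        (student_answer == specimen,
         "Values match the specimen answer.",
         "Values differ from the specimen; did you drop duplicates?"),
    ]):
        print("-", msg)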

Here’s another example of how we might generate useful feedback, this time as part of a grader that is capable of assigning partial credit: generating feedback on submitted charts.

As an example, I wrote up some notes on the crudest of marking support tools for marking free text answers against a specimen answer. I know very little about NLP (natural language processing) and even less about automated marking of free text answers, but I think I can see some utility even with a crappy similarity matcher from an off-the-shelf NLP package (spacy).
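For a flavour of what I mean, spacy’s document similarity measure gives a blunt score between a specimen and a submitted answer (the sentences here are made up; the _md model is the one that ships with word vectors):

    # Crude specimen-answer matcher: spacy doc similarity as a marking hint.
    import spacy

    nlp = spacy.load("en_core_web_md")

    specimen = nlp("An index lets the database find rows without scanning the whole table.")
    answer = nlp("Queries run faster because the table is indexed.")

    print(f"similarity: {specimen.similarity(answer):.2f}")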

PS In passing, I also noticed this tip for nbgrader autograding in a Docker container using envkernel, a tool that can wrap a docker container so you can launch it as a notebook kernel. (I haven’t managed to get this working yet; I didn’t spot a demo that “just works”, so I figure I need to actually read the docs, which I haven’t made time to do yet… So if you do have a baby steps example that does work, please share it via the comments… Or submit it as a PR to the official docs…)

Accessing MyBinder Kernels Remotely from IPython Magic and from VS Code

One of the issues facing us as a distance learning organisation is how to support the computing needs of distance learning students, on a cross-platform basis, and over a wide range of computer specifications.

The approach we have taken for our TM351 Data Management and Analysis course is to ship students a VirtualBox virtual machine. This mostly works, but in some cases it doesn’t. So in the absence of an institutionally hosted online notebook service, I started wondering about whether we could freeload on MyBinder as a way of helping students run the course software.

I’ve started working on an image here, though it’s still divergent from the shipped VM (I need to sort out things like database seeding, and maybe fix some of the package versions…), and that leaves open the question of how students would then access the environment.

One solution would be to let students work on MyBinder directly, but this raises the question of how to get the course notebooks into the Binder environment (the notebooks are in a repo, but it’s a private repo) and out again at the end of a session. One solution might be to use a Jupyter GitHub extension, but this would require students setting up a GitHub repository, installing and configuring the extension, remembering to sync (unless auto-save-and-commit is available, or could be added to the extension) and so on…

An alternative solution would be to find a way of treating MyBinder like an Enterprise Gateway server, launching a kernel via MyBinder from a local notebook server extension. But I don’t know how to do that.

Some fragments I have had laying around for a bit were the first fumblings towards a Python MyBinder client API, based on the Sage Cell client for running a chunk of code on a remote server… So I wondered whether I could do another pass over that code to create some IPython magic that lets you create a MyBinder environment from a repo and then execute code against it from a magicked code cell. Proof of concept code for that is here: innovationOUtside/ipython_binder_magic.

One problem is that the connection seems to time out quite quickly. The code is really hacky and could probably be rebuilt from functions in the Jupyter client package, but making sense of that code is beyond my limited cut-and-paste abilities. But: it does offer a minimal working demo of what such a thing could be like. At a push, a student could install a minimal Jupyter server on their machine, install the magic, and then write notebooks using magic to run the code against a Binder kernel, albeit one that keeps dying. Whilst this would be inconvenient, it’s not a complete catastrophe because the notebook would be being saved to the student’s local machine.

Another alternative struck me today when I saw that Yuvi Panda had posted to the official Jupyter blog a recipe on how to connect to a remote JupyterHub from Visual Studio Code. The mechanics are quite simple — I posted a demo here about how to connect from VS Code to a remote Jupyter server running on Digital Ocean, and the same approach works for connecting to our VM notebook server, if you tweak the VM notebook server’s access permissions — but it requires you to have a token. Yuvi’s post says how to find that from a remote JupyterHub server, but can we find the token for a MyBinder server?

If you open your browser’s developer tools and watch the network traffic as you launch a MyBinder server, then you can indeed see the URL used to launch the environment, along with the necessary token:

But that’s a bit of a faff if we want students to launch a Binder environment, watch the network traffic, grab the token and then use that to create a connection to the Binder environment from VS Code.

Searching the contents of pages from a running Binder environment, it seems that the token is hidden in the page:

And it’s not that hard to find… it’s in the link from the Jupyter log. The URL needs a tiny bit of editing (cut the /tree path element) but then the URL is good to go as the kernel connection URL in VS Code:
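As a quick sanity check that a lifted token actually works, you can hit the server’s REST API with it before pasting anything into VS Code; the server URL and token below are placeholders for whatever your Binder session reports:

    # List the running kernels on a Binder server using its token.
    import requests

    base = "https://hub.gke.mybinder.org/user/some-binder-session"  # placeholder
    token = "your-token-here"                                       # placeholder

    r = requests.get(f"{base}/api/kernels",
                     headers={"Authorization": f"token {token}"})
    r.raise_for_status()
    print(r.json())  # a list of running kernel descriptions if the token is good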

Then you can start working on your notebook in VS Code (open a new notebook from the settings menu), executing the code against the MyBinder environment.

You can also see the notebooks listed in the remote MyBinder environment.

So that’s another way… and now it’s got me thinking… how hard would it be to write a VS Code extension to launch a MyBinder container and then connect to it?
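Possibly not that hard, at least as far as getting the URL and token goes: the MyBinder build endpoint streams launch events, and (if I read it right) the ready event carries the server URL and token. A minimal sketch, using a public repo and assuming the master branch:

    # Launch a Binder from a repo and grab the server URL and token
    # from the build endpoint's event stream (server-sent events).
    import json
    import requests

    build_url = "https://mybinder.org/build/gh/innovationOUtside/binder-neo4j/master"

    with requests.get(build_url, stream=True) as resp:
        for line in resp.iter_lines():
            if not line.startswith(b"data:"):
                continue
            event = json.loads(line[len(b"data:"):])
            print(event.get("phase"), event.get("message", "").strip())
            if event.get("phase") == "ready":
                print("server:", event["url"], "token:", event["token"])
                break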

PS By the by, I notice that the developer tools in Firefox have become even more useful with the Firefox 71 release, in the form of a websocket inspector.

This lets you inspect traffic sent across a websocket connection. For example, if we force a page reload on a running Jupyter notebook, we can see a websocket connection:

We can then click on that connection and monitor the messages being passed over it…

I thought this might help me debug / improve my Binder magic, but it hasn’t. The notebook looks like it sends an empty ping as a heartbeat (as per the docs), but if I try to send an empty message from the magic, the connection closes. Instead, I send a message on the heartbeat channel…

vagrant share – Sharing a Vagrant-Launched Headless VM Service on the Public Interwebz

Lest I forget (which I had…):

vagrant share lets you launch a VM using vagrant and share the environment using ngrok in three ways:

  • via public URLs (expose your http ports to the web, rather than locally);
  • via ssh;
  • via vagrant connect (connect to any exposed VM port from a remote location).

So this could be handy for supporting students remotely… If we tell them to install the vagrant share plugin, then we can offer remote support…

Tinkering With Neo4j and Cypher

I am so bored of tech at the moment — I just wish I could pluck up the courage to go into the garden and start working on it again (it was, after all, one of the reasons for buying the house we’ve been in for several years now, and months go by without me setting foot into it; for the third year in a row the apples and pears have gone to rot, except for the ones the neighbours go scrumping for…) Instead, I sit all day, every day, in front of a screen, hacking at a keyboard… and I f*****g hate it…

Anyway… here’s some of the stuff that I’ve been playing with yesterday and today, in part prompted by a tweet doing the rounds again:

#Software #Analytics with #Jupyter notebooks using a prefilled #Neo4j database running on #MyBinder by @softvisresearch
Created with building blocks from @feststelltaste and @psychemedia
#knowledgegraph #softwaredevelopment
https://github.com/softvis-research/BeLL

Impact.

Yeah:-)

Anyway… It prompted me to revisit my binder-neo4j repo, which demos how to launch a neo4j database in a MyBinder container, to provide some more baby-steps ways in to actually getting started running queries.

So yesterday I added a third party cypher kernel to the build, HelgeCPH/cypher_kernel, that lets you write cypher queries in code cells; and today I hacked together some simple magic — innovationOUtside/cypher_magic — that lets you write cypher queries in block magic cells in a “normal” (python kernel) notebook. This magic really should be extended a bit more, eg to allow connections to arbitrary neo4j databases, and perhaps crib from the cypher_kernel to include graph conversions to a networkx graph object format as well as graphical visualisations.
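At its core, such a magic only needs a few lines. Here’s a stripped-down sketch along the lines of what cypher_magic does, using py2neo; the connection details are placeholders for whatever the container exposes:

    # Stripped-down block magic for running Cypher queries from a code cell.
    from IPython.core.magic import register_cell_magic
    from py2neo import Graph

    graph = Graph("bolt://localhost:7687", auth=("neo4j", "password"))  # placeholder

    @register_cell_magic
    def cypher(line, cell):
        """Run the cell body as a Cypher query; return the result as a dataframe."""
        return graph.run(cell).to_data_frame()

    # In a notebook, usage then looks like:
    #   %%cypher
    #   MATCH (m)-[r]->(n) RETURN m, r, n LIMIT 5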

The cypher_kernel uses visjs, as does an earlier cypher magic that appears to have rotted (ipython-cypher). But if we can get the graph objects into a networkx format, then we could also use netwulf to make pretty diagrams…

The tweet-linked repo also looks interesting (although I don’t speak German at all, so, erm…); there may be things I can pull out of there to add to my binder-neo4j repo, although I may need to rethink that: the binder-neo4j repo started out as a minimal template repo for just getting started with neo4j in MyBinder/repo2docker. But it’s started creeping… Maybe I should pare it back again, install the magic from its own repo, and put the demos in a more disposable place.

Sketches Around Transkribus – Handwritten Text Transcriptions in Jupyter Notebooks

Another strike day spent not reading as much as I’d hoped: I auto-distracted myself by having a play with the Transkribus Python API. (Transkribus, if you recall, is an app that supports the transcription of handwritten texts.)

The API lets you pull (and push, but I haven’t got that far yet) documents from and to the Transkribus webservice. One of the docs you can export from it (which is also available from the GUI client) is an XML doc that includes co-ordinates for segmented line regions within each page:

You can export the document from the GUI…

…but there’s a Python API way of doing it too…

So, I made a few sketches (in a notebook in this gist) that started to explore the API, including pulling the XML down, along with page images, parsing it, and using OpenCV to crop individual text lines out of the page image scan.
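The cropping step is straightforward enough once you have the XML: each TextLine element carries a Coords element whose points attribute bounds the line. A rough sketch (the filenames, and possibly the PAGE namespace version, are assumptions on my part):

    # Crop each segmented text line out of a page scan, using the line
    # coordinates from a PAGE XML export.
    import cv2
    import numpy as np
    from lxml import etree

    PAGE_NS = {"pc": "http://schema.primaresearch.org/PAGE/gts/pagecontent/2013-07-15"}

    img = cv2.imread("page.jpg")    # placeholder page scan
    tree = etree.parse("page.xml")  # placeholder PAGE XML export

    for i, textline in enumerate(tree.findall(".//pc:TextLine", PAGE_NS)):
        points = textline.find("pc:Coords", PAGE_NS).get("points")
        # points is a space-separated list of "x,y" pairs
        pts = np.array([[int(v) for v in pair.split(",")] for pair in points.split()],
                       dtype=np.int32)
        x, y, w, h = cv2.boundingRect(pts)
        cv2.imwrite(f"line_{i:03d}.png", img[y:y + h, x:x + w])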

I then popped a function together to create a simple markdown file containing each cropped line and any transcript already added to it:
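(The actual function is in the gist, but the shape of it is something like this, with the image/transcript pairs standing in for whatever has been pulled down from the API:)

    # Write cropped line images and any existing transcripts into a simple
    # markdown doc: one image per line, transcript (if any) underneath.
    lines = [("line_000.png", "a first transcribed line"),
             ("line_001.png", "")]  # placeholder (image, transcript) pairs

    with open("transcribe.md", "w") as f:
        for img_file, transcript in lines:
            f.write(f"![]({img_file})\n\n{transcript}\n\n")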

My thinking here is that I could use Jupytext to open the markdown document in a notebook interface and add further transcription text to a markdown doc / notebook containing the separate text lines. There’s a Python API call for pushing stuff back to the server, so I’m hoping I should be able to come up with a simple script to transform the markdown, or perhaps even the notebook ipynb/JSON derived from it using Jupytext, to the required XML format and push it back to the Transkribus server.

(You can see where I’m going here, perhaps? A simple notebook UI as an alternative to the more complex Transkribus UI.)

The next step, though, is to see if I can get the Transkribus service to find the text lines on a new page in a document already uploaded to the service and then pull the corresponding XML down; then see if I can upload a document to the service. (I also need to have a go at creating a document collection.) Then I’ll be able to think a bit more about generating the XML I need to push a new, or updated, transcript back to the Transkribus service.

I should probably also try getting a config together to run this in MyBinder, and work on a reproducible demo (the sketch uses a document I’ve uploaded and partially transcribed, and I’m not sure how to go about sharing it, if indeed I can?)

Sketches Around The National Archives

I had intended to spend strike week giving my hands a rest, reading rather than keyboarding, but as it was, I spent today code-sketching around the National Archives, among other things.

In trying to track down original Home Office papers relating to the Yorkshire Luddites, I’ve been poking around the National Archives (as described in passing here). Over the last couple of years, I’ve grown weary of search interfaces, even Advanced Search ones, preferring to try to grab the data into my own database(s) where I can more easily query and enrich it, as well as join it with other data sources.

I had assumed the National Archives search index was a bit richer than it is (I put my lack of success in many of the searches I tried down to unfamiliarity with it) but it seems pretty thin – an index catalogue that records the existence of document collections but not what’s in them to any great level of detail.

But assuming there was rather more detail than I seem to have found, I did a few code sketches around it that demonstrate the following (a compressed sketch pulling several of these together appears after the list):

  • using mechanicalsoup to load a search page, set form selections, “click” a download button and capture the result;
  • using StringIO to load CSV data into a pandas dataframe;
  • using spacy to annotate a data frame with named entities;
  • exploding lists in a data-frame column to make a long dataframe therefrom;
  • expanding a column of tuples in a dataframe across several columns;
  • using Wand (a Python API for ImageMagick) to render pages from a PDF as images in a Jupyter notebook (Chrome is borked again, not rendering PDFs via a notebook IFrame).
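Here’s that compressed sketch; the search form handling is site-specific and elided, and the CSV text is a stand-in for the real download:

    # Pipeline sketch: form-driven download -> dataframe -> entities -> long format.
    import io

    import mechanicalsoup
    import pandas as pd
    import spacy

    # 1. Drive the search page with mechanicalsoup (form details elided)
    browser = mechanicalsoup.StatefulBrowser()
    browser.open("https://discovery.nationalarchives.gov.uk/")
    # browser.select_form(...); browser.submit_selected()  # set selections, "click" download

    # 2. Load CSV text into a pandas dataframe via StringIO
    csv_text = "id,description\n1,Letter from Joseph Radcliffe of Huddersfield to the Home Office\n"
    df = pd.read_csv(io.StringIO(csv_text))

    # 3. Annotate each description with named entities
    nlp = spacy.load("en_core_web_sm")
    df["entities"] = df["description"].apply(
        lambda text: [(ent.text, ent.label_) for ent in nlp(text).ents])

    # 4. Explode the list column into a long dataframe, one entity per row
    long_df = df.explode("entities").dropna(subset=["entities"]).reset_index(drop=True)

    # 5. Expand the (text, label) tuples across two columns
    long_df[["entity", "label"]] = pd.DataFrame(long_df["entities"].tolist(),
                                                index=long_df.index)

    print(long_df[["id", "entity", "label"]])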

Check the gist to see the code… (Bits of it should run in MyBinder too – just remember to select “Gist”! spacy isn’t installed at the moment — Gists seem to be a bit broken at the moment, the requirements.txt file is being mistreated, and I don’t want to risk breaking other bits as a side effect of trying to fix it. If Gists are more than temporarily borked, I will try to remember to add the code to this post explicitly.)