Finding the Path to a Jupyter Notebook Server Start Directory… Or maybe not…

For the notebook search engine I’ve been tinkering with, I want to be able to index notebooks rooted on the same directory path as the notebook server that the search engine is added to as a Jupyter server proxy extension. There doesn’t seem to be a reliably set or accessible environment variable containing this path, so how can we create one?

Here’s a recipe that I think may help: it uses the nbclient package to run a minimal notebook that just executes a simple, single %pwd command against the available Jupyter server.

import nbformat
from nbclient import NotebookClient

_nb =  '''{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%pwd"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}'''

nb = nbformat.reads(_nb, as_version=nbformat.NO_CONVERT)

client = NotebookClient(nb, timeout=600)
# Available parameters include:
# kernel_name='python3'
# resources={'metadata': {'path': 'notebooks/'}})

client.execute()

path = nb['cells'][0]['outputs'][0]['data']['text/plain'].strip("'").strip('"')

Or maybe it doesn’t? Maybe it actually just runs in the directory you run the script from, in which case it’s just a labyrinthine pwd… Hmmm…
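
Another possibility, which I haven’t fully tested, might be to ask the notebook machinery directly rather than running a notebook at all: the classic notebook package can list its running servers, along with the directory each one was started in. A minimal sketch, assuming a classic notebook server and that the notebook package is importable in the same environment:

    #A speculative alternative: query the running server(s) directly
    from notebook.notebookapp import list_running_servers

    for server in list_running_servers():
        #Each server info dict includes the directory the server was started in
        print(server['notebook_dir'], server['url'])

If several servers are running, some extra logic would be needed to pick out the right one.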

Fragment – Visualising Jupyter Notebook Structure

Over the weekend, I spent some time dabbling with generating various metrics over Jupyter notebooks (more about that in a later post…). One of the things I started looking at were tools for visualising notebook structure.

In the first instance, I wanted a simple tool to show the relative size of notebooks, as well as the size and placement of markdown and code cells within them.

The following is an example of a view over a simple notebook; the blue denotes a markdown cell, the pink a code cell, and the grey separates the cells. (The colour of the separator is controllable, as well as its size, which can be 0.)

When visualising multiple notebooks, we can also display the path to the notebook:

The code can be found in this gist.

The size of the cells in the diagram is determined as follows:

  • for markdown cells, the number of “screen lines” taken up by the markdown when presented on a screen with a specified screen character width;
        import textwrap
        LINE_WIDTH = 160
    
        def _count_screen_lines(txt, width=LINE_WIDTH):
            """Count the number of screen lines that an overflowing text line takes up."""
            ll = txt.split('\n')
            _ll = []
            for l in ll:
                #Model screen flow: split a line if it is more than `width` characters long
                _ll=_ll+textwrap.wrap(l, width)
            n_screen_lines = len(_ll)
            return n_screen_lines
    

  • for code cells, the number of lines of code; (long lines are counted over multiple lines as per markdown lines)

In parsing a notebook, we consider each cell in turn, capturing its cell type and screen line length and returning a cell_map as a list of (cell_size, cell_colour) tuples, where the colour encodes the cell type:

    import os
    import nbformat

    VIS_COLOUR_MAP = {'markdown':'cornflowerblue','code':'pink'}

    def _nb_vis_parse_nb(fn):
        """Parse a notebook and generate the nb_vis cell map for it."""

        cell_map = []

        _fn, fn_ext = os.path.splitext(fn)
        if fn_ext != '.ipynb' or not os.path.isfile(fn):
            return cell_map

        with open(fn, 'r') as f:
            nb = nbformat.reads(f.read(), as_version=4)

        for cell in nb.cells:
            cell_map.append((_count_screen_lines(cell['source']), VIS_COLOUR_MAP[cell['cell_type']]))

        return cell_map

The following function handles directory paths, walking the directory tree and generating a cell map for each notebook it finds:

    def _dir_walker(path, exclude = 'default'):
        """Profile all the notebooks in a specific directory and in any child directories."""

        if exclude == 'default':
            exclude_paths = ['.ipynb_checkpoints', '.git', '.ipynb', '__MACOSX']
        else:
            #If we set exclude, we need to pass it as a list
            exclude_paths = exclude
        nb_multidir_cell_map = {}
        for _path, dirs, files in os.walk(path):
            #Start walking...
            #If we're in a directory that is not excluded...
            if not set(exclude_paths).intersection(set(_path.split('/'))):
                #Profile that directory...
                for _f in files:
                    fn = os.path.join(_path, _f)
                    cell_map = _nb_vis_parse_nb(fn)
                    if cell_map:
                        nb_multidir_cell_map[fn] = cell_map

        return nb_multidir_cell_map

The following function is used to grab the notebook file(s) and generate the visualisation:

def nb_vis_parse_nb(path, img_file='', linewidth = 5, w=20, **kwargs):
    """Parse one or more notebooks on a path."""
     
    if os.path.isdir(path):
        cell_map = _dir_walker(path)
    else:
        cell_map = _nb_vis_parse_nb(path)
        
    nb_vis(cell_map, img_file, linewidth, w, **kwargs)
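
For example, a hypothetical call over a directory of notebooks (the path and image filename are purely illustrative):

    #Visualise all the notebooks under a notebooks/ directory and save the image to a file
    nb_vis_parse_nb('notebooks/', img_file='nb_structure.png', linewidth=10)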

So how is the visualisation generated?

A plotter function generates the plot from a cell_map:

    import matplotlib.pyplot as plt

    def plotter(cell_map, x, y, label='', header_gap=0.2,
                linewidth=5, gap=0, gap_colour='lightgrey'):
        """Plot visualisation of gross cell structure for a single notebook."""
        #The gap and styling settings are passed in explicitly by nb_vis()
        #rather than relied on as globals

        #Plot notebook path
        plt.text(y, x, label)
        x = x + header_gap

        for _cell_map in cell_map:

            #Add a coloured bar between cells
            if y > 0:
                if gap_colour:
                    plt.plot([y, y+gap], [x, x], gap_colour, linewidth=linewidth)

                y = y + gap

            _y = y + _cell_map[0] + 1 #Make tiny cells slightly bigger
            plt.plot([y, _y], [x, x], _cell_map[1], linewidth=linewidth)

            y = _y

The gap can be automatically calculated relative to the longest notebook we’re trying to visualise, which sets the visualisation limits:

    import math

    def get_gap(cell_map):
        """Automatically set the gap value based on overall length"""
        
        def get_overall_length(cell_map):
            """Get overall line length of a notebook."""
            overall_len = 0
            for i, (l, t) in enumerate(cell_map):
                #i is the number of cells, if that's useful too?
                overall_len = overall_len + l
            return overall_len

        max_overall_len = 0
        
        #If we are generating a plot for multiple notebooks, get the largest overall length
        if isinstance(cell_map,dict):
            for k in cell_map:
                _overall_len = get_overall_length(cell_map[k])
                max_overall_len = _overall_len if _overall_len > max_overall_len else max_overall_len
        else:
            max_overall_len = get_overall_length(cell_map)

        #Set the gap at 1% of the overall length
        return math.ceil(max_overall_len * 0.01)

The nb_vis() function takes the cell_map, either as a single cell map for a single notebook, or as a dict of cell maps for multiple notebooks, keyed by the notebook path:

def nb_vis(cell_map, img_file='', linewidth=5, w=20, gap=None, gap_boost=1, gap_colour='lightgrey'):
    """Visualise notebook gross cell structure."""

    x = 0
    y = 0

    #If we have a single cell_map for a single notebook
    if isinstance(cell_map, list):
        gap = gap if gap is not None else get_gap(cell_map) * gap_boost
        fig, ax = plt.subplots(figsize=(w, 1))
        plotter(cell_map, x, y, linewidth=linewidth, gap=gap, gap_colour=gap_colour)
    #If we are plotting cell_maps for multiple notebooks
    elif isinstance(cell_map, dict):
        gap = gap if gap is not None else get_gap(cell_map) * gap_boost
        fig, ax = plt.subplots(figsize=(w, len(cell_map)))
        for k in cell_map:
            plotter(cell_map[k], x, y, k, linewidth=linewidth, gap=gap, gap_colour=gap_colour)
            x = x + 1
    else:
        print('wtf')
        return

    ax.axis('off')
    plt.gca().invert_yaxis()

    if img_file:
        plt.savefig(img_file)

The function will render the plot in a Jupyter notebook, or can be called to save the visualisation to a file.

This was just done as a quick proof of concept, so comments welcome.

On the to do list is to create a simple CLI (command line interface) for it, as well as explore additional customisation support (eg allow the color types to be specified). I also need to account for other cell types. An optional legend explaining the colour map would also make sense.

On the longer to do list is a visualiser that supports within cell visualisation. For example, headers, paragraphs and code blocks in markdown cells; comment lines, empty lines, code lines, magic lines / blocks, shell command lines in code cells.

In OU notebooks, being able to identify areas associated with activities would also be useful.

Supporting the level of detail required in the visualisation may be tricky, particularly in long notebooks. A vertical, multi-column format is probably best, showing, for example, an approximate “screen’s worth” of content in a column, with the next “scroll” down displayed in the next column along.

Something else I can imagine is a simple service that would let you pass a link to an online notebook and get a visualisation back, or a link to a Github repo that would give you a visualisation back of all the notebooks in the repo. This would let you embed a link to the visualisation, for example, in the repo README. On the server side, I guess this means something that could clone a repo, generate the visualisation and return the image. To keep the workload down, the service would presumably keep a hash of the repo and the notebooks within the repo, and if any of those had changed, regenerate the image, else re-use a cached one. (It might also make sense to cache images at a notebook level to save having to reparse all the notebooks in a repo where only a single notebook has changed, and then merge those into a single output image?)

PS this has also got me thinking about simple visualisers over XML materials too… I do have an OU-XML to ipynb route (as well as OU-XML2md2html, for example), but a lot of the meaningful structure from the OU-XML would get lost on a trivial treatment (eg activity specifications, multimedia use, etc). I wonder if it’d make more sense to create an XSLT to generate a summary XML document and then visualise from that? Or create Jupytext md with lots of tags (eg tagging markdown cells as activities etc) that could be easily parsed out in a report? Hmmm… now that may make a lot more sense…

Exploring Jupytext – Creating Simple Python Modules Via a Notebook UI

Although I spend a lot of my coding time in Jupyter notebooks, there are several practical problems associated with working in that environment.

One problem is that under version control, it can be hard to tell what’s changed. For one thing, the notebook .ipynb format, which saves as a serialised JSON object, is hard to read cleanly:

The .ipynb format also records changes to cell execution state, including cell execution count numbers and changes to cell outputs (which may take the form of large encoded strings when a cell output is an image, or chart, for example):

Another issue arises when trying to write modules in a notebook that can be loaded into other notebooks.

One workaround for this is to use the notebook loading hack described in the official docs: Importing notebooks. This requires loading in a notebook loading module that then allows you to import other modules. Once the notebook loader module is installed, you can run things like:

  • `import mycode as mc` to load code in from `mycode.ipynb`
  • `moc = __import__("My Other Code")` to load code in from `My Other Code.ipynb`

If you want to include code that can run in the notebook, but that is not executed when the notebook is loaded as a module, you can guard items in the notebook:

In this case, the if __name__=='__main__': guard will run the code in the code cell when run in the notebook UI, but will not run it when the notebook is loaded as a module.
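
For example, a minimal sketch of what a guarded cell might look like (the function name and values are illustrative):

    def useful_function(x):
        """A function we want to be available when the notebook is imported as a module."""
        return x * 2

    if __name__ == '__main__':
        #Test code: runs when the notebook is executed in the UI,
        #but not when the notebook is loaded as a module
        print(useful_function(21))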

Guarding code can get very messy very quickly, so is there an easier way?

And is there an easier way of using notebooks more generally as an environment for creating code+documentation files that better meet the needs of a variety of users? For example, I note this quote from Daniele Procida recently shared by Simon Willison:

Documentation needs to include and be structured around its four different functions: tutorials, how-to guides, explanation and technical reference. Each of them requires a distinct mode of writing. People working with software need these four different kinds of documentation at different times, in different circumstances—so software usually needs them all.

This suggests a range of different documentation styles for different purposes, although I wonder if that is strictly necessary?

When I am hacking code together, I find that I start out by writing things a line at a time, checking the output for each line, then grouping lines in a single cell and checking the output, then wrapping things in a function (for example of this in practice, see Programming in Jupyter Notebooks, via the Heavy Metal Umlaut). I also try to write markdown notes that set up what I intend to do (and why) in the following code cells. This means my development notebooks tell a story (of a sort) of the development of the functions that hopefully do what I actually want them to by the end of the notebook.

If truth be told, the notebooks often end up as an unholy mess, particularly if they are full of guard statements that try to separate out development and testing code from useful code blocks that I might want to import elsewhere.

Although I’ve been watching it for months, I’ve only started exploring how to use Jupytext in practice quite recently, and already it’s starting to change how I use notebooks.

If you install jupytext, you will find that if you click on a link to a markdown (.md) or Python (.py) file, or a whole range of other text document types (.py, .R, .r, .Rmd, .jl, .cpp, .ss, .clj, .scm, .sh, .q, .m, .pro, .js, .ts, .scala), you will open the file in a notebook environment.

You can also open the file as a .py file, from the notebook listing menu by selecting the notebook:

and then using the Edit button to open it:

at which point you are presented with the “normal” text file editor:

One thing to note about the notebook editor view over the text document is that you can also include markdown cells, as you might in any other notebook, and run code cells to preview their output inline within the notebook view.

However, whilst the markdown code will be saved into the Python file (as commented out code), the code outputs will not be saved into the Python file.

If you do want to be able to save notebook views with any associated code output, you can configure Jupytext to “pair” .py and .ipynb files (and other combinations, such as .py, .ipynb and .md files) such that when you save an open .py or .ipynb file from the notebook editing environment, a “paired” .ipynb or .py version of the file is also saved at the same time.

This means I could click to open my .py file in the notebook UI, run it, then when I save it, a “simple” .py file containing just code and commented out markdown is saved along with a notebook .ipynb file that also contains the code cell outputs.

You can configure Jupytext so that the pairing only works in particular directories. I’ve started trying to explore various settings in the branches of this repo: ouseful-template-repos/jupytext-md. You can also convert files on the command line; for example, jupytext --to py Required\ Pace.ipynb will convert a notebook file to a Python file.

The ability to edit Python / .py files, or code containing markdown / .md files in a notebook UI, is really handy, but there’s more…

Remember the guards?

If I tag a code cell using the notebook UI (from the notebook View menu, select Cell Toolbar and then Tags), I can give the cell a tag of the form active-ipynb:

See the Jupytext docs: importing Jupyter notebooks as modules for more…

The tags are saved as metadata in all document types. For example, in an .md version of the notebook, the metadata is passed in an attribute-value pair when defining the language type of a code block:

In a .py version of the notebook, however, the tagged code cell is not rendered as a code cell, it is commented out:

What this means is that I can tag cells in the notebook editor to include them — or not — as executable code in particular document types.

For example, if I pair .ipynb and .py files, whenever I edit either an .ipynb or .py file in the notebook UI, it also gets saved as the paired document type. Within the notebook UI, I can execute all the code cells, but through using tagged cells, I can define some cells as executable in one saved document type (.ipynb for example) but not in another (a .py file, perhaps).

What that in turn means is that when I am hacking around with the document in the notebook UI I can create documents that include all manner of scraggy developmental test code, but only save certain cells as executable code into the associated .py module file.

The module workflow is now:

  • install Jupytext;
  • edit Python files in a notebook environment;
  • run all cells when running in the notebook UI;
  • mark development code as active-ipynb, which is to say, it is *not active* in a .py file;
  • load the .py file in as a module into other modules or notebooks, leaving out the commented-out development code; if I use `%load_ext autoreload` and `%autoreload 2` magic in the document that’s loading the modules, it will [automatically reload them](https://stackoverflow.com/a/5399339/454773) when I call functions imported from them if I’ve made changes to the associated module file (see the sketch after this list);
  • optionally pair the .py file with an .ipynb file, in which case the .ipynb file will be saved: a) with *all* cells run; b) include cell outputs.
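
As a sketch of that autoreload step in the consuming notebook (the module and function names are hypothetical):

    #In the notebook that imports the Jupytext generated module
    %load_ext autoreload
    %autoreload 2

    import my_module  #hypothetical module saved as my_module.py via Jupytext pairing
    my_module.some_function()  #calls pick up edits made to my_module.py without a kernel restart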

Referring back to Daniele Procida’s insights about documentation, this ability to have code in a single document (for example, a .py file) that is executable in one environment (the notebook editing / development environment, for example) but not another (when loaded as a .py module) means we can start to write richer source code files.

I also wonder if this provides us with a way of bundling test code as part of the code development narrative? (I don’t use tests so don’t really know how the workflow goes…)

More general is the insight that we can use Jupytext to automatically generate distinct versions of a document from a single source document. The generated documents:

  • can include code outputs;
  • can *exclude* code outputs;
  • can have tagged code commented out in some document formats and not others.

I’m not sure if we can also use it in combination with other notebook extensions to hide particular cells, for example, when viewing documents in the notebook editor or generating export document formats from an executed notebook form of it. A good example to try out might be the hide_code extension, which provides a range of toolbar options that can be used to customise the display of a document in the notebook editor or HTML / PDF documents generated from it.

It could also be useful to have a very simple extension that lets you click a toolbar button to set an active- state tag and style or highlight that cell in the notebook UI to mark it out as having limited execution status. A simple fork of, or extension to, the freeze extension would probably do that. (I note that Jupytext responds to the “frozen” freeze setting but that presumably locks out executing the cell in the notebook UI too?)

PS a few weeks ago, Jupytext creator Marc Wouts posted this handy recipe for *rewriting* notebook commits made to a git branch against markdown formatted documents rather than the original ipynb change commits: git filter-branch --tree-filter 'jupytext --to md */*.ipynb && rm -f */*.ipynb' HEAD This means that if you have a legacy project with commits made to notebook files, you can rewrite it as a series of changes made to markdown or Python document versions of the notebooks…

Styling Python and SQL Code in Jupyter Notebooks

One of the magics we use in the TM351 Jupyter notebooks is the ipython-sql magic that lets you create a connection to a database server (in our case, a PostgreSQL database) and then run queries on it:

Whilst we try to use consistent code styling across the notebooks, such as capitalisation of SQL reserved words (SELECT, FROM, WHERE etc), sometimes inconsistencies can crop in. (The same is true when formatting Python code.)

One of the notebook extensions can help in this respect: code prettifier. This extension allows you to style one or all code cells in a notebook using a templated recipe:

The following snippet applies to Python cells and will apply the yapf Python code formatter to Python code cells by default, or the sqlparse SQL code formatter if the cell starts with a %%sql block magic. (It needs a bit more work to cope with %sql line magic, in which case the Python formatter needs to be applied first and then the SQL formatter applied from the start of the %sql line magic to the end of the line.)

#"library":'''
import json
def code_reformat(cell_text):
    import yapf.yapflib.yapf_api
    import sqlparse
    import re
    comment = '--%--' if cell_text.startswith('%%sql') else '#%#'
    cell_text = re.sub('^%', comment, cell_text, flags=re.M)
    reformated_text = yapf.yapflib.yapf_api.FormatCode(cell_text)[0] if comment=='#%#' else sqlparse.format(cell_text, keyword_case='upper')
    return re.sub('^{}'.format(comment), '%', reformated_text, flags=re.M)
#''',
#"prefix":"print(json.dumps(code_reformat(u",
#"postfix": ")))"

Or as a string:

"python": {\n"library": "import json\ndef code_reformat(cell_text):\n import yapf.yapflib.yapf_api\n import sqlparse\n import re\n comment = '--%--' if cell_text.startswith('%%sql') else '#%#'\n cell_text = re.sub('^%', comment, cell_text, flags=re.M)\n reformated_text = yapf.yapflib.yapf_api.FormatCode(cell_text)[0] if comment=='#%#' else sqlparse.format(cell_text, keyword_case='upper')\n return re.sub('^{}'.format(comment), '%', reformated_text, flags=re.M)",
"prefix": "print(json.dumps(code_reformat(u",
"postfix": ")))"\n}

On my to do list is to find a way of running the code prettifier over notebooks from the command line using nbconvert. If you have a snippet that shows how to do that, please share via the comments:-)

PS clunky, but this sort of handles the line magic?

import json

def code_reformat(cell_text):
    import yapf.yapflib.yapf_api
    import sqlparse
    import re

    def sqlmatch(match):
        return '%sql'+sqlparse.format(match.group(1), keyword_case='upper')

    comment = '--%--' if cell_text.startswith('%%sql') else '#%#'
    cell_text = re.sub('^%', comment, cell_text, flags=re.M)
    reformatted_text = yapf.yapflib.yapf_api.FormatCode(cell_text)[0] if comment=='#%#' else sqlparse.format(cell_text, keyword_case='upper', identifier_case='upper')
    reformatted_text = re.sub('^{}'.format(comment), '%', reformatted_text, flags=re.M)
    if not cell_text.startswith('%%sql'):
        #Note: flags must be passed by keyword; the fourth positional argument to re.sub is count
        reformatted_text = re.sub('%sql(.*)', sqlmatch, reformatted_text, flags=re.MULTILINE)
    return reformatted_text

NB the sqlparse function doesn’t seem to handle functions (eg count(*)) [bug?], but this horrible workaround hack to substitute for sqlparse.format() may provide a stopgap?

#replace sqlparse.format with sqlhackformat() defined as follows:
import re
import sqlparse

def sqlhackformat(sql):
    #Escape the brackets, parse, then unescape the brackets
    return re.sub(r'\\(.)', r'\1', sqlparse.format(re.escape(sql), keyword_case='upper', identifier_case='upper'))

If there is no semi-colon at the end of the final statement, we could handle any extra white space and add one with something like: s  = '{};'.format(s.strip()) if not s.strip().endswith(';') else s.strip().

Fragment – Jupyter For Edu

With more and more core components, as well as user contributions, being added to the Jupyter framework, I’m starting to lose track of what’s possible. One of the things that might be useful for the OU, and Institute of Coding, context is to explore various architectural patterns that can be constructed in a Jupyter mediated environment and that are particularly useful for education.

In advance of getting a Github repo / wiki together to start that, here are a few fragments from my feeds, several of which have appeared in just the last couple of days:

Jupyter Enterprise Gateway Now a Top Level Jupyter Project

Via the Jupyter blog, I see the Jupyter Enterprise Gateway is now a top-level Jupyter project.

The Jupyter Enterprise Gateway “enables Jupyter Notebook to launch remote kernels in a distributed cluster”, which provides a handy separation between a notebook server (or Jupyterhub multi-user notebook server) and the kernel that a notebook runs against. For example, Jupyter Enterprise Gateway can be used to create kernels in a scalable way using Kubernetes, or (I’m guessing…?) to do things like launch remote kernels running on a GPU cluster. From the docs it looks like Jupyter Enterprise Gateway should work in a Jupyterhub context, although I can’t offhand find a simple howto / recipe for how to do that. (Presumably, Jupyterhub creates and launches user specific notebook server containers, and these then create and connect to arbitrary kernel running back-ends via the Jupyter Enterprise Gateway? Here’s a related issue I found.)

Running Notebook Cells One at a Time in a Terminal

The ever productive Doug Blank has a recipe for stepping through notebook cells in a terminal [code: nbplayer]. The player launches an IPython terminal that displays the first cell in the notebook and lets you step through them (executing or skipping the cell) one at a time. You can also run your own commands in between stepping through the notebook cells.

I can imagine using this to create a fixed set of steps for an activity that I want a student to work through, whilst giving them “free time” to explore the state of the current execution environment, for example, or try out particular “given” functions with different parameters. This approach also provides a workaround for using notebook authored exercises in the terminal environment, which I know some colleagues favour over the notebook environment.

On my to do list is to recast some of the activities from the new TM112 course to see how they feel using this execution model, and then compare that to the original activity and the activity run using the same notebook in a notebook environment.

Adding Multiple Student Users to a Jupyterhub Environment

Also via Doug Blank, a recipe for adding multiple users to a Jupyterhub environment using a form that allows you to simply add a list of user names: a more flexible way of adding accounts to Jupyterhub. User account details and random passwords are created automatically and then emailed to students.

To allow users to change passwords, e.g. on first run, I think the NotebookApp.allow_password_change=True notebook server parameter (Jupyter notebook – Config file and command line options) allows that?
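
If so, that would just be a one liner in the notebook server config, something like this (untested in a Jupyterhub context):

    #In jupyter_notebook_config.py
    c.NotebookApp.allow_password_change = True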

The repo also shows a way of bundling nbviewer to allow users to “publish” HTML versions of their notebooks.

Doug also points to yuvipanda/jupyterhub-firstuseauthenticator, a first use authenticator for Jupyterhub that allows new users to create an account and then set a password on it. This could be really handy for workshops, where you want to allow users to self-serve an environment that persists over a couple of workshop sessions, for example. (One thing we still need to do in the OU is get a Jupyterhub server up and running with persistent user storage; for TM112, we ran a temporary notebook server, which meant students couldn’t save and return to notebooks on the server – they’d have to download notebooks and then re-upload them into a new session if they wanted to return to working on a notebook they had modified. That said, the activity was designed as a “disposable” activity…)

Zip All Notebooks

This handy extension — nbzip — provides a button to zip and download a Jupyter notebook server folder. If you’re working on a temporary notebook server, this provides an easy way of grabbing all the notebooks in one go. What might be even nicer would be to select a sub-folder, or selected set of files, using checkbox selectors? I’m not sure if there’s a complementary tool that will let you upload a zipped archive and unpack it in one go?

Jupyter Notebooks, Cognitive Tools and Philosophical Instruments

A placeholder post, as much as anything, to mark the AJET Call for Papers for a Special Issue on Re-Examining Cognitive Tools: New Developments, New Perspectives, and New Opportunities for Educational Technology Research as a foil for thinking about what Jupyter notebooks might be good for.

According to the EduTech Wiki, “[c]ognitive tools refer to learning with technology (as opposed to learning through technology)”, which doesn’t really make sense as a sentence and puts me off the idea of ed-tech academe straight away.

In the sense that cognitive tools support a learning process, I think they can do so in several ways. For example, in Programming in Jupyter Notebooks, via the Heavy Metal Umlaut I remarked on several different ways in which the same programme could be constructed within a notebook, each offering a different history and each representing a differently active approach to code creation and programming problem solving.

One of the things I try to do is reflect on my own practice, as I have been doing recently whilst trying to rework fragments of some OpenLearn materials as reproducible educational resources (which is to say, materials that generate their own resources and as such support reuse with modification more generally than many educational resources).

For example, consider the notebook at https://notebooks.azure.com/OUsefulInfo/libraries/gettingstarted/html/3.6.0%20Electronics.ipynb

You can also run the notebook interactively; sign in to Azure notebooks (if you’re OU staff, you can use your staff OUCU/OU password credentials) and clone my Getting Started library into your workspace. If notebooks are new to you, check out the 1.0 Using Jupyter Notebooks in Teaching and Learning - READ ME FIRST.ipynb notebook.

In creating the electronics notebook, I had to learn a chunk of stuff (the lcapy package is new to me and I had to get my head round circuitikz), but I found that trying to figure out how to make examples related to the course materials provided a really useful context for giving me things to try to do with the package. In that sense, the notebook was a cognitive tool (I guess) that supported my learning about lcapy.

For the https://notebooks.azure.com/OUsefulInfo/libraries/gettingstarted/html/1.05%20Simple%20Maths%20Equations%20and%20Notation.ipynb notebook, I had to start getting my head round sympy and on the way cobble together bits and pieces of code that might be useful when trying to produce maths related materials in a reproducible way. (For example, creating equations in sympy that can then be rendered, manipulated and solved throughout the materials in a way that’s appropriate for a set of educational (that is, teaching and/or learning) resources.)
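
As a flavour of the sort of thing I mean, here’s a minimal sympy sketch (the equation is just an illustrative example):

    import sympy as sym

    x = sym.symbols('x')
    eq = sym.Eq(x**2 - 5*x + 6, 0)  #define the equation once...

    sym.latex(eq)        #...render it as LaTeX for display in the teaching text
    sym.solveset(eq, x)  #...and solve the same object elsewhere in the materials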

Something else that came to mind is that the notebook medium as both an authoring medium and a delivery medium (we can use it just to create assets; or we can also use it deliver content to students) changes the sorts of things you might want to do in the teaching. For example, I had the opportunity to create self test functions, and there is the potential for interactives that let students explore the effect of changing component values in a circuit. (We could also plot responses over a range of variable values, but I haven’t demoed that yet.) In a sense, the interactive affordances of the medium encouraged me to think of opportunities to create philosophical instruments that allow authors – as well as students – to explore the phenomena being described by the materials. Although not a chemistry educator, putting together a reworking of some OpenLearn chemistry materials – https://notebooks.azure.com/OUsefulInfo/libraries/gettingstarted/html/3.1.2%20OpenLearn%20Chemistry%20Demos.ipynb – gave me some ideas about the different ways in which the materials could be worked up to support interactive / self-checking /constructive learning use. (That is, ways in which we could present the notebooks as interactive cognitive tools to support the learning process on the one hand, or as philosophical instruments that would allow the learner explore the subject matter in an investigative and experimental way.)

I like to think the way I’m using the Jupyter notebooks as part of an informal “reproducible-OER” exploration is in keeping with some of the promise of live authoring using the OU’s much vaunted, though still to be released, OpenCreate authoring environment (at least, as I understand the sorts of thing it is supposed to be able to support), with the advantage of being available now.

It’s important to recognise that Jupyter notebooks can be thought of as a medium that behaves in several ways. In the first case, it’s a rich authoring medium to work with – you can create things in it and for it, for example in the form of interactive widgets or reusable components such as IPython magics (for example, this interactive mapping magic: https://github.com/psychemedia/ipython_magic_folium ). Secondly, it’s a medium qua environment that can itself be extended and customised through the enabling and disabling of notebook extensions, such as ones that support WYSIWYG markdown editing, or hidden, frozen and read-only executable cells, which can be used to constrain the ways in which learners use some of the materials, perhaps as a counterpoint to getting them to engage more actively in editing other cells. Thirdly, it acts as a delivery medium, presenting content to readers who can engage with the content in an interactive way.

I’m not sure if there are any good checklists of what makes a “cognitive tool” or a “philosophical instrument”, but if there are it’d be interesting to try to check Jupyter notebooks off against them…

Importing Functions From DevTesting Jupyter Notebooks

One of the ways I use Jupyter notebooks is as sketchbooks in which some code cells are used to develop useful functions and others are used as “in-passing” develop’n’test cells that include code fragments on the way to becoming useful as part of a larger function.

Once a function has been developed, it can be a pain getting it into a form where I can use it in other notebooks. One way is to copy the function code into a separate python file that can be imported into another notebook, but if the function code needs updating, this means changing it in the python file and the documenting notebook, which can lead to differences arising between the two versions of the function.

Recipes such as Importing Jupyter Notebooks as Modules provide a means for importing the contents of a notebook as a module, but they do so by executing all code cells.

So how can we get round this, loading – and executing – just the “exportable” cells, such as the ones containing “finished” functions, and ignoring the cruft?

I was thinking it might be handy to define some code cell metadata (‘exportable’:boolean, perhaps), that I could set on a code cell to say whether that cell was exportable as a notebook-module function or just littering a notebook as a bit of development testing.

The notebook-as-module recipe would then test to see whether a notebook cell was not just a code cell, but an exportable code cell, before running it. The metadata could also hook into a custom template that could export the notebook as python with the code cells set to exportable:False commented out.

But this is overly complicated, and it hides the difference between exportable and extraneous code cells in the metadata field. Because, as Johannes Feist pointed out to me in the Jupyter Google group, we can actually use a feature of the import recipe machinery to mask out the content of certain code cells. As Johannes suggested:

what I have been doing for this case is the “standard” python approach, i.e., simply guard the part that shouldn’t run upon import with if __name__=='__main__': statements. When you execute a notebook interactively, __name__ is defined as '__main__', so the code will run, but when you import it with the hooks you mention, __name__ is set to the module name, and the code behind the if doesn’t run.

Johannes also comments that “Of course it makes the notebook look a bit more ugly, but it works well, allows to develop modules as notebooks with included tests, and has the advantage of being immediately visible/obvious (as opposed to metadata).”

In my own workflow, I often make use of the ability to display as code cell output whatever value is returned from the last item in a code cell. Guarding code with the if statement prevents the output of the last code item in the guarded block from being displayed. However, passing a variable to the display() function as the last line of the guarded block displays the output as before.
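
A minimal sketch of that pattern (the function and values are illustrative):

    from IPython.display import display

    def scratch_calc(n):
        """A development function whose output I want to eyeball in the notebook."""
        return [i**2 for i in range(n)]

    if __name__ == '__main__':
        #A bare final expression is not rendered inside the guard,
        #but passing the value to display() shows the output as usual
        display(scratch_calc(5))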


So now I have a handy workflow for writing sketch notebooks containing useful functions + cruft from which I can just load in the useful functions into another notebook. Thanks, Johannes :-)

PS see also this hint from Doug Blank about building up a class across several notebook code cells:

Cell 1:

class MyClass():
    def method1(self):
        print("method1")

Cell 2:

class MyClass(MyClass):
    def method2(self):
        print("method2")

Cell 3:

instance = MyClass()
instance.method1()
instance.method2()

(WordPress really struggles with: a) markdown; b) code; c) markdown and code.)

See also: https://github.com/ipython/ipynb

PPS this also looks related but I haven’t tried it yet: https://github.com/deathbeds/importnb

Programming, meh… Let’s Teach How to Write Computational Essays Instead

From Stephen Wolfram, a nice phrase to describe the sorts of thing you can create using tools like Jupyter notebooks, Rmd and Mathematica notebooks: computational essays. It complements the “computational narrative” phrase that is also used to describe such documents.

Wolfram’s recent blog post What Is a Computational Essay?, part essay, part computational essay, is primarily a pitch for using Mathematica notebooks and the Wolfram Language. (The Wolfram Language provides computational support plus access to a “fact engine” database that can be used to pull factual information into the coding environment.)

But it also describes nicely some of the generic features of other “generative document” media (Jupyter notebooks, Rmd/knitr) and how to start using them.

There are basically three kinds of things [in a computational essay]. First, ordinary text (here in English). Second, computer input. And third, computer output. And the crucial point is that these three kinds of things all work together to express what’s being communicated.

In Mathematica, the view is something like this:


In Jupyter notebooks:

In its raw form, an RStudio Rmd document source looks something like this:

A computational essay is in effect an intellectual story told through a collaboration between a human author and a computer. …

The ordinary text gives context and motivation. The computer input gives a precise specification of what’s being talked about. And then the computer output delivers facts and results, often in graphical form. It’s a powerful form of exposition that combines computational thinking on the part of the human author with computational knowledge and computational processing from the computer.

When we originally drafted the OU/FutureLearn course Learn to Code for Data Analysis (also available on OpenLearn), we wrote the explanatory text – delivered as HTML but including static code fragments and code outputs – as a notebook, and then “ran” the notebook to generate static HTML (or markdown) that provided the static course content. These notebooks were complemented by actual notebooks that students could work with interactively themselves.

(Actually, we prototyped authoring both the static text, and the elements to be used in the student notebooks, in a single document, from which the static HTML and “live” notebook documents could be generated: Authoring Multiple Docs from a Single IPython Notebook. )

Whilst the notion of the computational essay as a form is really powerful, I think the added distinction between generative and generated documents is also useful. For example, a raw Rmd document or Jupyter notebook is a generative document that can be used to create a document containing text, code, and the output generated from executing the code. A generated document is an HTML, Word, or PDF export from an executed generative document.

Note that the generating code can be omitted from the generated output document, leaving just the text and code generated outputs. Code cells can also be collapsed so the code itself is hidden from view but still available for inspection at any time:

Notebooks also allow “reverse closing” of cells—allowing an output cell to be immediately visible, even though the input cell that generated it is initially closed. This kind of hiding of code should generally be avoided in the body of a computational essay, but it’s sometimes useful at the beginning or end of an essay, either to give an indication of what’s coming, or to include something more advanced where you don’t want to go through in detail how it’s made.

Even if notebooks are not used interactively, they can be used to create correct static texts where outputs that are supposed to relate to some fragment of code in the main text actually do so because they are created by the code, rather than being cut and pasted from some other environment.

However, making the generative – as well as generated – documents available means readers can learn by doing, as well as reading:

One feature of the Wolfram Language is that—like with human languages—it’s typically easier to read than to write. And that means that a good way for people to learn what they need to be able to write computational essays is for them first to read a bunch of essays. Perhaps then they can start to modify those essays. Or they can start creating “notes essays”, based on code generated in livecoding or other classroom sessions.

In terms of our own learnings to date about how to use notebooks most effectively as part of a teaching communication (i.e. as learning materials), Wolfram seems to have come to many similar conclusions. For example, try to limit the amount of code in any particular code cell:

In a typical computational essay, each piece of input will usually be quite short (often not more than a line or two). But the point is that such input can communicate a high-level computational thought, in a form that can readily be understood both by the computer and by a human reading the essay.

...

So what can go wrong? Well, like English prose, [it] can be unnecessarily complicated, and hard to understand. In a good computational essay, both the ordinary text, and the code, should be as simple and clean as possible. I try to enforce this for myself by saying that each piece of input should be at most one or perhaps two lines long—and that the caption for the input should always be just one line long. If I’m trying to do something where the core of it (perhaps excluding things like display options) takes more than a line of code, then I break it up, explaining each line separately.

It can also be useful to "preview" the output of a particular operation that populates a variable for use in the following expression to help the reader understand what sort of thing that expression is evaluating:

Another important principle as far as I’m concerned is: be explicit. Don’t have some variable that, say, implicitly stores a list of words. Actually show at least part of the list, so people can explicitly see what it’s like.

In many respects, the computational narrative format forces you to construct an argument in a particular way: if a piece of code operates on a particular thing, you need to access, or create, the thing before you can operate on it.

[A]nother thing that helps is that the nature of a computational essay is that it must have a “computational narrative”—a sequence of pieces of code that the computer can execute to do what’s being discussed in the essay. And while one might be able to write an ordinary essay that doesn’t make much sense but still sounds good, one can’t ultimately do something like that in a computational essay. Because in the end the code is the code, and actually has to run and do things.

One of the arguments I've been trying to develop in an attempt to persuade some of my colleagues to consider the use of notebooks to support teaching is the notebook nature of them. Several years ago, one of the en vogue ideas being pushed in our learning design discussions was to try to find ways of supporting and encouraging the use of "learning diaries", where students could reflect on their learning, recording not only things they'd learned but also ways they'd come to learn them. Slightly later, portfolio style assessment became "a thing" to consider.

Wolfram notes something similar from way back when...

The idea of students producing computational essays is something new for modern times, made possible by a whole stack of current technology. But there’s a curious resonance with something from the distant past. You see, if you’d learned a subject like math in the US a couple of hundred years ago, a big thing you’d have done is to create a so-called ciphering book—in which over the course of several years you carefully wrote out the solutions to a range of problems, mixing explanations with calculations. And the idea then was that you kept your ciphering book for the rest of your life, referring to it whenever you needed to solve problems like the ones it included.

Well, now, with computational essays you can do very much the same thing. The problems you can address are vastly more sophisticated and wide-ranging than you could reach with hand calculation. But like with ciphering books, you can write computational essays so they’ll be useful to you in the future—though now you won’t have to imitate calculations by hand; instead you’ll just edit your computational essay notebook and immediately rerun the Wolfram Language inputs in it.

One of the advantages that notebooks have over some other environments in which students learn to code is that structure of the notebook can encourage you to develop a solution to a problem whilst retaining your earlier working.

The earlier working is where you can engage in the minutiae of trying to figure out how to apply particular programming concepts, creating small, playful, test examples of the sort of thing you need to use in the task you have actually been set. (I think of this as a "trial driven" software approach rather than a "test driven" one; in a trial, you play with a bit of code in the margins to check that it does the sort of thing you want, or expect, it to do before using it in the main flow of a coding task.)

One of the advantages for students using notebooks is that they can doodle with code fragments to try things out, and keep a record of the history of their own learning, as well as producing working bits of code that might be used for formative or summative assessment, for example.

Another advantage of creating notebooks, which may include recorded fragments of dead ends from trying to solve a particular problem, is that you can refer back to them. And reuse what you learned, or discovered how to do, in them.

And this is one of the great general features of computational essays. When students write them, they’re in effect creating a custom library of computational tools for themselves—that they’ll be in a position to immediately use at any time in the future. It’s far too common for students to write notes in a class, then never refer to them again. Yes, they might run across some situation where the notes would be helpful. But it’s often hard to motivate going back and reading the notes—not least because that’s only the beginning; there’s still the matter of implementing whatever’s in the notes.

Looking at many of the notebooks students have created from scratch to support assessment activities in TM351, it's evident that many of them are not using them other than as an interactive code editor with history. The documents contain code cells and outputs, with little if any commentary (what comments there are are often just simple inline code comments in a code cell). They are barely computational narratives, let alone computational essays; they're more of a computational scratchpad containing small code fragments, without context.

This possibly reflects the prior history in terms of code education that students have received, working "out of context" in an interactive Python command line editor, or a traditional IDE, where the idea is to produce standalone files containing complete programmes or applications. Not pieces of code, written a line at a time, in a narrative form, with example output to show the development of a computational argument.

(One argument I've heard made against notebooks is that they aren't appropriate as an environment for writing "real programmes" or "applications". But that's not strictly true: Jupyter notebooks can be used to define and run microservices/APIs as well as GUI driven applications.)

However, if you start to see computational narratives as a form of narrative documentation that can be used to support a form of literate programming, then once again the notebook format can come in to its own, and draw on styling more common in a text document editor than a programming environment.

(By default, Jupyter notebooks expect you to write text content in markdown or markdown+HTML, but WYSIWYG editors can be added as an extension.)

Use the structured nature of notebooks. Break up computational essays with section headings, again helping to make them easy to skim. I follow the style of having a “caption line” before each input. Don’t worry if this somewhat repeats what a paragraph of text has said; consider the caption something that someone who’s just “looking at the pictures” might read to understand what a picture is of, before they actually dive into the full textual narrative.

As well as allowing you to create documents in which the content is generated interactively - code cells can be changed and re-run, for example - it is also possible to embed interactive components in both generative and generated documents.

On the one hand, it's quite possible to generate and embed an interactive map or interactive chart that supports popups or zooming in a generated HTML output document.

On the other, Mathematica and Jupyter both support the dynamic creation of interactive widget controls in generative documents that give you control over code elements in the document, such as sliders to change numerical parameters or list boxes to select categorical text items. (In the R world, there is support for embedded shiny apps in Rmd documents.)
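
In Jupyter notebooks, that typically means something like an ipywidgets interact wrapper; a minimal sketch (the function and parameter ranges are illustrative):

    from ipywidgets import interact

    def show_power(base=2, exponent=3):
        """A trivial stand-in for a plot or model a reader might explore."""
        print('{} ** {} = {}'.format(base, exponent, base ** exponent))

    #Sliders are generated automatically from the numeric ranges
    interact(show_power, base=(1, 10), exponent=(0, 5))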

These can be useful when creating narratives that encourage exploration (for example, in the sense of explorable explanations), though I seem to recall Michael Blastland expressing concern several years ago about how ineffective interactives could be in data journalism stories.

The technology of Wolfram Notebooks makes it straightforward to put in interactive elements, like Manipulate, [interact/interactive in Jupyter notebooks] into computational essays. And sometimes this is very helpful, and perhaps even essential. But interactive elements shouldn’t be overused. Because whenever there’s an element that requires interaction, this reduces the ability to skim the essay.

I've also thought previously that interactive functions are a useful way of motivating the use of functions in general when teaching introductory programming. For example, An Alternative Way of Motivating the Use of Functions?.

One of the issues in trying to set up student notebooks is how to handle boilerplate code that is required before the student can create, or run, the code you actually want them to explore. In TM351, we preload notebooks with various packages and bits of magic; in my own tinkerings, I'm starting to try to package stuff up so that it can be imported into a notebook in a single line.
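
For example, a hypothetical nb_setup.py module might bundle the boilerplate so that a notebook only needs a single import line (all the names here are illustrative):

    #nb_setup.py — a hypothetical boilerplate module for a set of course notebooks
    import pandas as pd
    import matplotlib.pyplot as plt

    def setup():
        """Apply common notebook settings in one call."""
        pd.set_option('display.max_columns', 50)
        plt.rcParams['figure.figsize'] = (12, 6)

A notebook would then just start with from nb_setup import setup; setup().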

Sometimes there’s a fair amount of data—or code—that’s needed to set up a particular computational essay. The cloud is very useful for handling this. Just deploy the data (or code) to the Wolfram Cloud, and set appropriate permissions so it can automatically be read whenever the code in your essay is executed.

As far as opportunities for making increasing use of notebooks as a kind of technology goes, I came to a similar conclusion some time ago to Stephen Wolfram when he writes:

[I]t’s only very recently that I’ve realized just how central computational essays can be to both the way people learn, and the way they communicate facts and ideas. Professionals of the future will routinely deliver results and reports as computational essays. Educators will routinely explain concepts using computational essays. Students will routinely produce computational essays as homework for their classes.

Regarding his final conclusion, I'm a little bit more circumspect:

The modern world of the web has brought us a few new formats for communication—like blogs, and social media, and things like Wikipedia. But all of these still follow the basic concept of text + pictures that’s existed since the beginning of the age of literacy. With computational essays we finally have something new.

In many respects, HTML+Javascript pages have been capable of delivering, and actually delivering, computationally generated documents for some time. Whether computational notebooks offer some sort of step-change away from that, or actually represent a return to the original read/write imaginings of the web with portable and computed facts accessed using Linked Data?

Simple Text Analysis Using Python – Identifying Named Entities, Tagging, Fuzzy String Matching and Topic Modelling

Text processing is not really my thing, but here’s a round-up of some basic recipes that allow you to get started with some quick’n’dirty tricks for identifying named entities in a document, and tagging entities in documents.

In this post, I’ll briefly review some getting started code for:

  • performing simple entity extraction from a text; for example, when presented with a text document, label it with named entities (people, places, organisations); entity extraction is typically based on statistical models that rely on document features such as correct capitalisation of names to work correctly;
  • tagging documents that contain exact matches of specified terms: in this case, we have a list of specific text strings we are interested in (for example, names of people or companies) and we want to know if there are exact matches in the text document and where those matches occur in the document;
  • partial and fuzzy string matching of specified entities in a text: in this case, we may want to know whether something resembling a specified text string occurs in the document (for example, misspellings of a name); see the sketch after this list;
  • topic modelling: the identification, using statistical models, of “topic terms” that appear across a set of documents.
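
As a flavour of the fuzzy matching case, here’s a minimal sketch using the fuzzywuzzy package (my choice for illustration here; other fuzzy matching libraries would do just as well):

    #!pip3 install fuzzywuzzy
    from fuzzywuzzy import fuzz, process

    names = ['Nestlé', 'Unite', 'GMB']

    #Score a possible misspelling against a single known name...
    fuzz.ratio('Nestle', 'Nestlé')

    #...or find the best matches for it in a list of known names
    process.extract('Nestle', names, limit=2)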

You can find a gist containing a notebook that summarises the code here.

Simple named entity recognition

spaCy is a natural language processing library for Python that includes a basic model capable of recognising (ish!) names of people, places and organisations, as well as dates and financial amounts.

According to the spaCy entity recognition documentation, the built in model recognises the following types of entity:

  • PERSON People, including fictional.
  • NORP Nationalities or religious or political groups.
  • FACILITY Buildings, airports, highways, bridges, etc.
  • ORG Companies, agencies, institutions, etc.
  • GPE Countries, cities, states. (That is, Geo-Political Entities)
  • LOC Non-GPE locations, mountain ranges, bodies of water.
  • PRODUCT Objects, vehicles, foods, etc. (Not services.)
  • EVENT Named hurricanes, battles, wars, sports events, etc.
  • WORK_OF_ART Titles of books, songs, etc.
  • LANGUAGE Any named language.
  • LAW A legislation-related entity(?)

Quantities are also recognised:

  • DATE Absolute or relative dates or periods.
  • TIME Times smaller than a day.
  • PERCENT Percentage, including “%”.
  • MONEY Monetary values, including unit.
  • QUANTITY Measurements, as of weight or distance.
  • ORDINAL “first”, “second”, etc.
  • CARDINAL Numerals that do not fall under another type.

Custom models can also be trained, but this requires annotated training documents.
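As an aside, here's a minimal sketch of what such annotated training data can look like; note that this assumes the spaCy v2 style training format (character offset spans), rather than the older spacy.en API used in the recipe below:

#Each training example pairs a text with the character spans of its entities,
#given as (start_char, end_char, label) tuples; this is the spaCy v2 style format.
TRAIN_DATA = [
    ("Nestlé announced redundancies in York.",
     {"entities": [(0, 6, "ORG"), (33, 37, "GPE")]}),
    ("Boris Johnson spoke in the debate.",
     {"entities": [(0, 13, "PERSON")]}),
]
#Data in this form can then be passed to spaCy's training loop or training CLI.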

#!pip3 install spacy
from spacy.en import English #Note: spacy.en is the older spaCy 1.x API; more recent versions load models via spacy.load()
parser = English()
example='''
That this House notes the announcement of 300 redundancies at the Nestlé manufacturing factories in York, Fawdon, Halifax and Girvan and that production of the Blue Riband bar will be transferred to Poland; acknowledges in the first three months of 2017 Nestlé achieved £21 billion in sales, a 0.4 per cent increase over the same period in 2016; further notes 156 of these job losses will be in York, a city that in the last six months has seen 2,000 job losses announced and has become the most inequitable city outside of the South East, and a further 110 jobs from Fawdon, Newcastle; recognises the losses come within a month of triggering Article 50, and as negotiations with the EU on the UK leaving the EU and the UK's future with the EU are commencing; further recognises the cost of importing products, including sugar, cocoa and production machinery, has risen due to the weakness of the pound and the uncertainty over the UK's future relationship with the single market and customs union; and calls on the Government to intervene and work with hon. Members, trades unions GMB and Unite and the company to avert these job losses now and prevent further job losses across Nestlé.
'''
#Code "borrowed" from somewhere?!
def entities(example, show=False):
    if show: print(example)
    parsedEx = parser(example)

    print("-------------- entities only ---------------")
    # if you just want the entities and nothing else, you can access the parsed example's "ents" property like this:
    ents = list(parsedEx.ents)
    tags={}
    for entity in ents:
        #print(entity.label, entity.label_, ' '.join(t.orth_ for t in entity))
        term=' '.join(t.orth_ for t in entity)
        if term not in tags: #term is already a string, so test it directly
            tags[term]=[(entity.label, entity.label_)]
        else:
            tags[term].append((entity.label, entity.label_))
    print(tags)
entities(example)
-------------- entities only ---------------
{'House': [(380, 'ORG')], '300': [(393, 'CARDINAL')], 'Nestlé': [(380, 'ORG')], '\n York , Fawdon': [(381, 'GPE')], 'Halifax': [(381, 'GPE')], 'Girvan': [(381, 'GPE')], 'the Blue Riband': [(380, 'ORG')], 'Poland': [(381, 'GPE')], '\n': [(381, 'GPE'), (381, 'GPE')], 'the first three months of 2017': [(387, 'DATE')], '£ 21 billion': [(390, 'MONEY')], '0.4 per': [(390, 'MONEY')], 'the same period in 2016': [(387, 'DATE')], '156': [(393, 'CARDINAL')], 'York': [(381, 'GPE')], '\n the': [(381, 'GPE')], 'six': [(393, 'CARDINAL')], '2,000': [(393, 'CARDINAL')], 'the South East': [(382, 'LOC')], '110': [(393, 'CARDINAL')], 'Fawdon': [(381, 'GPE')], 'Newcastle': [(380, 'ORG')], 'a month of': [(387, 'DATE')], 'Article 50': [(21153, 'LAW')], 'EU': [(380, 'ORG')], 'UK': [(381, 'GPE')], 'GMB': [(380, 'ORG')], 'Unite': [(381, 'GPE')]}
q= "Bob Smith was in the Houses of Parliament the other day"
entities(q)
-------------- entities only ---------------
{'Bob Smith': [(377, 'PERSON')]}

Note that the way such models are trained typically means they rely on cues such as the correct capitalisation of named entities, so all-lowercase text can easily defeat them.

entities(q.lower())
-------------- entities only ---------------
{}

polyglot

polyglot provides a simplistic, and quite slow, tagger, supporting limited recognition of Locations (I-LOC), Organizations (I-ORG) and Persons (I-PER).

#!pip3 install polyglot

## On a Mac, you may also need:
#!brew install icu4c
#I found I also needed: pip3 install pyicu pycld2 morfessor
## On Linux:
#apt-get install libicu-dev
!polyglot download embeddings2.en ner2.en
[polyglot_data] Downloading package embeddings2.en to
[polyglot_data]     /Users/ajh59/polyglot_data...
[polyglot_data] Downloading package ner2.en to
[polyglot_data]     /Users/ajh59/polyglot_data...
from polyglot.text import Text

text = Text(example)
text.entities
[I-LOC(['York']),
 I-LOC(['Fawdon']),
 I-LOC(['Halifax']),
 I-LOC(['Girvan']),
 I-LOC(['Poland']),
 I-PER(['Nestlé']),
 I-LOC(['York']),
 I-LOC(['Fawdon']),
 I-LOC(['Newcastle']),
 I-ORG(['EU']),
 I-ORG(['EU']),
 I-ORG(['Government']),
 I-ORG(['GMB']),
 I-LOC(['Nestlé'])]
Text(q).entities
[I-PER(['Bob', 'Smith'])]

Partial Matching Specific Entities

Sometimes we may have a list of entities that we wish to match in a text. For example, suppose we have a list of MPs’ names, or a list of organisations, or subject terms identified in a thesaurus, and we want to tag a set of documents with those entities if an entity appears in the document.

To do this, we can search a text for strings that exactly match any of the specified terms or where any of the specified terms match part of a longer string in the text.

Naive implementations can take a significant time to find multiple strings within a text, but the Aho-Corasick algorithm will efficiently match a large set of key values within a particular text.
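For comparison, a naive approach (my own sketch, not part of the recipe that follows) might just rescan the text once per search term using str.find(); fine for a handful of terms, but the cost grows with the number of terms times the length of the text:

def naive_term_matches(text, terms):
    """Find every occurrence of each term by rescanning the text once per term."""
    matches = []
    for term in terms:
        start = text.find(term)
        while start != -1:
            matches.append((start, term))
            start = text.find(term, start + 1)
    return sorted(matches)

naive_term_matches('Boris Johnson went off to Europe to complain about the European Union',
                   ['Europe', 'European Union', 'Boris Johnson', 'Boris'])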

## The following recipe was hinted at via @pudo

#!pip3 install pyahocorasick
#https://github.com/alephdata/aleph/blob/master/aleph/analyze/corasick_entity.py

First, construct an automaton that identifies the terms you want to detect in the target text.

from ahocorasick import Automaton

A=Automaton()
A.add_word("Europe",('VOCAB','Europe'))
A.add_word("European Union",('VOCAB','European Union'))
A.add_word("Boris Johnson",('PERSON','Boris Johnson'))
A.add_word("Boris",('PERSON','Boris Johnson'))
A.add_word("Boris Johnson",('PERSON','Boris Johnson (LC)'))

A.make_automaton()
q2='Boris Johnson went off to Europe to complain about the European Union'
for item in A.iter(q2):
    print(item, q2[:item[0]+1])
(4, ('PERSON', 'Boris Johnson')) Boris
(12, ('PERSON', 'Boris Johnson')) Boris Johnson
(31, ('VOCAB', 'Europe')) Boris Johnson went off to Europe
(60, ('VOCAB', 'Europe')) Boris Johnson went off to Europe to complain about the Europe
(68, ('VOCAB', 'European Union')) Boris Johnson went off to Europe to complain about the European Union

Once again, case is important.

q2l = q2.lower()
for item in A.iter(q2l):
    print(item, q2l[:item[0]+1])
(12, ('PERSON', 'Boris Johnson (LC)')) boris johnson

We can tweak the automaton patterns to capture the length of each match term, so that we can annotate the text with the matches more exactly:

A=Automaton()
A.add_word("Europe",(('VOCAB', len("Europe")),'Europe'))
A.add_word("European Union",(('VOCAB', len("European Union")),'European Union'))
A.add_word("Boris Johnson",(('PERSON', len("Boris Johnson")),'Boris Johnson'))
A.add_word("Boris",(('PERSON', len("Boris")),'Boris Johnson'))

A.make_automaton()
for item in A.iter(q2):
    start=item[0]-item[1][0][1]+1
    end=item[0]+1
    print(item, '{}*{}*{}'.format(q2[start-3:start],q2[start:end],q2[end:end+3]))
(4, (('PERSON', 5), 'Boris Johnson')) *Boris* Jo
(12, (('PERSON', 13), 'Boris Johnson')) *Boris Johnson* we
(31, (('VOCAB', 6), 'Europe')) to *Europe* to
(60, (('VOCAB', 6), 'Europe')) he *Europe*an 
(68, (('VOCAB', 14), 'European Union')) he *European Union*

Fuzzy String Matching

Whilst the Aho-Corasick approach will return hits for strings in the text that partially match the exact-match key terms, sometimes we want to know whether there are terms in a text that almost match terms in a specific set of terms.

Imagine a situation where we have managed to extract arbitrary named entities from a text, but they do not match strings in a specified list in an exact or partially exact way. Our next step might be to attempt to further match those entities in a fuzzy way with entities in a specified list.

fuzzyset

The python fuzzyset package will try to match a specified string to similar strings in a list of target strings, returning a single item from a specified target list that best matches the provided term.

For example, if we extract the name Boris Johnstone from a text, we might then try to further match that string, in a fuzzy way, with a list of correctly spelled MP names.

A confidence value expresses the degree of match to terms in the fuzzy match set list.

import fuzzyset

fz = fuzzyset.FuzzySet()
#Create a list of terms we would like to match against in a fuzzy way
for l in ["Diane Abbott", "Boris Johnson"]:
    fz.add(l)

#Now see if our sample term fuzzy matches any of those specified terms
sample_term='Boris Johnstone'
fz.get(sample_term), fz.get('Diana Abbot'), fz.get('Joanna Lumley')
([(0.8666666666666667, 'Boris Johnson')],
 [(0.8333333333333334, 'Diane Abbott')],
 [(0.23076923076923073, 'Diane Abbott')])

fuzzywuzzy

If we want to try to find a fuzzy match for a term within a text, we can use the python fuzzywuzzy library. Once again, we specify a list of target items we want to try to match against.

from fuzzywuzzy import process
from fuzzywuzzy import fuzz
terms=['Houses of Parliament', 'Diane Abbott', 'Boris Johnson']

q= "Diane Abbott, Theresa May and Boris Johnstone were in the Houses of Parliament the other day"
process.extract(q,terms)
[('Houses of Parliament', 90), ('Diane Abbott', 90), ('Boris Johnson', 86)]

By default, we get match confidence levels for each term in the target match set, although we can limit the response to a maximum number of matches:

process.extract(q,terms,scorer=fuzz.partial_ratio, limit=2)
[('Houses of Parliament', 90), ('Boris Johnson', 85)]

A range of fuzzy match scoring algorithms is supported:

  • WRatio – measure of the sequences’ similarity between 0 and 100, using different algorithms
  • QRatio – Quick ratio comparison between two strings
  • UWRatio – a measure of the sequences’ similarity between 0 and 100, using different algorithms. Same as WRatio but preserving unicode
  • UQRatio – Unicode quick ratio
  • ratio
  • partial_ratio – ratio of the most similar substring as a number between 0 and 100
  • token_sort_ratio – a measure of the sequences’ similarity between 0 and 100 but sorting the token before comparing
  • partial_token_set_ratio
  • partial_token_sort_ratio – ratio of the most similar substring as a number between 0 and 100 but sorting the token before comparing

More useful, perhaps, is to return only those items that match above a particular confidence level:

process.extractBests(q,terms,score_cutoff=90)
[('Houses of Parliament', 90), ('Diane Abbott', 90)]

However, one problem with the fuzzywuzzy matcher is that it doesn’t tell us where in the supplied text string the match occurred, or what string in the text was matched.
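One crude workaround (my own sketch, not part of the fuzzywuzzy API) is to slide a window of the same number of words as the target term over the text and score each window with fuzz.ratio, keeping the best scoring span:

from fuzzywuzzy import fuzz #already imported above

def locate_fuzzy_match(text, term):
    """Return (score, word_index, matched_substring) for the best matching window of words."""
    words = text.split()
    n = len(term.split())
    best = (0, None, '')
    for i in range(len(words) - n + 1):
        candidate = ' '.join(words[i:i + n])
        score = fuzz.ratio(term, candidate)
        if score > best[0]:
            best = (score, i, candidate)
    return best

#The best scoring window for 'Boris Johnson' in q should be the 'Boris Johnstone' span
locate_fuzzy_match(q, 'Boris Johnson')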

The fuzzywuzzy package can also be used to try to deduplicate a list of items, returning the longest item in the duplicate list. (It might be more useful if this is optionally the first item in the original list?)

names=['Diane Abbott', 'Boris Johnson','Boris Johnstone','Diana Abbot', 'Boris Johnston','Joanna Lumley']
process.dedupe(names, threshold=80)
['Joanna Lumley', 'Boris Johnstone', 'Diane Abbott']

It might also be useful to see the candidate strings associated with each deduped item, treating the first item in the list as the canonical one:

import hashlib

clusters={}
fuzzed=[]
for t in names:
    #Avoid shadowing the fuzzyset module imported earlier
    fuzzy_matches=process.extractBests(t,names,score_cutoff=85)
    #Generate a key based on the sorted members of the set
    keyvals=sorted(set([x[0] for x in fuzzy_matches]),key=lambda x:names.index(x),reverse=False)
    keytxt=''.join(keyvals)
    #hashlib.md5 expects bytes, so encode the key string first
    key=hashlib.md5(keytxt.encode('utf-8')).hexdigest()
    if len(keyvals)>1 and key not in fuzzed:
        clusters[key]=sorted(set([x for x in fuzzy_matches]),key=lambda x:names.index(x[0]),reverse=False)
        fuzzed.append(key)
for cluster in clusters:
    print(clusters[cluster])
[('Diane Abbott', 100), ('Diana Abbot', 87)]
[('Boris Johnson', 100), ('Boris Johnstone', 93), ('Boris Johnston', 96)]

OpenRefine Clustering

As well as running as a browser-accessed application, OpenRefine also runs as a service that can be accessed from Python using the refine-client.py client library.

In particular, we can use the OpenRefine service to cluster fuzzily matched items within a list of items.

#!pip install git+https://github.com/PaulMakepeace/refine-client-py.git
#NOTE - this requires a python 2 kernel
#Initialise the connection to the server using default or environment variable defined server settings
#REFINE_HOST = os.environ.get('OPENREFINE_HOST', os.environ.get('GOOGLE_REFINE_HOST', '127.0.0.1'))
#REFINE_PORT = os.environ.get('OPENREFINE_PORT', os.environ.get('GOOGLE_REFINE_PORT', '3333'))
from google.refine import refine, facet
server = refine.RefineServer()
orefine = refine.Refine(server)
#Create an example CSV file to load into a test OpenRefine project
project_file = 'simpledemo.csv'
with open(project_file,'w') as f:
    for t in ['Name']+names+['Boris Johnstone']:
        f.write(t+ '\n')
!cat {project_file}
Name
Diane Abbott
Boris Johnson
Boris Johnstone
Diana Abbot
Boris Johnston
Joanna Lumley
Boris Johnstone
p=orefine.new_project(project_file=project_file)
p.columns
[u'Name']

OpenRefine supports a range of clustering functions:

- clusterer_type: binning; function: fingerprint|metaphone3|cologne-phonetic
- clusterer_type: binning; function: ngram-fingerprint; params: {'ngram-size': INT}
- clusterer_type: knn; function: levenshtein|ppm; params: {'radius': FLOAT,'blocking-ngram-size': INT}
clusters=p.compute_clusters('Name',clusterer_type='binning',function='cologne-phonetic')
for cluster in clusters:
    print(cluster)
[{'count': 1, 'value': u'Diana Abbot'}, {'count': 1, 'value': u'Diane Abbott'}]
[{'count': 2, 'value': u'Boris Johnstone'}, {'count': 1, 'value': u'Boris Johnston'}]
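The knn variants should, I think, be callable in the same way by passing the params dict listed above; I haven't double checked how refine-client-py handles the params, so treat the following as a sketch:

#Sketch only: nearest neighbour clustering using Levenshtein distance,
#assuming the params dict is passed straight through to OpenRefine
clusters = p.compute_clusters('Name',
                              clusterer_type='knn',
                              function='levenshtein',
                              params={'radius': 2, 'blocking-ngram-size': 6})
for cluster in clusters:
    print(cluster)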

Topic Models

Topic models are statistical models that attempt to identify the different “topics” that occur across a set of documents.

Several python libraries provide a simple interface for the generation of topic models from text contained in multiple documents.

gensim

#!pip3 install gensim
#https://github.com/sgsinclair/alta/blob/e5bc94f7898b3bcaf872069f164bc6534769925b/ipynb/TopicModelling.ipynb
from gensim import corpora, models

def get_lda_from_lists_of_words(lists_of_words, **kwargs):
    dictionary = corpora.Dictionary(lists_of_words) # this dictionary maps terms to integers
    corpus = [dictionary.doc2bow(text) for text in lists_of_words] # create a bag of words from each document
    tfidf = models.TfidfModel(corpus) # this models the significance of words using term frequency inverse document frequency
    corpus_tfidf = tfidf[corpus]
    kwargs["id2word"] = dictionary # set the dictionary
    return models.LdaModel(corpus_tfidf, **kwargs) # do the LDA topic modelling

def print_top_terms(lda, num_terms=10):
    txt=[]
    num_terms=min([num_terms,lda.num_topics])
    for i in range(0, num_terms):
        terms = [term for term,val in lda.show_topic(i,num_terms)]
        txt.append("\t - top {} terms for topic #{}: {}".format(num_terms,i,' '.join(terms)))
    return '\n'.join(txt)

To start with, let’s create a list of dummy documents and then generate word lists for each document.

docs=['The banks still have a lot to answer for the financial crisis.',
     'This MP and that Member of Parliament were both active in the debate.',
     'The companies that work in finance need to be responsible.',
     'There is a reponsibility incumber on all participants for high quality debate in Parliament.',
     'Corporate finance is a big responsibility.']

#Create lists of words from the text in each document
from nltk.tokenize import word_tokenize
docs = [ word_tokenize(doc.lower()) for doc in docs ]

#Remove stop words from the wordlists
from nltk.corpus import stopwords
docs = [ [word for word in doc if word not in stopwords.words('english') ] for doc in docs ]

Now we can generate the topic models from the list of word lists.

topicsLda = get_lda_from_lists_of_words([s for s in docs if isinstance(s,list)], num_topics=3, passes=20)
print( print_top_terms(topicsLda))
     - top 3 terms for topic #0: parliament debate active
     - top 3 terms for topic #1: responsible work need
     - top 3 terms for topic #2: corporate big responsibility

The model is randomised – if we run it again we are likely to get a different result.

topicsLda = get_lda_from_lists_of_words([s for s in docs if isinstance(s,list)], num_topics=3, passes=20)
print( print_top_terms(topicsLda))
     - top 3 terms for topic #0: finance corporate responsibility
     - top 3 terms for topic #1: participants quality high
     - top 3 terms for topic #2: member mp active
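If a repeatable result is needed, gensim's LdaModel accepts a random_state parameter, which we can pass straight through the helper's kwargs to fix the seed:

#Fix the random seed so that repeated runs give the same topics
topicsLda = get_lda_from_lists_of_words([s for s in docs if isinstance(s, list)],
                                        num_topics=3, passes=20, random_state=42)
print(print_top_terms(topicsLda))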

Coping With Time Periods in Python Pandas – Weekly, Monthly, Quarterly

One of the more powerful, and perhaps underused (by me at least), features of the Python/pandas data analysis package is its ability to work with time series and represent periods of time as well as simple “pointwise” dates and datetimes.

Here’s a link to a first draft Jupyter notebook showing how to cast weekly, monthly and quarterly periods in pandas from NHS time series datasets: Recipes – Representing Time Periods.ipynb
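By way of a taster (a minimal sketch of my own, not the notebook contents themselves), pandas Period and PeriodIndex objects can represent weekly, monthly and quarterly intervals directly:

import pandas as pd

#Quarterly and monthly period labels parse directly to Period objects
pd.Period('2018Q1', freq='Q')   # Period('2018Q1', 'Q-DEC')
pd.Period('2018-06', freq='M')  # Period('2018-06', 'M')

#Cast a column of dates to monthly periods
dates = pd.to_datetime(['2018-01-15', '2018-02-03', '2018-02-27'])
pd.Series(dates).dt.to_period('M')

#Or generate a regular run of weekly periods
pd.period_range('2018-01-01', periods=4, freq='W')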

 

