Fragment – Jupyter Kernels / MyBinder as a Remote Code Execution Sandbox for Moodle

Although I don’t know for sure, I suspect that administrators of computing infrastructure in educational establishments are wary of requests from academics for compute services that allow students to run arbitrary code.

One of the main reasons why an educator would want to support this is that setting up an environment can be hard: if you want a student to focus on writing code that makes use of particular packages, you probably don’t want them engaging in arcane sysadmin practices and spending all their time trying to install those packages in the first place.

For the IT department, the thought of running arbitrary code that could be produced either by novices or deliberately malicious users is likely to raise several well-founded concerns: how do we stop users using the code environment to attack the server or network the code is running on; how do we stop folk from running code on our servers that could be used to attack external sites; and how do we control the resource requirements (storage, compute, network) when mistakes happen and folk try to repeatedly download the internet to our server?

One way of making hosted compute available to students is to execute code within isolated sandboxed environments that you can park in a safe area of the network and monitor closely.

In our Moodle VLE, the Moodle CodeRunner environment is used to allow students to run small fragments of code within just such an environment when completing interactive quiz questions. (I provide a quick review of the Moodle CodeRunner plugin in the post A Quick First Look At Moodle CodeRunner.)

Presumably, someone somewhere has done a security audit and decided that the sandboxed code execution environment is a safe one and signed off on its use.

Another approach, described in this fragment on Jupyter Notebooks and Moodle, is the SageCell filter for Moodle, which allows you to run code against an external (stateless) SageCell server:

<?php
/**
 * SageCell filter for Moodle 3.4+
 *
 *  This filter will replace any Sage code in [sage]...[/sage]
 *  with a Ajax code from http://sagecell.sagemath.org
 *
 * @package    filter_sagecell
 * @copyright  2015-2018 Eugene Modlo, Sergey Semerikov
 * @license    http://www.gnu.org/copyleft/gpl.html GNU GPL v3 or later
 */

defined('MOODLE_INTERNAL') || die();

/**
 * Automatic SageCell embedding filter class.
 *
 * @package    filter_sagecell
 * @copyright  2015-2016 Eugene Modlo, Sergey Semerikov
 * @license    http://www.gnu.org/copyleft/gpl.html GNU GPL v3 or later
 */
class filter_sagecell extends moodle_text_filter {

    /**
     * Check text for Sage code in [sage]...[/sage].
     *
     * @param string $text
     * @param array $options
     * @return string
     */
    public function filter($text, array $options = array()) {

        if (!is_string($text) or empty($text)) {
            // Non string data can not be filtered anyway.
            return $text;
        }

        if (strpos($text, '[sage]') === false) {
            // Performance shortcut - if there is no [sage] tag, nothing can match.
            return $text;
        }

        $newtext = $text; // Fullclone is slow and not needed here.

        $search = '/\[sage](.+?)\[\/sage]/is';
        $newtext = preg_replace_callback($search, 'filter_sagecell_callback', $newtext);

        if (is_null($newtext) or $newtext === $text) {
            // Error or not filtered.
            return $text;
        }

        return $newtext;
    }

}

/**
 * Replace Sage code with embedded SageCell, if possible.
 *
 * @param array $sagecode
 * @return string
 */
function filter_sagecell_callback($sagecode) {

    // SageCell code from [sage]...[/sage].
    $output = $sagecode[1];
    $output = str_ireplace("<p>", "\n", $output);
    $output = str_ireplace("</p>", "\n", $output);
    $output = str_ireplace("<br>", "\n", $output);
    $output = str_ireplace("<br/>", "\n", $output);
    $output = str_ireplace("<br />", "\n", $output);
    $output = str_ireplace("&nbsp;", "\x20", $output);
    $output = str_ireplace("\xc2\xa0", "\x20", $output);
    $output = clean_text($output);
    $output = str_ireplace("&lt;", "<", $output);

    $id = uniqid("");

    $output = "" .
    "" .
        "sagecell.makeSagecell({inputLocation: \"#" . $id . "\"," .
        "evalButtonText: \"Evaluate\"," .
        "autoeval: true," .
        "hide: [\"evalButton\", \"editor\", \"messages\", \"permalink\", \"language\"] }" .
    ");" .
    "" .
    "
<div id="">". $output. "</div>
";

    return $output;
}

This looks to me like the SageCell Moodle filter essentially rewrites a [sage]...[/sage] delimited code block within a Moodle environment as a Javascript-backed SageCell form, and then lets users run the code embedded in the form against the remote SageCell server. This sort of thing could presumably be used to support interactive, executable code activities within a Moodle hosted web page, for example.

As I remarked previously, it’s not hard to imagine doing something similar to provide a [mybinder repository="..."]...[/mybinder] filter that could use a Javascript library such as ThebeLab or Juniper to provide a similar style of interaction backed by a MyBinder launched repository, though minor tweaks may be required around those packages to handle stateless rather than stateful transactions if repeated calls are made to the server.

Going back to the CodeRunner plugin (as described here):

[i]nternally CodeRunner is designed to support multiple sandboxes, implemented as subclasses of the abstract class qtype_coderunner_sandbox – see sandbox.php. Sandboxes are essentially plugins to CodeRunner. Several different ones have been used over the years but the only current ones are the jobe sandbox (file jobesandbox.php) and the ideone sandbox. The latter interfaces to the Sphere On-line judge server but is now more-or-less defunct. Both of those sandboxes run as services. CodeRunner can support multiple sandboxes at the same time and questions can be configured to select a particular sandbox (if desired). By default the first available sandbox that supports the language required by the question is used.

So could we use a MyBinder launched Jupyter server to provide sandboxed code execution?

One advantage of this would be that we could define a Jupyter environment that students could use on their own machines, or that we could provide via a hosted notebook server, and that same environment could then be used for CodeRunner style assessment.

Another advantage would be that if we want to run student created arbitrary code for teaching activities as well as CodeRunner based assessment activities, we’d only need to sign off on one sandboxed code execution environment rather than several.

So what’s required?

It’s years since I had used PHP, but I thought I’d have a go at creating a simple Python client that would let me:

  • start a MyBinder server against a specified Github repo;
  • start a kernel;
  • run a small code sample in the kernel and get a code execution response back.

Cribbing heavily from juniper.js and this rather handy sagecell-client.py, I came up with a hacky recipe that works as a minimal proof of concept here: mybinder_py_client-ipynb.
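For anyone wanting a feel for what’s involved, here’s a rough, untested sketch of the sort of client I mean. It assumes the mybinder.org build endpoint streams JSON status events and that the launched server exposes the standard Jupyter REST and websocket APIs; the function names are my own, and the requests and websocket-client packages are required. Treat it as an outline rather than production code:

#mybinder_client.py - a rough sketch of a minimal MyBinder client
#Assumes: the mybinder.org build endpoint streams JSON status events;
# the launched server exposes the standard Jupyter REST/websocket API.
#Requires the requests and websocket-client packages.
import json
import uuid

import requests
from websocket import create_connection

def launch_binder(repo, ref='master', binder_url='https://mybinder.org'):
    '''Request a Binder build for a GitHub repo; return the server URL and token.'''
    resp = requests.get('{}/build/gh/{}/{}'.format(binder_url, repo, ref), stream=True)
    for line in resp.iter_lines(decode_unicode=True):
        if line and line.startswith('data:'):
            event = json.loads(line.split(':', 1)[1])
            if event.get('phase') == 'ready':
                return event['url'], event['token']

def start_kernel(url, token):
    '''Start a kernel on the launched Jupyter server via its REST API.'''
    headers = {'Authorization': 'token {}'.format(token)}
    kernel = requests.post('{}api/kernels'.format(url), headers=headers).json()
    return kernel['id']

def run_code(url, token, kernel_id, code):
    '''Send an execute_request over the kernel websocket and collect the outputs.'''
    ws_url = url.replace('http', 'ws', 1)
    ws = create_connection('{}api/kernels/{}/channels?token={}'.format(ws_url, kernel_id, token))
    msg_id = uuid.uuid4().hex
    ws.send(json.dumps({
        'header': {'msg_id': msg_id, 'username': '', 'session': uuid.uuid4().hex,
                   'msg_type': 'execute_request', 'version': '5.2'},
        'parent_header': {}, 'metadata': {},
        'content': {'code': code, 'silent': False, 'store_history': True,
                    'user_expressions': {}, 'allow_stdin': False},
        'channel': 'shell'}))
    outputs = []
    while True:
        reply = json.loads(ws.recv())
        if reply.get('parent_header', {}).get('msg_id') != msg_id:
            continue
        if reply['msg_type'] in ('stream', 'execute_result', 'error'):
            outputs.append(reply['content'])
        if reply['msg_type'] == 'status' and reply['content']['execution_state'] == 'idle':
            break
    ws.close()
    return outputs

#Example usage:
#url, token = launch_binder('binder-examples/requirements')
#kernel_id = start_kernel(url, token)
#print(run_code(url, token, kernel_id, 'print(1+1)'))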

I think the recipe is stateful, in that we execute several code blocks one after the other and exploit state set by previous calls to the same kernel. It would probably also make sense to have a call that forces a new kernel for each code execution call, as well as providing a recipe for killing a kernel.
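Killing a kernel should just be a matter of the corresponding REST call; something like the following, reusing the assumptions and names from the sketch above:

def stop_kernel(url, token, kernel_id):
    '''Shut down a kernel via the Jupyter REST API (DELETE /api/kernels/<id>).'''
    requests.delete('{}api/kernels/{}'.format(url, kernel_id),
                    headers={'Authorization': 'token {}'.format(token)})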

The next step in trying to use this approach for a CodeRunner sandbox would presumably be to create a simple PHP based MyBinder client, and then to use that in a CodeRunner sandbox subclass.

But that’s out of scope for me atm…

Please let me know in the comments if you have a go at this… or know of any other Moodle / Jupyter integrations…

Simple Self-Test and Feedback in Jupyter Notebooks — Ordo

Some time ago I came across ordo, “a lightweight feedback tool for Jupyter”; here’s a quick initial review of what we can do with it (Binderised demo)…

Installing and enabling the extension gives you a couple of toolbar buttons:

The tick is “Feedback Mode”, for running cells and evaluating the output; the pencil is “Edit Mode”, for creating/editing feedback messages.

The README.ipynb demo notebook has some feedback cells already set up. For example, the first cell tests a simple sum. In “Feedback mode”, if you get an incorrect answer, you are alerted to the fact with an error message, which can either be the default message or a custom one assigned to that cell.

Clicking the eye reveals the answer; when you get the answer right, you are rewarded with confirmatory feedback, again either as a default message or as a custom message defined for that cell.

In the edit mode, you can click in a code cell and raise some value setting controls for the cell:

If you click the Make Solution button, the current cell output is set as the desired output.

Alternatively, you can explicitly set the desired solution, as well as custom success/failure messages on each cell:

Making a custom solution allows you to specify different sorts of output cell types… I think using Make Solution is probably easier!

Note that if you do opt to explicitly define a solution, any previous solution will not be displayed.

However, you can see the desired output in the corresponding cell metadata field:

The same is true if you add custom success or failure messages:

As before, the cell metadata does reveal what the current value is if the default feedback message has been changed.

For example, if we assign the following success feedback message to a cell:

the cell metadata is updated with the non-default value:

Ordo looks like a really handy tool for baking explicit answers for cell based tests into notebook metadata. As such, it could be good as a quick way of implementing formative feedback into teaching notebooks, as long as the users have the ordo extension installed and enabled in the notebook server they are accessing the notebooks from.

If you can’t explicitly declare the exact answer you’re expecting as the cell output, it isn’t much use though…

PS A couple of other comments…

An ordo annotated notebook degrades gracefully in the sense that if the extension is not installed, no bad things happen; you just don’t get the test run and the feedback displayed.

The ability to close the alert messages is neat. Peeking at the code (ordo-dismissible), it looks like we can just add:

<button class="close" type="button" data-dismiss="alert">×</button>

into the div, and that gives us a button we can use to collapse the alert box.

[I note WordPress is still crap at handling taglike text… WTF do I have to do to get it to display properly?]

The alert-dismissible class attribute does not seem to be required.

This dismissible behaviour could be handy when using alert boxes in, for example, nbgrader test generated feedback, because it could provide messages for markers that they could then collapse… hmm… would dismissing a message definitely delete it from the feedback document? I suppose if we added the alert-dismissible class attribute, we could also filter such divs out in an nbgrader feedback generator processor? (Something like the sketch below, perhaps.)
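To illustrate what I mean, here’s a rough, untested sketch of an nbconvert preprocessor (the class name and details are my own invention, not part of nbgrader) that strips dismissible alert divs out of HTML cell outputs before a feedback document is generated:

#A rough sketch (untested, names my own) of an nbconvert preprocessor
# that strips dismissible alert divs from HTML cell outputs.
import re

from nbconvert.preprocessors import Preprocessor

class StripDismissibleAlerts(Preprocessor):
    '''Remove divs carrying the alert-dismissible class from HTML outputs.'''

    pattern = re.compile(r'<div[^>]*alert-dismissible[^>]*>.*?</div>', re.DOTALL)

    def preprocess_cell(self, cell, resources, index):
        for output in cell.get('outputs', []):
            html = output.get('data', {}).get('text/html')
            if isinstance(html, str):
                output['data']['text/html'] = self.pattern.sub('', html)
        return cell, resources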

Quick Review – Jupyter Multi Outputs Notebook Extension

This post represents a quick review of the Jupyter multi-outputs Jupyter notebook extension.

The extension is one of a series of extensions developed by the Japanese National Institute of Informatics (NII) Literate Computing for Reproducible Infrastructure project.

My feeling is that some of these notebook extensions may also be useful in an educational context for supporting teaching and learning activities within Jupyter notebooks, and I’ll try to post additional reviews of some of the other extensions.

So what does the multi-outputs extension offer?

Running cells highlights a running cell in light blue, a successfully run cell (or one that runs to completion with a warning) in green, and one that fails to run to completion due to an error in pink/red.

If you reload the notebook in the browser, the colour highlighting is lost. Similarly if you close the notebook and then open it again (whether or not the kernel is kept alive or restarted). This nudges you towards always shutting down the kernel when you close a notebook, or always restarting the kernel if you reload a notebook page in the browser, if you want to benefit fully from the semantics associated with the cell colour highlighting.

We can also save the output of a cell into a tab identified by the cell execution number. Once the cell is run, click on the pin item in the left hand margin to save that cell output:

The output is saved into a tab numbered according to the cell execution count number. You can now run the cell again:

and click on the previously saved output tab. You may notice that when you select a previous output tab, a left/right arrow “show differences” icon appears:

Click on that icon to compare the current and previous outputs:

(I find the graphic display a little confusing, but that’s typical of many differs! If you look closely, you may see green (addition) and red (deletion) highlighting.)

The differ display also supports simple search (you need to hit Return to register the search term as such).

The saved output is actually saved as notebook metadata associated with the cell, which means it will persist when the notebook is closed and restarted at a later date.

One of the hacky tools I’ve got in tm351_utils (which really needs some example notebooks…) is a simple differencing display. I’m not sure whether any of the TM351 notebooks that used the differ, which I suggested during the last round of revisions, made it into the finally released notebooks, but it might be worth comparing that approach (diffing across the outputs of two cells) with this one (diffing between two outputs from the same cell, run at different times or with different parameters/state).
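For what it’s worth, the Python standard library gets you most of the way towards a simple text diff between two such outputs; here’s a trivial, made-up example (not the tm351_utils differ itself):

#Trivial illustration of diffing two text outputs with the standard library
import difflib

out_a = 'apples\noranges\npears'
out_b = 'apples\nlemons\npears'

print('\n'.join(difflib.unified_diff(out_a.splitlines(),
                                     out_b.splitlines(),
                                     lineterm='')))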

Config settings appear to be limited to the maximum number of saved / historical tabs per cell:

So, useful? I’ll try to work up some education related examples. (If you have any ideas for some, or have already identified and/or demonstrated some, please let me know via the comments.)

On Not Faffing Around With Jupyter Notebook Docker Container Auth Tokens

Mark this post as deprecated… There already exists an easy way of setting the token when starting one of the Jupyter notebook Docker containers: -e JUPYTER_TOKEN="easy; it's already there". In fact, things are even easier if you export JUPYTER_TOKEN='easy' in the local environment, and then start the container with docker run --rm -d --name democontainer -p 9999:8888 -e JUPYTER_TOKEN jupyter/base-notebook (which is equivalent to -e JUPYTER_TOKEN=$JUPYTER_TOKEN). You can then autolaunch into the notebook with open "http://localhost:9999?token=${JUPYTER_TOKEN}". H/t @minrk for that…

[UPDATE: an exercise in reinventing the wheel… This is why I should really do something else with my life…]

I know they’re there for good reason, but starting the official Jupyter containers requires that you enter a token created when you launch the container, which means you need to check the docker logs…

In terms of usability, this is a bit of a faff. For example, the example URL given in the logs is not necessarily the correct one (it specifies the port the notebook is running on inside the container rather than the exposed port you have mapped it to).

If you start the container with a -d flag, you don’t see the token (something that looks like the token is printed out, but it’s not the token, it’s the container ID that docker prints). However, you can see the log stream containing the token using Kitematic.

If you go directly to the notebook page without the token argument, you’ll need to login with it, or with a default password (which is not set in the official Jupyter Docker images).

To provide continued authenticated access, you also have the opportunity at the bottom of that screen to swap the token for a new password (this is via the c.NotebookApp.allow_password_change setting which by default is set to True):

I think the difference between the default token and a password is that in the config file, if you specify a token via the c.NotebookApp.token argument, you do so in plain text, whereas the c.NotebookApp.password setting takes a salted hash of the password. If you set c.NotebookApp.token='', you can get in without a token. For a full set of config settings, see the Jupyter notebook config file and command line options.
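For reference, the relevant settings in a jupyter_notebook_config.py file look something like this (the values here are just examples):

#Example jupyter_notebook_config.py settings (illustrative values only)
from notebook.auth import passwd

c.NotebookApp.token = 'letmein'             #plain text token
c.NotebookApp.password = passwd('letmein')  #stored as a salted hash
#c.NotebookApp.token = ''                   #no token required at all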

So, can we balance the need for a small amount security without going to the extreme of disabling auth altogether?

Here’s a Dockerfile I’ve just popped together that allows you to build a variant of the official containers with support for tokenless or predefined token access:

#Dockerfile
FROM jupyter/minimal-notebook

#Configure container to support easier access
ARG TOKEN=-1
RUN mkdir -p $HOME/.jupyter/
RUN if [ "$TOKEN" != "-1" ]; then echo "c.NotebookApp.token='$TOKEN'" >> $HOME/.jupyter/jupyter_notebook_config.py; fi

We can then build variations on a theme as follows by running the following build commands in the same directory as the Dockerfile:

# Automatically generated token (default behaviour)
docker build -t psychemedia/quicknotebook .

# Tokenless access (no auth)
docker build -t psychemedia/quicknotebook --build-arg TOKEN='' .

# Specified one time token (set your own plain text one time token)
docker build -t psychemedia/quicknotebook --build-arg TOKEN='letmein' .

And some more handy administrative commands, just for the record:

#Run the container
docker run --rm -d -p 8899:8888 --name quicknotebook psychemedia/quicknotebook
##Or:
docker run --rm -d --expose 8888 --name quicknotebook psychemedia/quicknotebook

#Stop the container
docker kill quicknotebook

#Tidy up after running if you didn't --rm
docker rm quicknotebook

#Push container to Docker hub (must be logged in)
docker push psychemedia/quicknotebook

I’m also starting to wonder whether there’s an easy way of using Docker ENV vars (passed in the docker run command via a -e MYVAR='myval' pattern) to allow containers to be started up with a particular token, not just created with specified tokens at build time? That would take some messing around with the container start command though…

There’s a handy guide to Dockerfile ARG and ENV vars here: Docker ARG vs ENV.

Hmm… looking at the start.sh script that runs as part of the base notebook start CMD, it looks like there’s a /usr/local/bin/start-notebook.d/ directory that can contain files that are executed prior to the notebook server starting…

So we can presumably just hack that to take an environment variable?

So let’s extend the Dockerfile:

ENV TOKEN=$TOKEN
USER root
RUN mkdir -p /usr/local/bin/start-notebook.d/
RUN echo "if [ \"\$TOKEN\" != \"-1\" ]; then echo \"c.NotebookApp.token='\$TOKEN'\" >> $HOME/.jupyter/jupyter_notebook_config.py; fi" >> /usr/local/bin/start-notebook.d/tokeneffort.sh
RUN chmod +x /usr/local/bin/start-notebook.d/tokeneffort.sh
USER $NB_USER

Now we should also be able to set a one time token when we run the container:

docker run -d -p 8899:8888 --name quicknotebook -e TOKEN='letmeout' psychemedia/quicknotebook

Useful? [Not really, completely pointless; passing the token as an environment variable is already supported (which raises the question: how come I’ve kept missing this trick?!) At best, it was a refresher in the use of Dockerfile ARG and ENV vars.]

Running a PostgreSQL Server in a MyBinder Container

The original MyBinder service used to run an optional PostgreSQL DBMS alongside the Jupyter notebook service inside a Binder container (my original review).

But if you want to run a Postgres database in the same MyBinder environment nowadays, you need to add it in yourself.

Here are some recipes with different pros and cons. As @manics comments here, “[m]ost distributions package postgres to be run as a system service, so the user permissions are locked down.”, which means that you can’t run Postgres as an arbitrary user. The best approach is probably the last one, which uses an Anaconda packaged version of Postgres that has a more liberal attitude…

Recipe the First – Hacking Permissions

I picked up this approach from dchud/datamanagement-notebook/, which is based around a Dockerfile. It gets around the problem that the Postgres Linux package requires a particular user (postgres), or an alternative user with root permissions, to start and stop the server.

Use a Dockerfile to install Postgres and create a simple database test user, as well as escalating the default notebook user, jovyan, to sudoers (along with the password redspot). The jovyan user can then start / stop the Postgres server via an appropriate entrypoint script.

USER root

RUN chown -R postgres:postgres /var/run/postgresql
RUN echo "jovyan ALL=(ALL)   ALL" >> /etc/sudoers
RUN echo "jovyan:redspot" | chpasswd

COPY ./entrypoint.sh /
RUN chmod +x /entrypoint.sh

USER $NB_USER
ENTRYPOINT ["/entrypoint.sh"]

The entrypoint.sh script will start the Postgres server and then continue with any other start-up actions required to start the Jupyter notebook server installed by repo2docker/MyBinder by default:

#!/bin/bash
set -e

echo redspot | sudo -S service postgresql start

exec "$@"

Try it on MyBinder from here.

A major issue with this approach is that you may not want jovyan, or another user, to have root privileges.

Recipe The Second – Hacking Fewer Permissions

The second example comes from @manics/@crucifixkiss and is based on manics/omero-server-jupyter.

In this approach, which also uses a Dockerfile, we again escalate the privileges of the jovyan user, although this time in a more controlled way:

USER root

#The trick in this Dockerfile is to change the ownership of /run/postgresql
RUN  apt-get update && \
    apt-get install -qq -y \
        postgresql postgresql-client && apt-get clean && \
    chown jovyan /run/postgresql/

COPY ./entrypoint.sh  /
RUN chmod +x /entrypoint.sh

In this case, the entrypoint.sh script doesn’t require any tampering with sudo:

#!/bin/bash
set -e

PGDATA=${PGDATA:-/home/jovyan/srv/pgsql}

if [ ! -d "$PGDATA" ]; then
  /usr/lib/postgresql/10/bin/initdb -D "$PGDATA" --auth-host=md5 --encoding=UTF8
fi
/usr/lib/postgresql/10/bin/pg_ctl -D "$PGDATA" status || /usr/lib/postgresql/10/bin/pg_ctl -D "$PGDATA" -l "$PGDATA/pg.log" start

psql postgres -c "CREATE USER testuser PASSWORD 'testpass'"
createdb -O testuser testdb

exec "$@"

You can try it on MyBinder from here.

Recipe the Third – An Alternative Distribution

The third approach is again via @manics and uses an Anaconda packaged version of Postgres, installing the postgresql package via an environment.yml file.

A postBuild step initialises everything and pulls in a script to set up a dummy user and database.

#!/bin/bash
set -eux

#Make sure that everything is initialised properly
PGDATA=${PGDATA:-/home/jovyan/srv/pgsql}
if [ ! -d "$PGDATA" ]; then
  initdb -D "$PGDATA" --auth-host=md5 --encoding=UTF8
fi

#Start the database during the build process
# so that we can seed it with users, a dummy seeded db, etc
pg_ctl -D "$PGDATA" -l "$PGDATA/pg.log" start

#Call a script to create a dummy user and seeded dummy db
#Make sure that the script is executable...
chmod +x $HOME/init_db.sh
$HOME/init_db.sh

For example, here’s a simple init_db.sh script:

#!/bin/bash
set -eux

THISDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"

#Demo PostgreSQL Database initialisation
psql postgres -c "CREATE USER testuser PASSWORD 'testpass'"

#The -O flag below sets the user: createdb -O DBUSER DBNAME
createdb -O testuser testdb

psql -d testdb -U testuser -f $THISDIR/seed_db.sql

which in turn pulls in a simple .sql file to seed the dummy database:

-- Demo PostgreSQL Database initialisation

DROP TABLE IF EXISTS quickdemo CASCADE;
CREATE TABLE quickdemo(id INT, name VARCHAR(20), value INT);
INSERT INTO quickdemo VALUES(1,'This',12);
INSERT INTO quickdemo VALUES(2,'That',345);

Picking up on the recipe described in an earlier post (AutoStarting A Headless OpenRefine Server in MyBinder Using Repo2Docker and a start Config File), the database is autostarted using a start file:

#!/bin/bash
set -eux
PGDATA=${PGDATA:-/home/jovyan/srv/pgsql}
pg_ctl -D "$PGDATA" -l "$PGDATA/pg.log" start

exec "$@"

In a Jupyter notebook, we can connect to the database in several ways.

For example, we can connect directly using the psycopg2 package:

import psycopg2

conn = psycopg2.connect("dbname='postgres'")
cur = conn.cursor()
cur.execute("SELECT datname from pg_database")

cur.fetchall()

Alternatively, we can connect using something like ipython-sql magic, with a passwordless connection string that attaches us as the default (jovyan) user using default connection details (default host, port, etc.): postgresql:///postgres

Or we can go to the other extreme, and use a connection string that connects us using the test user credentials, explicit host/port details, and a specified database: postgresql://testuser:testpass@localhost:5432/testdb
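For example, assuming the ipython-sql extension is installed in the Binder environment, a notebook cell along the following lines should work against the seeded demo database:

#Query the seeded demo database using ipython-sql magic
%load_ext sql
%sql postgresql://testuser:testpass@localhost:5432/testdb
%sql SELECT * FROM quickdemo;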

You can try it on MyBinder from here.

First Play With nbgallery

Having hacked together a bulk uploader for nbgallery and uploaded the TM351 notebooks to a test environment, I’m now in a position to start having a play with it.

All public notebooks are searchable, so how does the search fare?

The search box top right gets a little bit lost in the search results listing. It could be handy to at least print out the search string (“Searching for: …”) at the top of the results list, if not making the search box larger and in a more central location. The search results themselves take the form of the name / description/tag of each hit (i.e. the notebook metadata) along with a fragment showing how the search terms appeared in context within the notebook.

Some of my earlier experiments on notebook search here and here also show context.

A range of options are provided for ordering the results. Trending looks like it could be interesting (this is based on recent views, presumably), for example where students are searching notebooks relevant to the current week’s study.

That said, we can also display notebooks by tag, so it’s easy enough to display notebooks associated with a particular week’s study if we tag notebooks by study week:

(One thing I noticed zooming out on the page to grab the above screenshot is that the font size of the notebook titles doesn’t seem to respond to the zoom level; it would probably be worth checking to see if there are other accessibility issues.)

If we click through on a result, we see a list of related notebooks followed by a preview of the notebook. (nbgallery strips out all cell outputs on upload, so no cell outputs are displayed).

To search through the preview, we can use a normal browser in-page search (ctrl/cmd-F).

A range of options are provided to support community activity around a notebook for logged in users, including the ability to “star” a notebook, provide feedback or add a comment:

Logged in users can also click on the notebook tags to edit them.

Via the Further options menu, users can view various notebook metrics, email a notebook, or propose a change request:

The metrics available include number of views, runs, stars and the edit history.

If comments have been provided, the number indicator by the comment flag shows how many comments have been received, although this only appears on the notebook page. There doesn’t appear to be an indicator of how many comments are associated with a notebook on the search results page, nor did I spot a general “recent comments” feed anywhere.

When you post a comment, there is no indication that you have done so and the form remains in place. You need to close it manually. (Hitting “Post Comment” again just pops up a “can’t do that” alert on the grounds that you’re trying to post a duplicate comment.)

The comments themselves look as if they are an ordered (rather than threaded) list. It also looks like any signed in user can edit anybody else’s comment?

Users who aren’t signed in can download a notebook, but not star it, comment on it, modify the tags etc.

When I tried to add feedback, I got an error:

I’m not sure if there are settings I need to tweak to address that?

Logged in users can also run a notebook from nbgallery via an associated notebook server. (I’d prefer it if the Run in Jupyter flash wasn’t displayed if there isn’t a linked notebook server available for the logged in user.) For example, running a notebook server on port 443 on the same host as nbgallery using the nbgallery notebook container:

docker run --rm -p 443:443 -e "NBGALLERY_URL=http://localhost:3000" -e "NBGALLERY_CONFIG_TOKEN=letmein" nbgallery/jupyter-alpine

starts a notebook server with the nbgallery extension pre-installed.

We can view the notebook server homepage on https://localhost:443 and log into it using the token-as-password letmein. Running the container in the way described above also gives permission for the nbgallery server running on http://localhost:3000 to open notebooks via the notebook server.

Within nbgallery itself, a logged in user can associate one or more Jupyter environments via the user menu:

Each environment is given a name and the URL of the associated notebook server (in this case, https://localhost:443):

When a notebook server is associated with a user, notebooks can be opened from nbgallery within the notebook server.

If we create a new notebook in the linked notebook server, we can upload it to nbgallery, adding a title, description and optional tags as in a manual notebook upload step:

If we modify the notebook that is linked to one in the gallery (that is, that has been uploaded to the gallery or launched from the gallery), we can save a change to the gallery or submit a change request:

When uploading a new version, you can add tags but not additional comments such as a commit message:

Viewing the notebook details in nbgallery, we can see a summary of the change history:

We can also click through to a preview of each version of the notebook:

(The revision number doesn’t appear in the change history though, so it can be hard to reconcile a particular version with its appearance in the change history listing.)

A logged in user can make a change request to someone else’s notebooks by uploading a new version of them or by opening the notebook in the linked notebook server and submitting a change request:

When I submitted the change request, I got an error form in response, but it looks like the change request was made, as this listing of Change Requests from the user menu suggests:

An exclamation mark by the user menu also identifies that change requests are pending.

Viewing the change request provides a view over the current version of the notebook and the proposed changes. Notebooks can be viewed alongside each other or the diffs can be viewed:

The thumbs up/down indicators are used to accept or deny a change request, along with a brief comment:

Accepted changed notebooks are used to replace the current version of the notebook, and the change logged in the change history. Denied change requests are recorded as such in the change requests list, with a link to the version of the notebook containing the unsuccessfully proposed changes:

If feedback was provided, a comment icon identifies its presence and pops up the feedback in a tooltip when hovered over.

Health stats for linked and run notebooks are supposed to be available, but I couldn’t get those to work (as far as the health stat reports were concerned, the notebooks were never run no matter how many times I ran them), so maybe I’m missing something there in the setup too? [UPDATE: health stats require the notebook server to be run with instrumentation enabled (see the notebook instrumentation docs); specifically, add -e NBGALLERY_ENABLE_INSTRUMENTATION=1 to the docker command line.]

I’m not sure how well this would work for managing TM351 notebooks compared to our current Github workflow (which I should write up somewhere). The error responses (whether they’re valid or not) for change requests and feedback are confusing, and I’m not sure how the feedback is handled if and when it works. Not being able to easily spot new comments (unless I’m missing something) could be a bit of a pain. That said, the proof would be in the testing-through-use, so I’ll maybe give it a week or two’s trial with some of my own notebook workflows.

In terms of use with students, it could be useful to provide a version of nbgallery with notebooks runnable by students without them having to log in to it. It could also be useful if notebooks could be run ‘inline’ from the notebook preview pages, for example using something like ThebeLab or Voila, particularly if a particular Binderhub repo / config could be specified in metadata somewhere.

Bulk Jupyter Notebook Uploads to nbgallery Using Selenium

I’ve recently started looking at nbgallery [repo], “an enterprise Jupyter Notebook sharing and collaboration platform” written in Ruby. The gallery provides a range of tools, including:

  • a Solr powered notebook search engine;
  • a notebook “health check” (I haven’t tried this yet);
  • integration with Jupyter notebooks, so you can run notebooks (I haven’t tried this yet).

One thing that seems to be lacking is the ability to bulk upload files (for example, contained in a zip file). I haven’t spotted an API either, or a Python wrapper to provide a de facto API. This makes a proper test over lots of notebooks tricky…

UPDATE: it looks like a Python API for nbgallery is on the way… nbgallery/nbgallery-api-python

The notebook upload is a two step process.

The first step requires selection of a notebook, along with an acknowledgement of rights:

The second provides an opportunity to submit a required title and non-null description, along with a (repeated) rights acknowledgement:

The upload process utilises a multi-part form.

To upload a notebook, a user needs to be logged in.

Creating a new user requires an email confirmation step, which means you need to set up email server details in the docker-compose.yml file. I used my OU ones:

EMAIL_USERNAME: $OU_USERNAME
EMAIL_PASSWORD: $OU_PWD
EMAIL_DOMAIN: open.ac.uk
EMAIL_ADDRESS: ${OUCU}@open.ac.uk
EMAIL_DEFAULT_URL_OPTIONS_HOST: localhost:3000
EMAIL_SERVER: smtp.office365.com

My usual approach for automating this sort of thing would be to have a go with MechanicalSoup or mechanize, but on a quick first attempt using both of those, I couldn’t get the scraper to work.

Instead, I took the opportunity to have a play with Selenium With Python, a Python wrapper for the Selenium web testing framework. This provides a set of Python functions for automating the launching of a web browser (Chrome, Safari, Firefox, etc.) and for automating interactions with pages viewed within that browser.

The full script I used can be found here.

The initialisation looks like this:

from selenium import webdriver

#Selenium package includes several utilities
# for waiting until things are ready
#https://selenium-python.readthedocs.io/waits.html
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()

#Allow the driver to poll the DOM for up to 10s when
# trying to find an element
driver.implicitly_wait(10)

#We might also want to explicitly define wait conditions
# on a particular element
wait = WebDriverWait(driver, 10)

driver.get("http://localhost:3000/")

The login function looks something like this:

def nbgallery_login(driver, wait, user, pwd):
    ''' Login to nbgallery.
        Return once the login dialogue has disappeared.
    '''

    driver.find_element_by_id("gearDropdown").click()

    element = driver.find_element_by_id("user_email")
    element.click()

    element.clear()
    element.send_keys(user)

    element = driver.find_element_by_id("user_password")
    element.clear()
    element.send_keys(pwd)
    element.click()

    driver.find_element_by_xpath("//input[@value='Login']").click()

The first form script looks like this:

    #path is full path to file
    if not path.endswith('.ipynb'):
        print('Not a notebook (.ipynb) file? [{}]'.format(path))
        return

    #Part 1

    element = wait.until(EC.element_to_be_clickable((By.ID, 'uploadModalButton')))
    element.click()

    driver.find_element_by_id("uploadFile").send_keys(path);
    driver.find_element_by_xpath('//*[@id="uploadFileForm"]/div[3]/div/div/label/input').click()
    driver.find_element_by_id("uploadFileSubmit").click()

And the script to handle the second part of the form looks like this:

    #Part 2
    element = driver.find_element_by_id("stageTitle")
    element.click()

    #Is there notebook metadata we can search for title?
    if not title:
        title = path.split('/')[-1].replace('.ipynb','')
    element.clear()
    element.send_keys(title)

    element = driver.find_element_by_id("stageDescription")
    element.click()

    #Is there notebook metadata we can search for description?
    #Any other notebook metadata we could make use of here?
    element.clear()
    #Description needs to be not null
    desc= 'No description.' if not desc else desc
    element.send_keys(desc)

    element = driver.find_element_by_id("stageTags-tokenfield")
    element.click()
    #time.sleep(1)

    #Handle various tagging styles
    #Is there notebook metadata we can search for tags?
    tags = '' if not tags else tags
    if isinstance(tags, list):
        tags=','.join(tags)
    tags = tags if tags.endswith(',') else tags+','

    element.clear()
    element.send_keys(tags) #need the final comma to set it?

    if private:
        driver.find_element_by_id("stagePrivate").click()

    driver.find_element_by_xpath('//*[@id="stageForm"]/div[9]/div/div/label/input').click()
    driver.find_element_by_id("stageSubmit").click()

    #https://blog.codeship.com/get-selenium-to-wait-for-page-load/
    #Wait for new page to load
    wait.until(EC.staleness_of(driver.find_element_by_tag_name('html')))

Here’s how it plays out:

There’s still stuff that could be added — error trapping for duplicate notebooks, for example — but I think this is enough to let me upload a complete set of course notebooks and see how useful nbgallery is as a way of presenting notebooks.

If it is, and I get the Jupyter notebook server integration working, then I wonder: would it be useable as a notebook navigator in the TM351 VM? It’d probably need really tight integration with the notebook server so that when notebooks are saved they are also committed to the gallery?