On Not Faffing Around With Jupyter Notebook Docker Container Auth Tokens

Mark this post as deprecated… There already exists an easy way of setting the token when starting one of the Jupyter notebook Docker containers: -e JUPYTER_TOKEN="easy; it's already there". In fact, things are even easier if you export JUPYTER_TOKEN='easy' in the local environment, and then start the container with docker run --rm -d --name democontainer -p 9999:8888 -e JUPYTER_TOKEN jupyter/base-notebook (which is equivalent to -e JUPYTER_TOKEN=$JUPYTER_TOKEN). You can then autolaunch into the notebook with open "http://localhost:9999?token=${JUPYTER_TOKEN}". H/t @minrk for that…

[UPDATE: an exercise in reinventing the wheel… This is why I should really do something else with my life…]

I know they’re there for good reason, but starting the official Jupyter containers requires you to enter a token that is generated when the container is launched, which means you need to check the docker logs to find it…

In terms of usability, this is a bit of a faff. For example, the URL printed in the logs is not necessarily the correct one (it specifies the port the notebook is running on inside the container rather than the exposed port you have mapped it to).

If you start the container with a -d flag, you don’t see the token (something that looks like the token is printed out, but that’s the container ID docker creates, not the token…). However, you can see the log stream containing the token using Kitematic.

If you go directly to the notebook page without the token argument, you’ll need to log in with it, or with a default password (which is not set in the official Jupyter Docker images).

To provide continued authenticated access, you also have the opportunity at the bottom of that screen to swap the token for a new password (this is via the c.NotebookApp.allow_password_change setting which by default is set to True):

I think the difference between the default token and a password is that in the config file, if you specify a token via the c.NotebookApp.token argument, you do so in plain text, whereas the c.NotebookApp.password setting takes a hashed value rather than the plain text password. If you set c.NotebookApp.token='', you can get in without a token. For a full set of config settings, see the Jupyter notebook config file and command line options.
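
For example, here’s one way of generating a password hash and adding the corresponding settings to a config file (a minimal sketch; the example password is a placeholder, and passwd() is the helper shipped with the classic notebook package):

#Generate a salted password hash suitable for c.NotebookApp.password
from notebook.auth import passwd
import os.path

hashed = passwd('letmein')  #returns something of the form 'sha1:<salt>:<hash>'

#Append the settings to the notebook config file
#(assumes ~/.jupyter/ already exists, e.g. via jupyter notebook --generate-config)
config_path = os.path.expanduser('~/.jupyter/jupyter_notebook_config.py')
with open(config_path, 'a') as f:
    f.write("\nc.NotebookApp.token = ''\n")  #disable token auth...
    f.write("c.NotebookApp.password = '{}'\n".format(hashed))  #...and require the password instead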

So, can we balance the need for a small amount of security without going to the extreme of disabling auth altogether?

Here’s a Dockerfile I’ve just popped together that allows you to build a variant of the official containers with support for tokenless or predefined token access:

#Dockerfile
FROM jupyter/minimal-notebook

#Configure container to support easier access
ARG TOKEN=-1
RUN mkdir -p $HOME/.jupyter/
#Only write a token setting if one has been supplied (note the spaces and quotes in the test)
RUN if [ "$TOKEN" != "-1" ]; then echo "c.NotebookApp.token='$TOKEN'" >> $HOME/.jupyter/jupyter_notebook_config.py; fi

We can then build variations on a theme by running the following build commands in the same directory as the Dockerfile:

# Automatically generated token (default behaviour)
docker build -t psychemedia/quicknotebook .

# Tokenless access (no auth)
docker build -t psychemedia/quicknotebook --build-arg TOKEN='' .

# Specified one time token (set your own plain text one time token)
docker build -t psychemedia/quicknotebook --build-arg TOKEN='letmein' .

And some more handy administrative commands, just for the record:

#Run the container
docker run --rm -d -p 8899:8888 --name quicknotebook psychemedia/quicknotebook
##Or:
docker run --rm -d --expose 8888 --name quicknotebook psychemedia/quicknotebook

#Stop the container
docker kill quicknotebook

#Tidy up after running if you didn't --rm
docker rm quicknotebook

#Push container to Docker hub (must be logged in)
docker push psychemedia/quicknotebook

I’m also starting to wonder whether there’s an easy way of using Docker ENV vars (passed in the docker run command via a -e MYVAR='myval' pattern) to allow containers to be started up with a particular token, not just created with specified tokens at build time? That would take some messing around with the container start command though…

There’s a handy guide to Dockerfile ARG and ENV vars here: Docker ARG vs ENV.

Hmm… looking at the start.sh script that runs as part of the base notebook start CMD, it looks like there’s a /usr/local/bin/start-notebook.d/ directory that can contain files that are executed prior to the notebook server starting…

So we can presumably just hack that to take an environment variable?

So let’s extend the Dockerfile:

ENV TOKEN=$TOKEN
USER root
RUN mkdir -p /usr/local/bin/start-notebook.d/
#Write a start-up hook that applies the TOKEN environment variable when the container starts
RUN echo "if [ \"\$TOKEN\" != \"-1\" ]; then echo \"c.NotebookApp.token='\$TOKEN'\" >> \$HOME/.jupyter/jupyter_notebook_config.py; fi" >> /usr/local/bin/start-notebook.d/tokeneffort.sh
RUN chmod +x /usr/local/bin/start-notebook.d/tokeneffort.sh
USER $NB_USER

Now we should also be able to set a one time token when we run the container:

docker run -d -p 8899:8888 --name quicknotebook -e TOKEN='letmeout' psychemedia/quicknotebook

Useful? [Not really, completely pointless; passing the token in as an environment variable is already supported (which raises the question: how come I’ve kept missing this trick?!) At best, it was a refresher in the use of Dockerfile ARG and ENV vars.]

Running a PostgreSQL Server in a MyBinder Container

The original MyBinder service used to run an optional PostgreSQL DBMS alongside the Jupyter notebook service inside a Binder container (my original review).

But if you want to run a Postgres database in the same MyBinder environment nowadays, you need to add it in yourself.

Here are some recipes with different pros and cons. As @manics comments here, “[m]ost distributions package postgres to be run as a system service, so the user permissions are locked down.”, which means that you can’t run Postgres as an arbitrary user. The best approach is probably the last one, which uses an Anaconda packaged version of Postgres that has a more liberal attitude…

Recipe the First – Hacking Permissions

I picked up this approach from dchud/datamanagement-notebook/, which is based around a Dockerfile. It gets around the problem that the Postgres Linux package requires a particular user (postgres), or an alternative user with root permissions, to start and stop the server.

Use a Dockerfile to install postgres and create a simple database test user, as well as escalating the default notebook user, jovyan, to sudoers (along with the password redspot). The jovyan user can then start / stop the Postgres server via an appropriate entrypoint script.

USER root

RUN chown -R postgres:postgres /var/run/postgresql
RUN echo "jovyan ALL=(ALL)   ALL" >> /etc/sudoers
RUN echo "jovyan:redspot" | chpasswd

COPY ./entrypoint.sh /
RUN chmod +x /entrypoint.sh

USER $NB_USER
ENTRYPOINT ["/entrypoint.sh"]

The entrypoint.sh script will start the Postgres server and then continue with any other start-up actions required to start the Jupyter notebook server installed by repo2docker/MyBinder by default:

#!/bin/bash
set -e

echo redspot | sudo -S service postgresql start

exec "$@"

Try it on MyBinder from here.

A major issue with this approach is that you may not want jovyan, or another user, to have root privileges.

Recipe The Second – Hacking Fewer Permissions

The second example comes from @manics/@crucifixkiss and is based on manics/omero-server-jupyter.

In this approach, which also uses a Dockerfile, we again escalate the privileges of the jovyan user, although this time in a more controlled way:

USER root

#The trick in this Dockerfile is to change the ownership of /run/postgresql
RUN  apt-get update && \
    apt-get install -qq -y \
        postgresql postgresql-client && apt-get clean && \
    chown jovyan /run/postgresql/

COPY ./entrypoint.sh  /
RUN chmod +x /entrypoint.sh

In this case, the entrypoint.sh script doesn’t require any tampering with sudo:

#!/bin/bash
set -e

PGDATA=${PGDATA:-/home/jovyan/srv/pgsql}

if [ ! -d "$PGDATA" ]; then
  /usr/lib/postgresql/10/bin/initdb -D "$PGDATA" --auth-host=md5 --encoding=UTF8
fi
/usr/lib/postgresql/10/bin/pg_ctl -D "$PGDATA" status || /usr/lib/postgresql/10/bin/pg_ctl -D "$PGDATA" -l "$PGDATA/pg.log" start

psql postgres -c "CREATE USER testuser PASSWORD 'testpass'"
createdb -O testuser testdb

exec "$@"

You can try it on MyBinder from here.

Recipe the Third – An Alternative Distribution

The third approach is again via @manics and uses an Anaconda packaged version of Postgres, installing the postgresql package via an environment.yml file.

A postBuild step initialises everything and pulls in a script to set up a dummy user and database.

#!/bin/bash
set -eux

#Make sure that everything is initialised properly
PGDATA=${PGDATA:-/home/jovyan/srv/pgsql}
if [ ! -d "$PGDATA" ]; then
  initdb -D "$PGDATA" --auth-host=md5 --encoding=UTF8
fi

#Start the database during the build process
# so that we can seed it with users, a dummy seeded db, etc
pg_ctl -D "$PGDATA" -l "$PGDATA/pg.log" start

#Call a script to create a dummy user and seeded dummy db
#Make sure that the script is executable...
chmod +x $HOME/init_db.sh
$HOME/init_db.sh

For example, here’s a simple init_db.sh script:

#!/bin/bash
set -eux

THISDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"

#Demo PostgreSQL Database initialisation
psql postgres -c "CREATE USER testuser PASSWORD 'testpass'"

#The -O flag below sets the user: createdb -O DBUSER DBNAME
createdb -O testuser testdb

psql -d testdb -U testuser -f $THISDIR/seed_db.sql

which in turn pulls in a simple .sql file to seed the dummy database:

-- Demo PostgreSQL Database initialisation

DROP TABLE IF EXISTS quickdemo CASCADE;
CREATE TABLE quickdemo(id INT, name VARCHAR(20), value INT);
INSERT INTO quickdemo VALUES(1,'This',12);
INSERT INTO quickdemo VALUES(2,'That',345);

Picking up on the recipe described in an earlier post (AutoStarting A Headless OpenRefine Server in MyBinder Using Repo2Docker and a start Config File), the database is autostarted using a start file:

#!/bin/bash
set -eux
PGDATA=${PGDATA:-/home/jovyan/srv/pgsql}
pg_ctl -D "$PGDATA" -l "$PGDATA/pg.log" start

exec "$@"

In a Jupyter notebook, we can connect to the database in several ways.

For example, we can connect directly using the psycopg2 package:

import psycopg2

conn = psycopg2.connect("dbname='postgres'")
cur = conn.cursor()
cur.execute("SELECT datname from pg_database")

cur.fetchall()
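
We can also connect over TCP using the test user credentials created by init_db.sh and query the seeded table (a quick sketch; host connections use the md5 password auth set at initdb time):

#Connect as the test user and query the seeded demo table
conn = psycopg2.connect("dbname='testdb' user='testuser' password='testpass' host='localhost'")
cur = conn.cursor()
cur.execute("SELECT * FROM quickdemo")
print(cur.fetchall())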

Alternatively, we can connect using something like ipython-sql magic, with a passwordless connection string that connects us as the default (jovyan) user using the default connection details (default port etc.): postgresql:///postgres

Or we can go to the other extreme, and use a connection string that connects us using the test user credentials, explicit host/port details, and a specified database: postgresql://testuser:testpass@localhost:5432/testdb
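
For example, in a notebook code cell (a quick sketch, assuming the ipython-sql package has been installed into the Binder environment):

#Load the ipython-sql magic and connect using the test user credentials
%load_ext sql
%sql postgresql://testuser:testpass@localhost:5432/testdb

#Query the seeded demo table
%sql SELECT * FROM quickdemo;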

You can try it on MyBinder from here.

First Play With nbgallery

Having hacked together a bulk uploader for nbgallery and uploaded the TM351 notebooks to a test environment, I’m now in a position to start having a play with it.

All public notebooks are searchable, so how does the search fare?

The search box top right gets a little bit lost in the search results listing. It could be handy to at least print out the search string (“Searching for: …”) at the top of the results list, if not making the search box larger and in a more central location. The search results themselves take the form of the name / description/tag of each hit (i.e. the notebook metadata) along with a fragment showing how the search terms appeared in context within the notebook.

Some of my earlier experiments on notebook search here and here also show context.

A range of options are provided for ordering the results. Trending looks like it could be interesting (this is based on recent views, presumably), for example where students are searching notebooks relevant to the current week’s study.

That said, we can also display notebooks by tag, so it’s easy enough to display notebooks associated with a particular week’s study if we tag notebooks by study week:

(One thing I noticed zooming out on the page to grab the above screenshot is that the font size of the notebook titles doesn’t seem to respond to the zoom level; it would probably be worth checking to see if there are other accessibility issues.)

If we click through on a result, we see a list of related notebooks followed by a preview of the notebook. (nbgallery strips out all cell outputs on upload, so no cell outputs are displayed).

To search through the preview, we can use a normal browser in-page search (ctrl/cmd-F).

A range of options are provided to support community activity around a notebook for logged in users, including the ability to “star” a notebook, provide feedback or add a comment:

Logged in users can also click on the notebook tags to edit them.

Via the Further options menu, users can view various notebook metrics, email a notebook, or propose a change request:

The metrics available include number of views, runs, stars and the edit history.

If comments have been provided, the number indicator by the comment flag shows how many comments have been received, although this only appears on the notebook page. There doesn’t appear to be an indicator of how many comments are associated with a notebook on the search results page, nor did I spot a general “recent comments” feed anywhere.

When you post a comment, there is no indication that you have done so and the form remains in place. You need to close it manually. (Hitting “Post Comment” again just pops up a “can’t do that” alert on the grounds that you’re trying to post a duplicate comment.)

The comments themselves look as if they are an ordered (rather than threaded) list. It also looks like any signed-in user can edit anybody else’s comment?

Users who aren’t signed in can download a notebook, but not star it, comment on it, modify the tags etc.

When I tried to add feedback, I got an error:

I’m not sure if there are settings I need to tweak to address that?

Logged in users can also run a notebook from nbgallery via an associated notebook server. (I’d prefer it if the Run in Jupyter flash wasn’t displayed if there isn’t a linked notebook server available for the logged in user.) For example, running a notebook server on  port 443 on the same host as nbgallery using the nbgallery notebook container:

docker run --rm -p 443:443 -e "NBGALLERY_URL=http://localhost:3000" -e "NBGALLERY_CONFIG_TOKEN=letmein" nbgallery/jupyter-alpine

starts a notebook server with the nbgallery extension pre-installed.

We can view the notebook server homepage on https://localhost:443 and log into it using the token-as-password letmein. Running the container in the way described above also gives permission for the nbgallery server running on http://localhost:3000 to open notebooks via the notebook server.

Within nbgallery itself, a logged in user can associate one or more Jupyter environments via the user menu:

Each environment is given a name and the URL of the associated notebook server (in this case, https://localhost:443):

When a notebook server is associated with a user, notebooks can be opened from nbgallery within the notebook server.

If we create a new notebook in the linked notebook server, we can upload it to nbgallery, adding a title, description and optional tags as in a manual notebook upload step:

If we modify the notebook that is linked to one in the gallery (that is, that has been uploaded to the gallery or launched from the gallery), we can save a change to the gallery or submit a change request:

When uploading a new version, you can add tags but not additional comments such as a commit message:

Viewing the notebook details in nbgallery, we can see a summary of the change history:

We can also click through to a preview of each version of the notebook:

(The revision number doesn’t appear in the change history though, so it can be hard to reconcile a particular version with its appearance in the change history listing.)

A logged in user can make a change request to someone else’s notebooks by uploading a new version of them or by opening the notebook in the linked notebook server and submitting a change request:

When I submitted the change request, I got an error form in response, but it looks like the change request was made, as this listing of Change Requests from the user menu suggests:

An exclamation mark by the user menu also identifies that change requests are pending.

Viewing the change request provides a view over the current version of the notebook and the proposed changes. Notebooks can be viewed alongside each other or the diffs can be viewed:

The thumbs up/down indicators are used to accept or deny a change request, along with a brief comment:

Accepted changed notebooks are used to replace the current version of the notebook, and the change logged in the change history. Denied change requests are recorded as such in the change requests list, with a link to the version of the notebook containing the unsuccessfully proposed changes:

If feedback was provided, a comment icon identifies its presence and pops up the feedback in a tooltip when hovered over.

Health stats for linked and run notebooks are supposed to be available, but I couldn’t get those to work (as far as the health stat reports were concerned, the notebooks were never run no matter how many times I ran them), so maybe I’m missing something there in the setup too? [UPDATE: health stats need a flag to be set (see the notebook instrumentation docs); specifically, add -e NBGALLERY_ENABLE_INSTRUMENTATION=1 to the docker command line.]

I’m not sure how well this would work for managing TM351 notebooks compared to our current Github workflow (which I should write up somewhere). The error responses (whether they’re valid or not) for change requests and feedback are confusing, and I’m not sure how the feedback is handled if and when it works. Not being able to spot new comments easily (unless I’m missing something) could be a bit of a pain. That said, the proof would be in the testing-through-use, so I’ll maybe give it a week or two’s trial with some of my own notebook workflows.

In terms of use with students, it could be useful to provide a version of nbgallery with notebooks runnable by students without them having to log in to it. It could also be useful if notebooks could be run ‘inline’ from the notebook preview pages, for example using something like ThebeLab or Voila, particularly if a particular Binderhub repo / config could be specified in metadata somewhere.

Bulk Jupyter Notebook Uploads to nbgallery Using Selenium

I’ve recently started looking at nbgallery [repo], “an enterprise Jupyter Notebook sharing and collaboration platform” written in Ruby. The gallery provides a range of tools, including:

  • a Solr powered notebook search engine;
  • a notebook “health check” (I haven’t tried this yet);
  • integration with Jupyter notebooks, so you can run notebooks (I haven’t tried this yet).

One thing that seems to be lacking is the ability to bulk upload files (for example, contained in a zip file). I haven’t spotted an API either, or a Python wrapper to provide a de facto API. This makes a proper test over lots of notebooks tricky…

UPDATE: it looks like a Python API for nbgallery is on the way… nbgallery/nbgallery-api-python

The notebook upload is a two step process.

The first step requires selection of a notebook, and a required acknowledgement of rights:

The second provides an opportunity to submit a required title and non-null description and a (repeated) rights acknowledgement:

The upload process utilises a multi-part form.

To upload a notebook, a user needs to be logged in.

Creating a new user requires an email confirmation step, which means you need to set up email server details in the docker-compose.yml file. I used my OU ones:

EMAIL_USERNAME: $OU_USERNAME
EMAIL_PASSWORD: $OU_PWD
EMAIL_DOMAIN: open.ac.uk
EMAIL_ADDRESS: ${OUCU}@open.ac.uk
EMAIL_DEFAULT_URL_OPTIONS_HOST: localhost:3000
EMAIL_SERVER: smtp.office365.com

My usual approach for automating this sort of thing would be to have a go with MechanicalSoup or mechanize, but on a quick first attempt using both of those, I couldn’t get the scraper to work.

Instead, I took the opportunity to have a play with Selenium With Python, a Python wrapper for the Selenium web testing framework. This provides a set of Python functions for launching a web browser (Chrome, Safari, Firefox, etc.) and automating interactions, such as clicking buttons and filling in forms, with the pages viewed in that browser.

The full script I used can be found here.

The initialisation looks like this:

from selenium import webdriver

#The Selenium package includes several utilities
# for waiting until things are ready
#https://selenium-python.readthedocs.io/waits.html
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()

#Allow the driver to poll the DOM for up to 10s when
# trying to find an element
driver.implicitly_wait(10)

#We might also want to explicitly define wait conditions
# on a particular element
wait = WebDriverWait(driver, 10)

driver.get("http://localhost:3000/")

The login function looks something like this:

def nbgallery_login(driver, wait, user, pwd):
    ''' Login to nbgallery.
        Return once the login dialogue has disappeared.
    '''

    driver.find_element_by_id("gearDropdown").click()

    element = driver.find_element_by_id("user_email")
    element.click()

    element.clear()
    element.send_keys(user)

    element = driver.find_element_by_id("user_password")
    element.clear()
    element.send_keys(pwd)
    element.click()

    driver.find_element_by_xpath("//input[@value='Login']").click()

The first form script looks like this:

    #path is full path to file
    if not path.endswith('.ipynb'):
        print('Not a notebook (.ipynb) file? [{}]'.format(path))
        return

    #Part 1

    element = wait.until(EC.element_to_be_clickable((By.ID, 'uploadModalButton')))
    element.click()

    driver.find_element_by_id("uploadFile").send_keys(path);
    driver.find_element_by_xpath('//*[@id="uploadFileForm"]/div[3]/div/div/label/input').click()
    driver.find_element_by_id("uploadFileSubmit").click()

And the script to handle the second part of the form looks like this:

    #Part 2
    element = driver.find_element_by_id("stageTitle")
    element.click()

    #Is there notebook metadata we can search for title?
    if not title:
        title = path.split('/')[-1].replace('.ipynb','')
    element.clear()
    element.send_keys(title)

    element = driver.find_element_by_id("stageDescription")
    element.click()

    #Is there notebook metadata we can search for description?
    #Any other notebook metadata we could make use of here?
    element.clear()
    #Description needs to be not null
    desc= 'No description.' if not desc else desc
    element.send_keys(desc)

    element = driver.find_element_by_id("stageTags-tokenfield")
    element.click()
    #time.sleep(1)

    #Handle various tagging styles
    #Is there notebook metadata we can search for tags?
    tags = '' if not tags else tags
    if isinstance(tags, list):
        tags=','.join(tags)
    tags = tags if tags.endswith(',') else tags+','

    element.clear()
    element.send_keys(tags) #need the final comma to set it?

    if private:
        driver.find_element_by_id("stagePrivate").click()

    driver.find_element_by_xpath('//*[@id="stageForm"]/div[9]/div/div/label/input').click()
    driver.find_element_by_id("stageSubmit").click()

    #https://blog.codeship.com/get-selenium-to-wait-for-page-load/
    #Wait for new page to load
    wait.until(EC.staleness_of(driver.find_element_by_tag_name('html')))
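
Putting the pieces together, a bulk upload loop might then look something like this (a sketch rather than the actual script: upload_part1() and upload_part2() are hypothetical stand-ins for the two form-handling fragments above, wrapped up as functions, and the credentials and path are placeholders):

#Hypothetical driver loop for bulk uploading a folder of notebooks
from glob import glob

nbgallery_login(driver, wait, 'me@example.com', 'mypassword')  #placeholder credentials

for path in sorted(glob('/path/to/notebooks/*.ipynb')):  #placeholder path
    upload_part1(driver, wait, path)  #stand-in for the first form fragment
    upload_part2(driver, path, title=None, desc=None, tags=['tm351'], private=False)  #stand-in for the second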

Here’s how it plays out:

There’s still stuff that could be added — error trapping for duplicate notebooks, for example — but I think this is enough to let me upload a complete set of course notebooks and see how useful nbgallery is as a way of presenting notebooks.

If it is, and I get the Jupyter notebook server integration working, then I wonder: would it be useable as a notebook navigator in the TM351 VM? It’d probably need really tight integration with the notebook server so that when notebooks are saved they are also committed to the gallery?

Jupyter is Not Just Notebooks

Last week, I filled an hour in a department seminar showing ways in which we could use Jupyter notebooks to support the creation and use of interactive educational materials.

I’ve no idea if it converted anyone to the cause.

I could have done any number of other talks — about the architecture of the Jupyter ecosystem more widely (at least, insofar as I understand it), or the way in which Jupyter makes sense for reproducible research and how it fits into a containerised / virtualised way of working.

Because Jupyter is not just about notebooks.

It’s also about string and glue.

Here’s something I suddenly grokked the other day whilst chatting to somebody about different ways of accessing applications that have a graphical UI… (on a desktop, on a desktop in a VM, via X11 (“what’s that?” they asked… sigh…), via a browser if it has an HTML UI, via novnc in a browser window if it doesn’t (albeit w/ borked audio support); note to self – try out this novnc Jupyter extension.): if you wrap an application that has a command line interface using metakernel, you can access it in a notebook, or JupyterLab.

Obvious, right? But that means I can also access it via a web page using something like ThebeLab (or Juniper, or nbinteract), run via a container launched using Binderhub.

This is all tied up with a couple of the Big Ideas that underlie Jupyter: firstly, that it supports the read/write web. Secondly, that it supports remote code execution (and as such enables the read/write/execute web).

So for example, one of the many metakernel based kernels is the gnuplot_kernel that lets you run Gnuplot commands from a notebook code cell and display the generated figure in a notebook. Here’s a forked version with the repo tweaked so it runs on MyBinder.

Using a gnuplot_kernel enabled Binder repo, we can now run Gnuplot commands via a web browser using the ThebeLab Javascript package, for example, and display the result in the same web page. The container on the back end is fired up in response to the first command issued from the page, which may take up to a minute or two, and it will be used for future commands issued from the page in the same session.

Here’s what it looks like:

(The Gnuplot code is ripped from an example in the Gnuplot docs / gallery.)

The code seems to be repeated in the output, but I guess a tweak to the ThebeLab settings, or code, may fix that. Or maybe the kernel needs a tweak. But the proof of concept is there…

Here’s the code for the web page (image file, sorry… WordPress.com editor’n’sourcecode support sucks and I get fed up faffing around with tag brackets each time I re-edit the page):

That source code image does make a second point, though… Look closely, and compare the URLs in the two images above: I can edit an HTML file via the Jupyter notebook text file editor, and also render the page as a served HTML file.

So that’s a couple more things for my colleagues to say “ah, but it won’t work for my course because…”

Bring it on…

PS the code as a gist:

PPS Interested in keeping up to date with Jupyter news? Sign up to the Tracking Jupyter weekly newsletter.

Fragment – Jupyter For Edu

With more and more core components, as well as user contributions, being added to the Jupyter framework, I’m starting to lose track of what’s possible. One of the things that might be useful in the OU, and Institute of Coding, context is to explore the various architectural patterns that can be constructed in a Jupyter mediated environment and that are particularly useful for education.

In advance of getting a Github repo / wiki together to start that, here are a few fragments from my feeds, several of which have appeared in just the last couple of days:

Jupyter Enterprise Gateway Now a Top Level Jupyter Project

Via the Jupyter blog, I see the Jupyter Enterprise Gateway is now a top-level Jupyter project.

The Jupyter Enterprise Gateway “enables Jupyter Notebook to launch remote kernels in a distributed cluster“, which provides a handy separation between a notebook server (or Jupyterhub multi-user notebook server) and the kernel that a notebook runs against. For example, Jupyter Enterprise Gateway can be used to create kernels in a scalable way using Kubernetes, or (I’m guessing…?) to do things like launch remote kernels running on a GPU cluster. From the docs it looks like Jupyter Enterprise Gateway should work in a Jupyterhub context, although I can’t offhand find a simple howto / recipe for how to do that. (Presumably, Jupyterhub creates and launches user specific notebook server containers, and these then create and connect to arbitrary kernel-running back ends via the Jupyter Enterprise Gateway? Here’s a related issue I found.)

Running Notebook Cells One at a Time in a Terminal

The ever productive Doug Blank has a recipe for stepping through notebook cells in a terminal [code: nbplayer]. The player launches an IPython terminal that displays the first cell in the notebook and lets you step through the cells (executing or skipping each one) one at a time. You can also run your own commands in between stepping through the notebook cells.

I can imagine using this to create a fixed set of steps for an activity that I want a student to work through, whilst giving them “free time” to explore the state of the current execution environment, for example, or try out particular “given” functions with different parameters. This approach also provides a workaround for using notebook authored exercises in the terminal environment, which I know some colleagues favour over the notebook environment.

On my to do list is to recast some of the activities from the new TM112 course to see how they feel using this execution model, and then compare that to the original activity and the activity run using the same notebook in a notebook environment.

Adding Multiple Student Users to a Jupyterhub Environment

Also via Doug Blank, a recipe for adding multiple users to a Jupyterhub environment using a form that allows you to simply add a list of user names: a more flexible way of adding accounts to Jupyterhub. User account details and random passwords are created automatically and then emailed to students.

To allow users to change passwords, e.g. on first run, I think the NotebookApp.allow_password_change=True notebook server parameter (Jupyter notebook – Config file and command line options) is the setting that allows that?
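
For reference, the corresponding line in a jupyter_notebook_config.py file would be something like the following (True is the default, so this just makes the behaviour explicit):

#jupyter_notebook_config.py
#Allow logged in users to change their password from the notebook UI
c.NotebookApp.allow_password_change = True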

The repo also shows a way of bundling nbviewer to allow users to “publish” HTML versions of their notebooks.

Doug also points to yuvipanda/jupyterhub-firstuseauthenticator, a first use authenticator for Jupyterhub that allows new users to create an account and then set a password on it. This could be really handy for workshops, where you want to allow users to self-serve an environment that persists over a couple of workshop sessions, for example. (One thing we still need to do in the OU is get a Jupyterhub server up and running with persistent user storage; for TM112, we ran a temporary notebook server, which meant students couldn’t save and return to notebooks on the server – they’d have to download notebooks and then re-upload them into a new session if they wanted to return to working on a notebook they had modified. That said, the activity was designed as a “disposable” activity…)

Zip All Notebooks

This handy extension, nbzip, provides a button to zip and download a Jupyter notebook server folder. If you’re working on a temporary notebook server, this provides an easy way of grabbing all the notebooks in one go. What might be even nicer would be to select a sub-folder, or selected set of files, using checkbox selectors? I’m not sure if there’s a complementary tool that will let you upload a zipped archive and unpack it in one go?
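
In the meantime, one workaround for the unpacking half is to upload the zip file via the notebook file browser and then unpack it from a notebook code cell (a minimal sketch using the Python standard library; the archive name is a placeholder):

#Unpack an uploaded zip archive into the current notebook server directory
import zipfile

with zipfile.ZipFile('notebooks.zip') as zf:  #placeholder filename
    zf.extractall('.')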

Fragment – Running Multiple Services, such as Jupyter Notebooks and a Postgres Database, in a Single Docker Container

Over the last couple of days, I’ve been fettling the build scripts for the TM351 VM, which typically use vagrant to build a VirtualBox VM from a set of shell scripts, so that they can also be used to build a single Docker container that runs all the TM351 services, specifically Jupyter notebooks, OpenRefine, PostgreSQL and MongoDB.

Docker containers are typically constructed to run a single service, with compositions of containers wired together using Docker Compose to create applications that deliver, or rely on, more than one running service. For example, in a previous post (Setting up a Containerised Desktop API server (MySQL + Apache / PHP 5) for the ergast Motor Racing Data API) I showed how to set up a couple of containers to work together, one running a MySQL database server, the other an http service that provided an API to the database.

So how to run multiple services in the same container? Docs on the Docker website suggest using supervisord to run multiple services in a single container, so here’s a fragment on how I’ve done that from my TM351 build.

To begin with, I’ve built the container up as a tiered set of images, in a similar way to how the stack of opinionated Jupyter notebook Docker containers is constructed:

#Define a stub to identify the images in this image stack
IMAGESTUB=psychemedia/tm361testm

# minimal
## Define a minimal container, eg a basic Linux container
## using whatever flavour of Linux we prefer
docker build --rm -t ${IMAGESTUB}-minimal-test ./minimal

# base
## The base container installs core packages
## The intention is to define a common build environment
## populated with packages likely to be common to many courses
docker build --rm --build-arg BASE=${IMAGESTUB}-minimal-test -t ${IMAGESTUB}-base-test ./base

#...

One of the things I’ve done to try to generalise the build steps is to allow the name of a base image to be passed in via an optional variable (in the above case, --build-arg BASE=${IMAGESTUB}-minimal-test) so that it can be used to bootstrap the next tier. Each Dockerfile in a build step directory uses the following construction to work out which image to use as the FROM basis:

#Set ARG values using --build-arg NAME=VALUE
#Each ARG value can also have a default value
ARG BASE=psychemedia/ou-tm351-base-test
FROM ${BASE}

Using the same approach, I have used separate build tiers for the following components:

  • jupyter base: minimal Jupyter notebook install;
  • jupyter custom: add some customisation onto a pre-existing Jupyter notebook install;
  • openrefine: add the OpenRefine application; (note, we could just use BASE=ubuntu to create this as a simple, standalone OpenRefine container);
  • postgres: create a seeded PostgreSQL database; note, this could be split into two: a base postgres tier and then a customisation that adds users, creates and seeds databases etc;
  • mongodb: add in a seeded mongo database; again, the seeding could be added as an extra tier on a minimal database tier;
  • topup: a tier to add in anything I’ve missed without having to go back to rebuild from an earlier step…

The intention behind splitting out these tiers is that we might want to have a battle hardened OU postgres tier, for example, that could be shared between different courses. Alternatively, we might want to have tiers offering customisations for specific presentations of a course, whilst reusing several other fixed tiers intended to last out the life of the course.

By the by, it can be quite handy to poke inside an image once you’ve created it to check that everything is in the right place:

#Explore inside an image by entering it with a shell command
docker run -it --entrypoint=/bin/bash psychemedia/ou-tm351-jupyter-base-test -i

Once the services are in place, I add a final layer to the container that ensures supervisord is available and set up with an appropriate supervisord.conf configuration file:

##Dockerfile
#Final tier Dockerfile
ARG BASE=psychemedia/testpieces
FROM ${BASE}

USER root
RUN apt-get update && apt-get install -y supervisor

RUN mkdir -p /openrefine_projects  && chown oustudent:100 /openrefine_projects
VOLUME /openrefine_projects

RUN mkdir -p /notebooks  && chown oustudent:100 /notebooks
VOLUME /notebooks

RUN mkdir -p /var/log/supervisor
COPY monolithic_container_supervisord.conf /etc/supervisor/conf.d/supervisord.conf

EXPOSE 3334
EXPOSE 8888

CMD ["/usr/bin/supervisord"]

The supervisord.conf file is defined as follows:

##supervisord.conf
##We can check running processes under supervisord with: supervisorctl

[supervisord]
nodaemon=true
logfile=/dev/stdout
loglevel=trace
logfile_maxbytes=0
#The HOME envt needs setting to the correct USER
#otherwise jupyter throws: [Errno 13] Permission denied: '/root/.local'
#https://github.com/jupyter/notebook/issues/1719
environment=HOME=/home/oustudent

[program:jupyternotebook]
#Note the auth is a bit ropey on this atm!
command=/usr/local/bin/jupyter notebook --port=8888 --ip=0.0.0.0 --y --log-level=WARN --no-browser --allow-root --NotebookApp.password= --NotebookApp.token=
#The directory we want to start in
#(replaces jupyter notebook parameter: --notebook-dir=/notebooks)
directory=/notebooks
autostart=true
autorestart=true
startsecs=5
user=oustudent
stdout_logfile=NONE
stderr_logfile=NONE

[program:postgresql]
command=/usr/lib/postgresql/9.5/bin/postgres -D /var/lib/postgresql/9.5/main -c config_file=/etc/postgresql/9.5/main/postgresql.conf
user=postgres
autostart=true
autorestart=true
startsecs=5

[program:mongodb]
command=/usr/bin/mongod --dbpath=/var/lib/mongodb --port=27351
user=mongodb
autostart=true
autorestart=true
startsecs=5

[program:openrefine]
command=/opt/openrefine-3.0-beta/refine -p 3334 -i 0.0.0.0 -d /vagrant/openrefine_projects
user=oustudent
autostart=true
autorestart=true
startsecs=5
stdout_logfile=NONE
stderr_logfile=NONE

One thing I need to do better is to find a way to stage the construction of the supervisord.conf file, bearing in mind that multiple tiers may relate to the same service; for example, I have a jupyter-base tier to create a minimal Jupyter notebook server and then a jupyter-base-custom tier that adds in specific customisations, such as branding and course related notebook extensions.

When the final container is built, the supervisord command is run and the multiple services started.
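
As a rough check that everything has come up, something like the following can be run from a notebook inside the container (a quick sketch, assuming the pymongo and requests packages are installed in the notebook environment; the ports are the ones set in the supervisord.conf file above):

#Quick smoke test for the supervisord managed services
import requests
from pymongo import MongoClient

#MongoDB is listening on the non-default port 27351
mongo = MongoClient('localhost', 27351)
print(mongo.server_info()['version'])

#OpenRefine serves its web UI on port 3334
print(requests.get('http://localhost:3334').status_code)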

One other thing to note: we’re hoping to run TM351 environments on an internal OpenStack cluster. The current cluster only allows students to expose a single port, and port 80 at that, from the VM (IP addresses are in scant supply, and network security lockdowns are in place all over the place). The current VM exposes at least two http services: Jupyter notebooks and OpenRefine, so we need a proxy in place if we are to expose them both via a single port. Helpfully, the nbserverproxy Jupyter extension (as described in Exposing Multiple Services Via a Single http Port Using Jupyter nbserverproxy) allows us to do just that. One thing to note, though – I had to enable it via the same user that launches the notebook server in the supervisord.conf settings:

##Dockerfile fragment

RUN $PIP install nbserverproxy

USER oustudent
RUN jupyter serverextension enable --py nbserverproxy
USER root

To run the VM, I can call something like:

docker run -p 8899:8888 -d psychemedia/tm351dockermonotest

and then to access the additional services, I can browse to e.g. localhost:8899/proxy/3334/ to see the OpenRefine application.

PS in case you’re wondering why I syndicated this through RBloggers too, the same recipe will work if you’re using Jupyter notebooks with an R kernel, rather than the default IPython one.