Rapid ipywidgets Prototyping Using Third Party Javascript Packages in Jupyter Notebooks With jp_proxy_widget

Just before the break, I came across a rather entrancing visualisation of Jean Michel Jarre’s Oxygene album in the form of an animated spectrogram.

Time is along the horizontal x-axis, and frequency along the vertical y-axis. The bright colours show the presence, and volume, of each frequency as the track plays out.

Such visualisations can help you hear-by-seeing the structure of the sound as the music plays. So I wondered… could I get something like that working in a Jupyter notebook….?

And it seems I can, using the rather handy jp_proxy_widget, which provides a way of easily loading jQueryUI components, as well as the require.js module, to load and run Javascript widgets.

Via this StackOverflow answer, which shows how to embed a simple audio visualisation into a Jupyter notebook using the Wavesurfer.js package, I note that Wavesurfer.js also supports spectrograms. The example page docs are a bit ropey, but a look at the source code and the plugin docs revealed what I needed to know…

#%pip install --upgrade ipywidgets
#!jupyter nbextension enable --py widgetsnbextension

#%pip install jp_proxy_widget

import jp_proxy_widget

widget = jp_proxy_widget.JSProxyWidget()

js = "https://unpkg.com/wavesurfer.js"
js2="https://unpkg.com/wavesurfer.js/dist/plugin/wavesurfer.spectrogram.min.js"
url = "https://ia902606.us.archive.org/35/items/shortpoetry_047_librivox/song_cjrg_teasdale_64kb.mp3"

widget.load_js_files([js, js2])

widget.js_init("""
element.empty();

element.wavesurfer = WaveSurfer.create({
    container: element[0],
    waveColor: 'violet',
    progressColor: 'purple',
    loaderColor: 'purple',
    cursorColor: 'navy',
    minPxPerSec: 100,
    scrollParent: true,
    plugins: [
        WaveSurfer.spectrogram.create({
            wavesurfer: element.wavesurfer,
            container: element[0],
            fftSamples: 512,
            labels: true
        })
    ]
});

element.wavesurfer.load(url);

element.wavesurfer.on('ready', function () {
    element.wavesurfer.play();
});
""", url=url)

widget

#It would probably make sense to wire up these commands to ipywidgets buttons...
#widget.element.wavesurfer.pause()
#widget.element.wavesurfer.play(0)
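
For example, a minimal sketch of that wiring (untested, and using only the play()/pause() calls shown above):

import ipywidgets as ipw

play = ipw.Button(description="Play")
pause = ipw.Button(description="Pause")

# Proxy through to the wavesurfer object attached to the widget element.
play.on_click(lambda b: widget.element.wavesurfer.play(0))
pause.on_click(lambda b: widget.element.wavesurfer.pause())

ipw.HBox([play, pause])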

The code is also saved as a gist here and can be run on MyBinder (the dependencies should be automatically installed):

Here’s what it looks like (it may take a moment or two to load when you run the code cell…)

It doesn’t seem to work in JupyterLab though…

It looks like the full ipywidgets machinery is supported, so we can issue start and stop commands from the Python notebook environment that control the widget Javascript.

So now I’m wondering what other Javascript apps are out there that might be interesting in a Jupyter notebook context, and how easy it’d be to get them running…?

It might also be interesting to try to construct an audio file within the notebook and then visualise it using the widget.
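
If I do get round to that, something like the following numpy/scipy sketch might be a starting point (the file name and the notebook server /files/ path are assumptions on my part):

import numpy as np
from scipy.io import wavfile

# Synthesise two seconds of a 440 Hz tone and save it as a WAV file.
sr = 44100
t = np.linspace(0, 2, 2 * sr, endpoint=False)
tone = (0.5 * np.sin(2 * np.pi * 440 * t)).astype(np.float32)
wavfile.write("tone.wav", sr, tone)

# The notebook server serves files from the notebook directory along a
# /files/ path, so the widget should be able to load the generated file:
# widget.element.wavesurfer.load("/files/tone.wav")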

Accessing MyBinder Kernels Remotely from IPython Magic and from VS Code

One of the issues facing us as a distance learning organisation is how to support the computing needs of distance learning students, on a cross-platform basis, and over a wide range of computer specifications.

The approach we have taken for our TM351 Data Management and Analysis course is to ship students a Virtualbox virtual machine. This mostly works. But in some cases it doesn’t. So in the absence of an institutional online hosted notebook, I started wondering about whether we could freeload on MyBinder as a way of helping students run the course software.

I’ve started working on an image here, though it’s still divergent from the shipped VM (I need to sort out things like database seeding, and maybe fix some of the package versions…), but that leaves open the question of how students would then access the environment.

One solution would be to let students work on MyBinder directly, but this raises the question of how to get the course notebooks into the Binder environment (the notebooks are in a repo, but it’s a private repo) and out again at the end of a session. One solution might be to use a Jupyter github extension, but this would require students setting up a Github repository, installing and configuring the extension, remembering to sync (unless auto-save-and-commit is available, or could be added to the extension), and so on…

An alternative solution would be to find a way of treating MyBinder like an Enterprise Gateway server, launching a kernel via MyBinder from a local notebook server extension. But I don’t know how to do that.

Some fragments I have had laying around for a bit were the first fumblings towards a Python MyBinder client API, based on the Sage Cell client for running a chunk of code on a remote server… So I wondered whether I could do another pass over that code to create some IPython magic that lets you create a MyBinder environment from a repo and then execute code against it from a magicked code cell. Proof of concept code for that is here: innovationOUtside/ipython_binder_magic.
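
At the heart of it is the public MyBinder build API, which streams build events and, on the ready event, hands back the server URL and token. A hedged sketch of the launch step (function and variable names are my own):

import json
import requests

def launch_binder(owner, repo, ref="master"):
    # The build endpoint streams server-sent events as the image builds.
    build_url = f"https://mybinder.org/build/gh/{owner}/{repo}/{ref}"
    with requests.get(build_url, stream=True) as resp:
        for line in resp.iter_lines():
            if line.startswith(b"data: "):
                event = json.loads(line[len(b"data: "):])
                if event.get("phase") == "ready":
                    return event["url"], event["token"]

#url, token = launch_binder("innovationOUtside", "ipython_binder_magic")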

One problem is that the connection seems to time out quite quickly. The code is really hacky and could probably be rebuilt from functions in the Jupyter client package, but making sense of that code is beyond my limited cut-and-paste abilities. But: it does offer a minimal working demo of what such a thing could be like. At a push, a student could install a minimal Jupyter server on their machine, install the magic, and then write notebooks using magic to run the code against a Binder kernel, albeit one that keeps dying. Whilst this would be inconvenient, it’s not a complete catastrophe because the notebook would be being saved to the student’s local machine.

Another alternative struck me today when I saw that Yuvi Panda had posted to the official Jupyter blog a recipe on how to connect to a remote JupyterHub from Visual Studio Code. The mechanics are quite simple — I posted a demo here about how to connect from VS Code to a remote Jupyter server running on Digital Ocean, and the same approach works for connecting to our VM notebook server, if you tweak the VM notebook server’s access permissions — but it requires you to have a token. Yuvi’s post says how to find that from a remote JupyterHub server, but can we find the token for a MyBinder server?

If you open your browser’s developer tools and watch the network traffic as you launch a MyBinder server, then you can indeed see the URL used to launch the environment, along with the necessary token:

But that’s a bit of a faff if we want students to launch a Binder environment, watch the network traffic, grab the token and then use that to create a connection to the Binder environment from VS Code.

Searching the contents of pages from a running Binder environment, it seems that the token is hidden in the page:

And it’s not that hard to find… it’s in the link from the Jupyter log. The URL needs a tiny bit of editing (cut the /tree path element) but then the URL is good to go as the kernel connection URL in VS Code:
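
For example (an illustrative, made-up URL), a link grabbed from the log of the form:

https://hub.gke.mybinder.org/user/ouseful-demo-abc123xy/tree?token=a1b2c3d4e5f6

becomes a connection URL of the form:

https://hub.gke.mybinder.org/user/ouseful-demo-abc123xy/?token=a1b2c3d4e5f6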

Then you can start working on your notebook in VS Code (open a new notebook from the settings menu), executing the code against the MyBinder environment.

You can also see the notebooks listed in the remote MyBinder environment.

So that’s another way… and now it’s got me thinking… how hard would it be to write a VS Code extension to launch a MyBinder container and then connect to it?

PS By the by, I notice that the developer tools in Firefox became increasingly useful with the Firefox 71 release, in the form of a websocket inspector.

This lets you inspect traffic sent across a websocket connection. For example, if we force a page reload on a running Jupyter notebook, we can see a websocket connection:

We can then click on that connection and monitor the messages being passed over it…

I thought this might help me debug / improve my Binder magic, but it hasn’t. The notebook looks like it sends an empty ping as a heartbeat (as per the docs), but if I try to send an empty message from the magic it closes the connection? Instead, I send a message to the heartbeat channel…

OER Text Publishing Workflows Rooted on OpenLearn OU-XML Via Github, CircleCI and Github Pages Using Jupytext and nbSphinx

Slowly, slowly, my recipes are coming together for generating markdown from OU-XML sourced, variously, from modules on the OU VLE and units on OpenLearn.

The code needs a couple more passes through, but at some point I should be able to pull a simple CLI together (hopefully!). I’m still manually running some handcranked steps spread across a couple of notebooks at the moment:-(

So… where am I currently at?

First up, I have chunks of code that can generate markdown from OU-XML, sort of. The XSLT is still a bit ropey (lists were occasionally broken, for example, repeating the text [FIXED]) and the image link reconciliation for OpenLearn images doesn’t work, although I may have a way of accessing the images directly from the OU-XML image paths. (There could still be image rights issues if I were archiving the images in my own repo, which perhaps counts as a redistribution step…?)

The markdown can be handled in various ways.

Firstly, it can be edited/viewed as markdown. Chatting to colleague Jon Rosewell the other day, I realised that JupyterLab provides one way of editing and previewing markdown: in the JupyterLab file browser, right click on an .md file and you should be able to preview it:

There is also a WYSIWYG editor extension for JupyterLab (which looks like it may enter core at some point): Jupyter Scribe / jupyterlab-richtext-mode.

If you have Jupytext installed, then clicking on an .md file in the notebook tree browser opens the document into a Jupyter notebook editor, where markdown and code cells can be edited separately. An .ipynb file can then be downloaded from the notebook editor, and/or Jupytext can be used to pair markdown and .ipynb docs from the notebook file menu if you install the Jupytext notebook extension. Jupytext can also be called on the command line to convert .md to .ipynb files. If the markdown file is prefaced with Jupytext YAML metadata (i.e. if the markdown file is a “Jupytext markdown” file), then notebook metadata (which includes cell tags, for example) is preserved in the markdown and can be used for round-tripping between markdown and notebook document formats. (This is handy for RISE slideshows, for example; the slide tags are preserved in the markdown, so you can edit a RISE slideshow as a markdown document and then present it via Jupytext and a notebook server.)
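
For reference, a Jupytext markdown file opens with a YAML header along the following lines (a minimal example; Jupytext generates and maintains the actual fields):

---
jupyter:
  jupytext:
    formats: md,ipynb
    text_representation:
      extension: .md
      format_name: markdown
  kernelspec:
    display_name: Python 3
    language: python
    name: python3
---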

In a couple of simple tests I tried, the .ipynb generated from markdown using Jupytext seemed to open okay in the new Netflix Polynote notebook application (early review). This is handy, because Polynote has a WYSIWYG markdown editor… So for anyone who gripes that notebooks are too hard because writing markdown is too hard, this provides an alternative.

I also note that the wrong code language has been selected (presumably the default, in the absence of any specified language?), so I need to make sure I do tag code cells with a default language somehow… I wonder if Jupytext can do that?

Having a bunch of markdown documents, or notebooks derived from markdown documents using Jupytext, is one thing, providing as it does a set of documents that can be easily edited and interacted with, albeit in the context of a Jupyter notebook server.

However, we can also generate HTML websites based on those documents using tools such as Jupyter Book and nbsphinx. Jupyter Book uses a Jekyll engine to build HTML sites, which is a bit of a pain (I noted a demo here that used CircleCI to build a site from notebooks and md using Jupyter Book), but the nbsphinx Python package that extends the (also pip installable) Sphinx documentation engine is a much easier proposition…

As a proof-of-concept demo, the ouseful-oer/openlearn-learntocode repo contains markdown files generated from the OpenLearn Learn to code for data analysis course.

Whenever the master branch on the repository is updated, CircleCI kicks in and uses nbsphinx to build a documentation site from the markdown docs and pushes them to the repository’s gh-pages branch, which makes the site available via Github Pages: “Learn To Code…” on Github Pages.

What this means is that I should be able to edit the markdown directly via the Github website, or using an online editor such as prose.io connected to my Github account, commit changes and then let CircleCI rebuild the site for me.

(I’m pretty sure I haven’t set things up as efficiently as I could in terms of CI; what I would like is for only things that have changed to be rebuilt, but as it is, everything gets rebuilt (although the installed Python environment should be cached?). Hints / tips / suggestions about improving my CircleCI config.yml file would be much appreciated…)

At the moment, nbsphinx is set up to run .md files through Jupytext to convert them to .ipynb, which nbsphinx then eventually churns back to HTML. I’ve also disabled code cell execution in the current set up (which means the routing through .ipynb in this instance is superfluous – the site could just be generated from the .md files). But the principle is there, for a flick of a switch meaning that the code cells could be executed and their outputs immortalised in the published site HTML.
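
For reference, the relevant bits of the Sphinx conf.py look something like the following (a sketch based on the nbsphinx docs; the fmt value is an assumption for my markdown flavour):

# conf.py fragment: route .md files through Jupytext when nbsphinx builds...
extensions = ['nbsphinx']

nbsphinx_custom_formats = {
    '.md': ['jupytext.reads', {'fmt': 'md'}],
}

# ...and disable code cell execution during the build; flipping this to
# 'always' is the switch that would bake cell outputs into the HTML.
nbsphinx_execute = 'never'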

So… what next?

I need to automate the production of the root index file (index.rst) so that the table of contents is built from the parsed OU-XML. I think Sphinx handles navigation menu nesting based on header levels, which is a bit of a pain in the demo site. (It would be nice if there were a Sphinx trick that lets me increase the de facto heading level for files in a subdirectory, so that in the navigation sidebar menu each week’s content could be given its own heading and then the week’s pages listed as child pages within that. Is there such a trick?)

Slowly, slowly, I can see the pieces coming together. A tool chain looks possible that will:

  • download OU-XML;
  • generate markdown;
  • optionally, cast markdown as notebook files (via jupytext);
  • publish markdown / (un)executed notebooks (via nbsphinx).

A couple of next steps I want to tack on to the end as and when I get a chance and top up my creative energy levels: firstly, a routine that will wrap the published pages in an electron app for different platforms (Mac, Windows, Linux); secondly, publishing the content to different formats (for example, PDF, ebook) as well as HTML.

I also need to find a way of adding interaction — as Jupyter Book does — integrating something like ThebeLab or nbinteract buttons to support in-page code execution (ThebeLab) and interactive widgets (nbinteract).

News: Arise All Ye Notebooks

A handful of brief news-y items…

Netflix Polynote Notebooks

Netflix have announced a new notebook candidate, Polynote [code], capable of running polyglot notebooks (scala, Python, SQL) with fixed cell ordering, variable inspector and WYSIWYG text authoring.

At the moment you need to download and install it yourself (no official Docker container yet?) but from the currently incomplete installation docs, it looks like there may be other routes on the way…

The UI is clean, and whilst perhaps slightly more cluttered than vanilla Jupyter notebooks it’s easier on the eye (to my mind) than JupyterLab.

Cells are code cells or text cells, the text cells offering a WYSIWYG editor view:

One of the things I note is the filetype: .ipynb.

Code cells are sensitive to syntax, with a code completion prompt:

I really struggle with code completion. I can’t write import pandas as pd RETURN because that renders as import pandas as pandas. Instead I have to enter import pandas as pd ESC RETURN.

Running cells are indicated with a green sidebar to the cell (you can get a similar effect in Jupyter notebooks with the multi-outputs extension):

I couldn’t see how to connect to a SQL database, nor did I seem to get an error from running a presumably badly formed SQL query?

The execution model is supposed to enforce linear execution, but I could insert a cell after an unrun cell and get an error from it (so the execution model is not run-all-cells-above, either literally or based on analysis of the program’s abstract syntax tree?)

There is a variable inspector, although rather than showing or previewing cell state, you just get a listing of variables and then need to click through to view the value:

I couldn’t see how to render a matplotlib plot:

The IPython magic used in Jupyter notebooks throws an error, for example:

This did make me realise that cell lines are numbered on one side, with a highlight on the other side showing which line errored. I couldn’t seem to click through to raise a more detailed error trace though?

On the topic of charts, if you have a Vega chart spec, you can paste that into a Vega spec type code cell and it will render the chart when you run the cell:

The developers also seem to be engaging with the “open” thing…

Take it for a spin today by heading over to our website or directly to the code and let us know what you think! Take a look at our currently open issues and to see what we’re planning, and, of course, PRs are always welcome!

Streamlit.io

Streamlit.io is another new not-really-a-notebook alternative, pip installable and locally runnable. The model appears to be that you create a Python file and run the streamlit server against that file. Trying to print("Hello World") doesn’t appear to have any effect — so that’s a black mark as far as I’m concerned! — but the display is otherwise very clean.

Hovering top right will raise the context menu (if it has timed out and closed itself), showing whether the source file has recently been saved but not rerun, or allowing you to always rerun the execution each time the file is saved.

I’m not sure if there’s any caching of steps that are slow to run if the associated code hasn’t changed up to that point in a newly saved file.

Ah, it looks like there is…

… and the docs go into further detail, with the use of decorators to support caching the output of particular functions.
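
Something like this, if I’m reading the docs right (a hedged sketch; the data URL is made up):

import pandas as pd
import streamlit as st

@st.cache  # only rerun the function when its inputs change
def load_data(url):
    return pd.read_csv(url)

df = load_data("https://example.com/data.csv")  # hypothetical URL
st.write(df.describe())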

I need to play with this a bit more, but it looks to me like it’d make for a really interesting VS Code extension. It also has the feel of Scripted Forms, as was (a range of widgets are available in streamlit as UI components), and of R’s Shiny application framework. It also feels like something I guess you could do in JupyterLab, perhaps with a bit of Jupytext wiring.

In a similar vein, a package called Handout also appeared a few weeks ago, offering the promise of “[t]urn[ing] Python scripts into handouts with Markdown comments and inline figures”. I didn’t spot it in the streamlit UI, but it’d be useful to be able to save or export the rendered streamlit document, e.g. as an HTML file, or even as an .ipynb notebook with run cells, rather than having to save it via the browser save menu?

Wolfram Notebooks

Wolfram have just announced their new, “free” Wolfram Notebooks service, the next step in the evolution of Wolfram Cloud (announcement, review), I guess? (I scare-quote “free” because, well, Wolfram; you’d also need to think carefully about the “open” and “portable” aspects…)

Actually, I did try to have a play, but I went to the various sites labelled as “Wolfram Notebooks” and I couldn’t actually find a 1-click get started (at all, let alone for “free”) link button anywhere obvious?

Ah… here we go:

[W]e’ve set it up so that anyone can make their own copy of a published notebook, and start using it; all they need is a (free) Cloud Basic account. And people with Cloud Basic accounts can even publish their own notebooks in the cloud, though if they want to store them long term they’ll have to upgrade their account.

Fragment: Indexing Local Jupyter Notebooks for Search

It’s been some time since I last explored this (e.g. here and here), and as far as I know no other solutions have appeared since, but a question still remains as to how to effectively search over a set of notebooks.

Partial alternative solutions that may be worth noting include:

  • nbscan for searching over notebooks from the command-line;
  • nbgallery bakes in Solr/sunspot; it’d be really nice if the nbgallery search tools could be easily decoupled so the search could be added to an arbitrary Jupyter notebook, or JupyterHub, server as an extension…
  • this simple search engine with autocomplete by Simon Willison.

There is also the lunr based search of Jupyter Book (related issue). (The more recent elasticlunr Javascript search engine also looks interesting… perhaps even more so than lunr.js…)

One of the things I often wondered about in respect of building a notebook search engine index would be how to crawl / index freshly updated notebooks.

One way would presumably be to regularly crawl the directory path in which notebooks live looking for notebook files that have a changed timestamp compared to the last time they were indexed; another might be to set up some sort of watcher on the operating system that calls the indexer whenever it spots a file being updated (maybe something like fswatch?).
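
By way of illustration, a hedged sketch of the watcher approach using the Python watchdog package rather than fswatch (index_notebook() is a hypothetical indexing function):

import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class NotebookIndexHandler(FileSystemEventHandler):
    def on_modified(self, event):
        # Reindex any notebook file that gets saved.
        if event.src_path.endswith(".ipynb"):
            index_notebook(event.src_path)  # hypothetical indexer

observer = Observer()
observer.schedule(NotebookIndexHandler(), path=".", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()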

Another way might be to use something like the pgcontents contents manager to save (or process) notebooks into a search engine index database. (For other examples of Jupyter notebook content managers, see this Tracking Jupyter round-up. I wonder, is there a sqlite content manager that can save notebooks directly into SQLite? Would the pgcontents extension handle that with little or no modification, other than to the supplied database connection string?) If notebooks were saved as notebooks to disk, and into a database for indexing as part of the search engine, how would the indexed notebook also be linked back to the notebook on disk so it could be linked to via search results?

Thinks: how is nbgallery architected? Where are notebooks saved to? How is the Solr search engine index managed?

More generally, I wonder: are there any Python based, simple full-text search engines with local filesystem crawlers/monitors/indexers out there?
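
Whoosh is one pure-Python candidate for at least the indexing part; a hedged sketch of indexing the markdown cells of a notebook (file and directory names hypothetical):

import json
import os
from whoosh.fields import Schema, TEXT, ID
from whoosh.index import create_in

# Index notebook markdown cells against the notebook path.
schema = Schema(path=ID(stored=True), content=TEXT(stored=True))
os.makedirs("indexdir", exist_ok=True)
ix = create_in("indexdir", schema)

writer = ix.writer()
with open("example.ipynb") as f:  # hypothetical notebook
    nb = json.load(f)
for cell in nb["cells"]:
    if cell["cell_type"] == "markdown":
        writer.add_document(path="example.ipynb", content="".join(cell["source"]))
writer.commit()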

PS Other search engines to have a look at:

Rescuing Python Module Code From Cluttered Jupyter Notebooks

One of the ways I use Jupyter notebooks is to write stream-of-consciousness code.

Whilst the code doesn’t include formal tests (I never got into the swing of test-driven development, partly because I’m forever changing my mind about what I need a particular function to do!), the notebooks do contain a lot of implicit testing as I try to build up a function (see Programming in Jupyter Notebooks, via the Heavy Metal Umlaut for an example of some of the ways I use notebooks to iterate the development of code, either across several cells or within a single cell).

The resulting notebooks tend to be messy in a way that makes it hard to reuse code contained in them easily. In particular, there are lots of parameter setting cells and code fragment cells where I test specific things out, and then there are cells containing functions that pull together separate pieces to perform a particular task.

So for example, in the fragment below, there is a cell where I’m trying something out, a cell where I put that thing into a function, and a cell where I test the function:
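
By way of a made-up stand-in for that fragment:

# Cell 1: trying something out
raw = " pariS "
raw.strip().title()

# Cell 2: put that thing into a function
def clean_name(raw):
    return raw.strip().title()

# Cell 3: test the function
clean_name(" pariS ")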

My notebooks also tend to include lots of markdown cells where I try to explain what I want to achieve, or things I still need to do. Some of these usefully document completed functions, others are more note form that relate to the development of an idea or act as notes-to-self.

As the notebooks get more cluttered, it gets harder to use them to perform a particular task. I can’t load the notebook into another notebook as a module because as well as loading the functions in, all the directly run code cells will be loaded in and then executed.

Jupytext comes partly to the rescue here. As described in Exploring Jupytext – Creating Simple Python Modules Via a Notebook UI, we can add active-ipynb tags to cells to instruct Jupytext that those code cells should only be executable in the notebook:

In the case of the active-ipynb tag, if we generate a Python file from a notebook using Jupytext, the active-ipynb tagged code cells will be commented out. But that can still make for quite a messy Python file.

Via Marc Wouts comes this alternative solution for using an nbconvert template to strip out code cells that are tagged active-ipynb; I’ve also tweaked the template to omit cell count numbers and only include markdown cells that are tagged docs.


echo """{%- extends 'python.tpl' -%}

{% block in_prompt %}
{% endblock in_prompt %}

{% block markdowncell scoped %}
{%- if \"docs\" in cell.metadata.tags -%}
{{ super() }}
{%- else -%}
{%- endif -%}
{% endblock markdowncell %}

{% block input_group -%}
{%- if \"active-ipynb\" in cell.metadata.tags  -%}
{%- else -%}
{{ super() }}
{%- endif -%}
{% endblock input_group %}""" > clean_py_file.tpl

Running nbconvert using this template over a notebook:

jupyter nbconvert "My Messy Notebook.ipynb" --to script --template clean_py_file.tpl

generates a My Messy Notebook.py file that includes code from cells not tagged as active-ipynb, along with commented out markdown from docs tagged markdown cells, providing a much cleaner Python module file.

With this workflow, I can revisit my old messy notebooks, tag the cells appropriately, and recover useful module code from them.

If I only ever generate (and never edit by hand) the module/Python files, then I can maintain the code from the messy notebook as long as I remember to generate the Python file from the notebook via the clean_py_file.tpl template. Ideally, this would be done via a Jupyter content manager hook so that whenever the notebook was saved, as per Jupytext paired files, the clean Python / module file would be automatically generated from it.
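
For example, a hedged sketch of what that could look like as a post-save hook in jupyter_notebook_config.py (the hook signature is as per the notebook docs; the template path is assumed to be resolvable):

import subprocess

def post_save(model, os_path, contents_manager):
    """Regenerate the module file whenever a notebook is saved."""
    if model["type"] != "notebook":
        return
    subprocess.check_call([
        "jupyter", "nbconvert", os_path,
        "--to", "script", "--template", "clean_py_file.tpl",
    ])

c.FileContentsManager.post_save_hook = post_save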

Just by the by, we can load in Python files that contain spaces in the filename as modules into another Python file or notebook using the formulation:

tsl = __import__('TSL Timing Screen Screenshot and Data Grabber')

and then call functions via tsl.myFunction() in the normal way. If running in a Jupyter notebook setting (which may be a notebook UI loaded from a .py file under Jupytext) where the notebook includes the magics:

%load_ext autoreload
%autoreload 2

then whenever a function from a loaded module file is called, the module (and any changes to it since it was last loaded) are reloaded.

PS thinks… it’d be quite handy to have a simple script that would autotag all notebook cells as active-ipynb; or perhaps just have another template that ignores all code cells not tagged with something like active-py or module-py. That would probably make tag gardening in old notebooks a bit easier…
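
A hedged sketch of that autotagging script using nbformat (notebook name hypothetical):

import nbformat

nb = nbformat.read("My Messy Notebook.ipynb", as_version=4)
for cell in nb.cells:
    if cell.cell_type == "code":
        # Add the active-ipynb tag to every code cell that lacks it.
        tags = cell.metadata.setdefault("tags", [])
        if "active-ipynb" not in tags:
            tags.append("active-ipynb")
nbformat.write(nb, "My Messy Notebook.ipynb")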

Fragment – Jupyter Kernels / MyBinder as a Remote Code Execution Sandbox for Moodle

Although I don’t know for sure, I suspect that administrators of computing infrastructure in educational establishments are wary of requests from academics for compute services that allow students to run arbitrary code.

One of the main reasons why an educator would want to support this is because setting up an environment can be hard: if you want a student to focus on writing code that makes use of particular packages, you probably don’t want them engaging in arcane sys admin practices and spending all their time trying to install those packages in the first place.

For the IT department, the thought of running arbitrary code that could be produced either by novices or deliberately malicious users is likely to raise several well-founded concerns: how do we stop users using the code environment to attack the server or network the code is running on; how do we stop folk from running code on our servers that could be used to attack external sites; and how do we control the resource requirements (storage, compute, network) when mistakes happen and folk try to repeatedly download the internet to our server.

One way of making hosted compute available to students is to execute code within isolated sandboxed environments that you can park in a safe area of the network and monitor closely.

In our Moodle VLE, the Moodle CodeRunner environment is used to allow students to run small fragments of code within just such an environment when completing interactive quiz questions. (I provide a quick review of the Moodle CodeRunner plugin in the post [A] Quick First Look At Moodle CodeRunner.)

Presumably, someone somewhere has done a security audit and decided that the sandboxed code execution environment is a safe one and signed off on its use.

Another approach, described in this fragment on Jupyter Notebooks and Moodle, is the SageCell filter for Moodle, which allows you to run code against an external (stateless) SageCell server:

<?php
/**
 * SageCell filter for Moodle 3.4+
 *
 *  This filter will replace any Sage code in [sage]...[/sage]
 *  with a Ajax code from http://sagecell.sagemath.org
 *
 * @package    filter_sagecell
 * @copyright  2015-2018 Eugene Modlo, Sergey Semerikov
 * @license    http://www.gnu.org/copyleft/gpl.html GNU GPL v3 or later
 */

defined('MOODLE_INTERNAL') || die();

/**
 * Automatic SageCell embedding filter class.
 *
 * @package    filter_sagecell
 * @copyright  2015-2016 Eugene Modlo, Sergey Semerikov
 * @license    http://www.gnu.org/copyleft/gpl.html GNU GPL v3 or later
 */
class filter_sagecell extends moodle_text_filter {

    /**
     * Check text for Sage code in [sage]...[/sage].
     *
     * @param string $text
     * @param array $options
     * @return string
     */
    public function filter($text, array $options = array()) {

        if (!is_string($text) or empty($text)) {
            // Non string data can not be filtered anyway.
            return $text;
        }

        if (strpos($text, '[sage]') === false) {
            // Performance shortcut - if there is no [sage] tag, nothing can match.
            return $text;
        }

        $newtext = $text; // Fullclone is slow and not needed here.

        $search = '/\[sage](.+?)\[\/sage]/is';
        $newtext = preg_replace_callback($search, 'filter_sagecell_callback', $newtext);

        if (is_null($newtext) or $newtext === $text) {
            // Error or not filtered.
            return $text;
        }

        return $newtext;
    }

}

/**
 * Replace Sage code with embedded SageCell, if possible.
 *
 * @param array $sagecode
 * @return string
 */
function filter_sagecell_callback($sagecode) {

    // SageCell code from [sage]...[/sage].
    $output = $sagecode[1];
    $output = str_ireplace("", "\n", $output);
    $output = str_ireplace("

", "\n", $output);
    $output = str_ireplace("
", "\n", $output);
    $output = str_ireplace("
", "\n", $output);
    $output = str_ireplace("
", "\n", $output);
    $output = str_ireplace("&nbsp;", "\x20", $output);
    $output = str_ireplace("\xc2\xa0", "\x20", $output);
    $output = clean_text($output);
    $output = str_ireplace("&lt;", "", $output);

    $id = uniqid("");

    $output = "" .
    "" .
        "sagecell.makeSagecell({inputLocation: \"#" . $id . "\"," .
        "evalButtonText: \"Evaluate\"," .
        "autoeval: true," .
        "hide: [\"evalButton\", \"editor\", \"messages\", \"permalink\", \"language\"] }" .
    ");" .
    "" .
    "
<div id="">". $output. "</div>
";

    return $output;
}

This looks to me like the SageCell Moodle filter essentially rewrites a [sage]...[/sage] delimited code block within a Moodle environment as a Javascript backed SageCell form, and then lets users run the code embedded in the form against the remote server. This sort of thing could presumably be used to support interactive, executable code activities within a Moodle hosted web page, for example.

As I remarked previously, it’s not hard to imagine doing something similar to provide a [mybinder repository="..."]...[/mybinder] filter that could use a Javascript library such as ThebeLab or Juniper to provide a similar style of interaction backed by a MyBinder launched repository, though minor tweaks may be required around those packages to handle stateless, rather than stateful, transactions if repeated calls are made to the server.

Going back to the CodeRunner plugin (as described here):

[i]nternally CodeRunner is designed to support multiple sandboxes, implemented as subclasses of the abstract class qtype_coderunner_sandbox – see sandbox.php. Sandboxes are essentially plugins to CodeRunner. Several different ones have been used over the years but the only current ones are the jobe sandbox (file jobesandbox.php) and the ideone sandbox. The latter interfaces to the Sphere On-line judge server but is now more-or-less defunct. Both of those sandboxes run as services. CodeRunner can support multiple sandboxes at the same time and questions can be configured to select a particular sandbox (if desired). By default the first available sandbox that supports the language required by the question is used.

So could we use a MyBinder launched Jupyter server to provide sandboxed code execution?

One advantage of this would be that we could define a Jupyter environment that students could use on their own machines, or that we could host via a hosted notebook server, and that same environment could be used for CodeRunner style assessment.

Another advantage would be that if we want to run student created arbitrary code for teaching activities as well as CodeRunner based assessment activities, we’d only need to sign off on one sandboxed code execution environment rather than several.

So what’s required?

It’s years since I last used PHP, but I thought I’d have a go at creating a simple Python client that would let me:

  • start a MyBinder server against a specified Github repo;
  • start a kernel;
  • run a small code sample in the kernel and get a code execution response back.

Cribbing heavily from juniper.js and this rather handy sagecell-client.py, I came up with a hacky recipe that works as a minimal proof of concept here: mybinder_py_client-ipynb.
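
For reference, the kernel-starting step against the launched server looks something like this (a hedged sketch using the Jupyter server REST API; error handling omitted):

import requests

def start_kernel(server_url, token):
    # server_url as returned by the Binder launch, with a trailing slash.
    resp = requests.post(
        server_url + "api/kernels",
        headers={"Authorization": "token " + token},
    )
    resp.raise_for_status()
    return resp.json()["id"]

# Code is then executed by sending Jupyter protocol execute_request
# messages over the kernel's /api/kernels/<kernel_id>/channels websocket.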

I think this is stateful, in that we can execute several code blocks one after the other and exploit state set by previous calls to the same kernel. It would probably also make sense to have a call that forces a new kernel for each code execution call, as well as providing a recipe for killing a kernel.

The next step in trying to use this approach for a CodeRunner sandbox would presumably be to try to create a simple PHP based MyBinder client; then the next step would be to use that in a CodeRunner sandbox subclass.

But that’s out of scope for me atm…

Please let me know in the comments if you have a go at this… or know of any other Moodle / Jupyter integrations…