Fragment – OpenLearn Jupyter Books Remix

Following on from my quandary the other day, I stuck with the simple markdown converter route for OU-XML and now have the first glimmerings of a proof of concept conversion route from OU-XML to Jupyter books.

As an example, I grabbed the OU-XML for the Learn to Code for Data Analysis course on OpenLearn, used XSLT to transform it to markdown, and then published the markdown to Github using Jupyter Book.

There’s an example here (repo).

You’ll notice quite a few issues with it:

  • there are no images (the OU-XML uses file paths to images that don’t resolve; there is a way round this — grab links from the HTML version of the course and swap those in for the OU-XML image links — but it’s fiddly; I have code that does it for scrapes of courses from the VLE which will work in the OpenLearn setting, but I should tidy the code so it works nicely for the VLE and OpenLearn so it’s still on the to do list…);
  • code is extracted, but so are bits of code output, code error messages etc. The way code / code outputs are handled in OU-XML is a bit hit and miss (crap in, crap out etc) but it’s okay as a starting point;
  • I haven’t done proper XSLT handlers for all the OU-XML elements yet; if you see something wrong, let me know and I’ll try to fix it.

The markdown produced by the XSLT can be used by Jupytext to generate notebooks (or, if you have things set up, just open an .md file in the Jupyter notebook UI and it will open as a notebook). One thing the Jupytext commandline converter doesn’t seem to do is add a (dummy) kernelspec attribute to the generated .ipynb file, so if I try to bulk convert .md files to .ipynb and then use those with Jupyter Book, Jupyter Book throws an error. If I open the notebook against a kernel and save it, it works fine with Jupyter Book.
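
As a workaround sketch, assuming jupytext and nbformat are installed (the filename here is just illustrative), a dummy kernelspec can be patched in programmatically after conversion:

    import jupytext
    import nbformat

    # Convert the Jupytext markdown file to a notebook object
    nb = jupytext.read("example.md")

    # Add a (dummy) kernelspec if one isn't already present,
    # so that Jupyter Book doesn't complain about a missing kernel
    nb.metadata.setdefault("kernelspec", {
        "name": "python3",
        "display_name": "Python 3",
        "language": "python",
    })

    nbformat.write(nb, "example.ipynb")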

The reason for converting things to .ipynb format is that Jupyter Book can make those pages interactive. I’ve popped a couple of valid  .ipynb rather than .md pages into the demo to illustrate how it (sort of!) works:

The notebook generated pages are enriched with a notebook download button and a ThebeLab button. Clicking the ThebeLab button makes the code cells interactive via ThebeLab/MyBinder.

Unfortunately, most of the code is a bit borked (eg missing dependencies, or missing local files) but you can still edit the code in a ThebeLab’d cell and execute it.

One other issue is that Jupyter Book fires up a separate MyBinder kernel for each notebook / page, which means you can’t carry state between pages. It’d be quite handy if all the ThebeLab panels were connected to the same IPython instance. (@mdpacer has an example of a related sort of common state conversation between a console and a notebook here.) That way, you could run examples in one chapter and carry the state through to later chapters. (Yes, I KNOW there are issues doing that. But if I didn’t raise it, folk would complain that you CAN’T carry state across pages…!)

So… what next?

For content in markdown format, I think you can just edit the files in the book’s _build folder and the book page should be updated automatically (erm, I should check this…). If you enable Jupytext to pair notebooks and markdown, then editing one of the notebook files in a Jupyter notebook environment should create a .md file that could be added to the _build directory and thence update the book page.
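
By way of a sketch, pairing can be set up with the Jupytext Python API (I think the command line equivalent is jupytext --set-formats ipynb,md notebook.ipynb; the filename is illustrative):

    import jupytext

    # Read the notebook and declare that it should be kept in sync with a markdown version
    nb = jupytext.read("notebook.ipynb")
    nb.metadata.setdefault("jupytext", {})["formats"] = "ipynb,md"

    # Write both representations; subsequent saves from the notebook UI
    # (with the Jupytext contents manager enabled) should keep the pair in sync
    jupytext.write(nb, "notebook.ipynb")
    jupytext.write(nb, "notebook.md")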

In part, the above all speaks to things like Fragment – ROER: Reproducible Open Educational Resources and OERs in Practice: Re-use With Modification where we take openly licensed educational materials and get them into a format that is raw enough for us to be able to edit them relatively easily. We can also get them into an environment where we can start repurposing and modifying them (for example, we could run and edit the notebooks in MyBinder) as well as publishing any modifications we make simply by recommitting any updated files to the Github repo.

The above approach also speaks to Build Your Own Learning Tools (BYOT). For example, now I have a way to relatively easily dump an OpenLearn course into an environment where I can start swapping static bits of content out for interactive bits I may want to develop in their place.

PS For anyone in the OU, I can grab OU course materials that have an OU-XML basis from the VLE, so if there’s a course you’d like me to try it with, let me know…

Fragment – Jupyter Book Electron App

Noting an experiment by Ines Montani in creating an electron app wrapped version of the spacy course/python course creator template, I wondered how easy it would be to wrap a static Jekyll / Jupyter Book created site, such as one generated from an OU-XML feedstock, as an electron app.

The reason for doing this? With the book bundled as an electron app, you could download the app and run it, standalone, to view the book, with no web server required. (The web server used to serve the book is bundled inside the electron app.)

A quick search around turned up an electron appifier script (thmsbfft/electron-wrap), which did the job nicely:

cd to the folder that contains the files, and run:

bash <(curl -s https://raw.githubusercontent.com/thmsbfft/electron-wrap/master/wrap.sh)

If all the js required to handle interactives used in the book is available locally, the book should work, in a fully featured way, even when offline.

(I also noted a script for wrapping a website in an electron app — jiahaog/nativefier — which may be interesting… It can build for various platforms, which would be handy, but it only wraps a website, it doesn’t grab the files and serve them locally inside the app…? See also Google Chrome app mode.)

So what next?

A couple of things I want to try, though probably won’t have a chance to do so for the next couple of weeks at least… (I need a break: holiday, with no computer access… yeah!)

First, getting Jupyter Book python interactions working locally. At the moment, Python code execution is handled by ThebeLab launching a MyBinder container and using a Jupyter server created there to provide execution kernels. But ThebeLab can also run against a local kernel, for example using the connection settings:

 
    {
      bootstrap: true,
      kernelOptions: {
        name: "python3",
        serverSettings: {
          "baseUrl": "http://127.0.0.1:8888",
          "token": "test-secret"
        }
      },
    }
  

and starting the Jupyter server with settings something along the lines of:

jupyter notebook --NotebookApp.token=test-secret --NotebookApp.allow_origin='https://thebelab.readthedocs.io'

(The allow_origin setting is used to allow traffic from another IP domain, such as the domain a ThebeLab page is being served from.)

My primary use case for the local server is so that I can reimagine the delivery of notebooks in the TM351 VM as a Jupyter Book, presenting the notebooks in book form rather than vanilla notebook form and allowing code execution from within the book against the Jupyter server running locally in the VM.

But if Jupyter Book can run against a local kernel, then could we also distribute, and run, a Jupyter server / kernel within the electron app? This post on Building a deployable Python-Electron App suggests a possible way, using pyinstaller to “freeze (package) Python applications into stand-alone executables, under Windows, GNU/Linux, Mac OS X, FreeBSD, Solaris and AIX”. Said executables could then be added to the electron app and provide the Python environment within the app, which means it should run in a standalone mode, without the need for the user to have Python, a Jupyter server environment, or the required Python environment installed elsewhere on their machine? Download the electron book app and it should just run; and the code should run against the locally bundled Jupyter kernel; and it should all work offline, too, with no other installation requirements or side-effects. And in a cross-platform way. Perfect for OUr students…

An alternative, perhaps easier, approach might be to bundle a conda environment in the electron app. In this case, snakestagram could help automate the selection of an appropriate environment for different platforms (h/t @betatim for clarifying what snakestagram does).

(I’m not sure we could go the whole way and put all the TM351 environment into an electron app — we call on Postgres, MongoDB and OpenRefine too — but in the course materials a lot of the Postgres query activities could be replaced with a SQLite3 backend.)

Looking around, a lot of electron apps seem to require the additional installation of Python environments on the host that are then called from the electron app (example). Finding a robust recipe for bundling Python environments within electron apps would be really useful I think?

There are a couple of Jupyter electron apps out there already, aside from nteract (which runs against an external kernel). For example, jupyterlab/jupyterlab_app, which also runs against an external kernel, and AgrawalAmey/nnfl-app, which looks like it downloads Anaconda on first run (I’m not sure if the installation is “outside” the app, onto the host filesystem? Or does it add it to a filesystem within the electron app context?). The nnfl-app also has support for assignment handling and assignment uploads to a complementary server (about: Jupyter In Classroom).

Finally, I wonder if Jupyter Book could run Python code within the browser environment that the electron app essentially provides using Pyodide, “the Python scientific stack, compiled to WebAssembly” that gives Python access within the browser?

Fragment – TM351 Notebooks Jupyter Books In the VM and Via An Electron App

Another fragment because I keep trying to tidy the code so it’s ready to share and then keep getting distracted…

I’ve previously posted crude proof of concept demos transforming OU-XML to markdown and rendering it via Jupyter Book, and wrapping a Jupyter Book in an electron app shell.

I’ve managed to get a Jupyter Book generated from TM351 notebooks working under a local server inside the TM351 VM, executing code against the notebook server in the VM, and I’ve also managed to connect the electron app version to the notebook server inside the VM. (The TM351 VM already runs a webserver, so I could just define the Jupyter Book to live down a particular directory path when I built it, and then pop it into a corresponding subdirectory in the root directory served by the webserver.)

The following not very informative demo shows me running a query against the database inside the VM twice, adding a row with a particular PK (which works) then trying again (which correctly fails because that PK is already in use).

The magic we use inside the VM for previewing a schema diagram of the current db also seems to work.

The VM solution currently requires two sorts of trust for executing code against the VM kernel from the electron app to work correctly: firstly, that the server is on a specified port (we default to mapping port 35180 when distributing the VM and this port is baked into the electron app book pages); secondly, that no-one attacks the VM — the notebook server is running pretty wide open:

jupyter notebook --port=8888 --ip=0.0.0.0 --no-browser --notebook-dir=/vagrant/notebooks --allow-root --NotebookApp.allow_origin='*' --NotebookApp.allow_remote_access=True  --NotebookApp.token=notverysecret

One of the nice things about the Jupyter Book view is that it bakes in search. I’ve had a couple of aborted looks at notebook search before (eg Initial Sketch – Searching Jupyter Notebooks Using lunr and More Thoughts On Jupyter Notebook Search as well as via nbgallery) and the Jupyter Book solution uses lunr which was one of the things I’d previously used to sketch a VM notebook search engine.

At the moment, the search part works but the links are broken – a template needs tweaking somewhere, I think, or in the short term we could work around it with a hacky fix, post-processing the generated pages and repairing the broken URLs with a string replace.
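
A minimal sketch of that string-replace workaround might look something like the following (the site directory and the broken/fixed URL prefixes are placeholders; the actual strings would need checking against the generated pages):

    from pathlib import Path

    BROKEN_PREFIX = "/wrong/prefix/"   # placeholder for whatever the template currently emits
    FIXED_PREFIX = "/tm351book/"       # placeholder for the path the book is actually served from

    # Walk the built site and patch the broken link prefix in every HTML page
    for page in Path("_site").glob("**/*.html"):
        html = page.read_text()
        page.write_text(html.replace(BROKEN_PREFIX, FIXED_PREFIX))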

One of the problems with lunr is that the index is baked into a JSON file. datasette is now able to serve feeds off the current state of a SQLite3 database, even if the database is updated by another process, which makes me wonder: would one way of getting a “live” notebook search into the VM be to monitor the notebook directory, update a SQLite database with the current notebook contents whenever a notebook is added or updated (we could archive deleted notebooks in the database and just mark them as deleted without actually deleting them), and then use datasette to generate and serve a current JSON index to lunr whenever a page is refreshed?
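
As a rough sketch of the indexing side (not a tested implementation; the table layout, column names and paths are made up for illustration), something along these lines could rebuild a SQLite index for datasette to serve:

    import json
    import sqlite3
    from pathlib import Path

    def index_notebooks(notebook_dir="/vagrant/notebooks", db_path="notebook_index.db"):
        conn = sqlite3.connect(db_path)
        conn.execute("""CREATE TABLE IF NOT EXISTS notebooks
                        (path TEXT PRIMARY KEY, content TEXT, deleted INTEGER DEFAULT 0)""")
        for nb_path in Path(notebook_dir).glob("**/*.ipynb"):
            nb = json.loads(nb_path.read_text())
            # Concatenate the source of all cells as the searchable text
            text = "\n".join("".join(cell.get("source", [])) for cell in nb.get("cells", []))
            conn.execute("INSERT OR REPLACE INTO notebooks (path, content) VALUES (?, ?)",
                         (str(nb_path), text))
        conn.commit()
        conn.close()

    # This could be called from a directory watcher whenever a notebook is added or updated
    index_notebooks()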

So what does this electron and Jupyter Book stuff mean anyway?

  1. we can wrap TM351 notebooks in a Jupyter Book style UI and serve it inside the VM against the notebook server running in the VM. This provides an alternative UI and some interesting pedagogical differences. For example, in the traditional notebook UI, students are free to run, edit, add and delete cells, whereas in the book UI they can only run and edit them. Also, in the book, if the page is refreshed after any edits, the original code content will be re-presented and the edited version lost;
  2. we can wrap the TM351 notebooks in a Jupyter Book style UI and wrap that in an electron container. A student can then view the book UI on the desktop and if the VM is available on the expected port, execute code against it. If the VM is not available, the students can still read the notebooks, if not actually execute the code. (One thing we could do is publish a version of the book with all the code cells run and showing their outputs…)
  3. the electron app could also be built to tell ThebeLab (which handles the remote code execution) to launch a kernel somewhere else, such as on MyBinder or on an OU managed on-demand Binder kernel server. (I guess we could also add electron menu options that might allow us to select a ThebeLab execution environment?) This latter approach could be interesting: it means we could provide compute backend support to students *only* when they need it in quite a natural way. The notebook content will always be available and readable, even offline, although in the offline case code would not be (remotely) executable.
  4. distributing the notebooks *as an app* means we can distribute content we don’t want to share openly and in public whilst still freeloading on the possibility of executing code in the appified Jupyter Book on a MyBinder compute backend server;
  5. on my to do list (still fettlin’ the code) is to provide a combined Jupyter Book view that includes content from the VLE as well as the notebooks. With both sets of content in the same Jupyter book, we could then readily run a search over VLE and notebook content at the same time (though forum search would still be missing). At the moment, search is an issue because: 1) VLE search sucks and it only works over the VLE hosted content, which *does not* include the notebooks; 2) we don’t offer any notebook search tools (other than hints of things students could do themselves to search notebooks).

I’m not sure I’m going to get everything as “safe” to leave as I’d hoped to before a holiday away from’t compootah, so I just hope it’s stable enough for me to remember how to finish off all the various unfinished bits in a week or two!

Binder Base Boxes, Several Ways…

A couple of weeks ago, Chris Holdgraf published a handy tip on the Jupyter Discourse site about how to embed custom github content in a Binder link with nbgitpuller.

One of the problems with (features of…) MyBinder is that if you make a change to a repo, even if it’s just a change to the README, it will spawn a rebuild of the Docker image built from the repo the next time the repo is launched onto MyBinder.

With the recent announcement of the Binder Federation, whereby there are multiple clusters (currently two…) onto which MyBinder launch requests are mapped, and if each cluster maintains its own Docker image hub, then with N clusters available your next N launches may all require a rebuild if each launch request is mapped to a different cluster.

So how does nbgitpuller help? If you install nbgitpuller into a Binderised repository, you can launch a container on MyBinder with a git-pull? argument. This will grab the contents of a specified repository into a notebook server environment before presenting you with the notebook homepage.

What this means is that we can construct a MyBinder URL that will:

  • launch a container built from one repo; and
  • populate it with files pulled from another.

The advantage of this is that you can create one repo with a complex set of build requirements and build a MyBinder image from that once and once only. If you also maintain a second repository with notebook files, or a package definition, with frequent changes, but run it in a Binderised container launched from the “fixed” build repo, you won’t need to rebuild the container each time: just launch from the pre-built one and then synch the changed content in from the other repo.

To pull the contents of a repo http://github.com/USER/REPO into a MyBinder container built from a particular binder-base-boxes branch, use a MyBinder URL of the form:

https://mybinder.org/v2/gh/ouseful-demos/binder-base-boxes/BASEBOXBRANCH/?urlpath=git-pull?repo=https://github.com/USER/REPO

To pull the contents from a particular branch of a repo http://github.com/USER/REPO/tree/BRANCH, use a MyBinder URL of the form:

https://mybinder.org/v2/gh/ouseful-demos/binder-base-boxes/BASEBOXBRANCH/?urlpath=git-pull?repo=https://github.com/USER/REPO%26branch=BRANCH

Note the escaping on the & conjunction between the repo and branch arguments that keeps it inside the scope of the git-pull?repo phrase.

To pull the contents from a particular branch of a repo http://github.com/USER/REPO/tree/BRANCH and launch into a particular notebook, use a MyBinder URL of the form:

https://mybinder.org/v2/gh/ouseful-demos/binder-base-boxes/BASEBOXBRANCH/?urlpath=git-pull?repo=https://github.com/USER/REPO%26branch=BRANCH%26subPath=FILENAME.ipynb
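
By way of illustration, here’s a small helper sketch for generating links of this form (the placeholder values follow the examples above):

    def nbgitpuller_url(base_box, content_repo, branch=None, subpath=None):
        """Build a MyBinder launch URL that git-pulls content into a pre-built base box.

        base_box is the user/repo/branch spec of the pre-built Binder repo,
        e.g. "ouseful-demos/binder-base-boxes/BASEBOXBRANCH".
        """
        gitpull = "git-pull?repo=" + content_repo
        if branch:
            # %26 is an escaped &, keeping the argument inside the git-pull? scope
            gitpull += "%26branch=" + branch
        if subpath:
            gitpull += "%26subPath=" + subpath
        return "https://mybinder.org/v2/gh/{}/?urlpath={}".format(base_box, gitpull)

    print(nbgitpuller_url("ouseful-demos/binder-base-boxes/BASEBOXBRANCH",
                          "https://github.com/USER/REPO",
                          branch="BRANCH", subpath="FILENAME.ipynb"))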

You can see several examples in the various branches of https://github.com/ouseful-demos/binder-base-boxes.

See Feeding a MyBinder Container Built From One Github Repository With the Contents of Another for an earlier review of this approach (which I have to admit, I’d forgotten I’d posted when I started this post!).

On my to do list is to try to add a tab to the nbgitpuller/link generator to simplify the process of link creation. But in addition to a helper tool, is there a convention we might adopt to make it clearer when we are using this sort of split build/content repo approach?

Github conventionally uses the gh-pages branch as a “reserved” branch for constructing Github Pages docs related to a particular repo. Could we take a similar approach for defining a “Binder build” branch?

The binder/ directory in a repo can be used to partition Binder build requirements in a repo, but there are a couple of problems associated with this:

  • a maintainer may not want to have the binder/ directory cluttering their package repo;
  • any updates to the repo will force a rebuild of the Binder image next time the repo is run on a particular Binder node. (With Binder federation, if there are N hosts in the federation, after updating a repo, is it possible that my next N attempts to run the repo on MyBinder may require a rebuild if I am directed to a different host each time?)

If by convention something like a binder-build branch was used to contain the build requirements for a repo, then the process for calling a build (by default) could be simplified.

Eg rather than having something like:

https://mybinder.org/v2/gh/colinleach/binder-box/master/?urlpath=git-pull?repo=https://github.com/colinleach/astro-Jupyter

we would have something like:

https://mybinder.org/v2/gh/colinleach/astro-Jupyter/binder-build/?urlpath=git-pull?repo=https://github.com/colinleach/astro-Jupyter

which could simplify to something that defaults to a build from binder-build branch (the “build” branch) and nbgitpull from master (the “content” branch):

https://mybinder.org/v2/gh/colinleach/astro-Jupyter?binder-build=True

Complications could be added to support changing the build branch, the nbgitpull branch, the commit/ID of a particular build, etc?

It might overly complicate things further, but I could also imagine:

  • automatically injecting nbgitpuller into the Binder image and enabling it;
  • providing some sort of directive support so that if the content directory has a setup.py file the package from that content directory is installed.

Binder Buildpacks

As well as defining dynamically constructed Binder base boxes built from one repo and used to provide an environment within which to run the contents of another, there is a second sense in which we might define Binder base boxes and that is to consider the base environment on which repo2docker constructs a Binder image.

In the nbgitpuller approach, I am treating the Binder base box (sense 1) as the environment that the git pulled content runs in. In the buildpack approach, the Binder base box (sense 2) is the image that repo2docker uses to bootstrap the Binder image build process. Binder base box sense 1 = Binder base box sense 2 + Binder repo build process. Maybe it’d make more sense to swap those senses, so sense 2 builds on sense 1?!

This approach is discussed in the repo2docker issue #487 Make it possible to configure the base image with an example implementation in pangeo-stacks/pull/27. The implementation allows users to create a Dockerfile in which they specify a required base Docker image upon which the normal apt.txt, environment.yml, requirements.txt and postBuild steps can be applied.

The Dockerfile FROM statement takes the form:

FROM yuvipanda/pangeo-base-notebook-onbuild:2019.04.15-4

and then other build files (requirements.txt etc) are declared as normal.

The -onbuild component marks out the base image as one that should be built on (I think). I’m not sure how the date component applies (or whether it is required or optional). I’m not sure if the base box itself also needs some custom configuration? I think an example of the code used to build it is in the base-notebook directory of this repo: https://github.com/yuvipanda/pangeo-stacks .

Summary

Installing nbgitpuller into a Binderised repo allows us to pull the contents of a second Github repository into the first. This means we can build a complex environment from one repository once and pull regularly updated content from another repo into it without needing a rebuild step. Using the -onbuild approach, Binderhub can use repo2docker to build a Binder image from a user defined base image and then apply normal build steps to it. This means that optimised base boxes can be defined on which additional customisations can be layered. This can also make development of Binder boxes more efficient by starting rebuilds further up the image layer stack, building on top of prebuilt boxes rather than having to build images from scratch.

Zero to Notebook With notebook.js + ThebeLab, Inspired By nbgrader

Whilst at the nbgrader workshop in Edinburgh a couple of weeks ago, a couple of things jumped out at me. Firstly, Jess Hamrick’s design requirement that students should be able to complete nbgrader assignments without needing to install the nbgrader package or any of its extensions. This is completely reasonable — we want to make it as easy as possible for students to complete an assignment, which means minimising the chances of anything going wrong. Secondly, a comment that the ability to lock cells in an assignment notebook is available as an undocumented feature in nbgrader.

The ability to lock cells is an interesting one when it comes to using notebooks for assessment (it also has interesting pedagogical implications when delivering teaching materials in a notebook form). One thing that might make sense in some situations would be to prevent students, by default, from editing any cells not associated with a submission. (That is, the only editable cells should be cells that are “markable” code or free text cells.) This would provide more of a form driven “exam script” style presentation. Furthermore, if we are autograding a notebook, we want to make sure that students don’t mess up the autograding; neither do we want students to put their answers in cells we can’t detect as requiring grading.

I was aware of a freeze Jupyter notebook extension that allows cells to take on various states. Code cells, for example, can be read-only (executable, but the code cannot be changed) or frozen (code can be neither altered nor executed), and markdown cells can be read-only (viewable as markdown but non-editable) or frozen (neither viewable as markdown nor editable). But could this also be achieved without an extension, as per the design requirement?

Interestingly, it seems that recent notebook releases do support various cell presentation controls that can be invoked using particular metadata elements.

In particular, the deletable and editable metadata elements seem to be recognised by the Jupyter notebook server (deletable example, editable example); they take effect when set to false.

From the nbformat docs, the deletable metadata element will prevent the deletion of the cell if set to False, but the editable element doesn’t appear to be documented?
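
As a sketch of how this might be applied (using nbformat only, no extensions; the filename and the test for which cells count as answer cells are just for illustration, not nbgrader’s actual behaviour), the metadata could be set programmatically when a release notebook is generated:

    import nbformat

    nb = nbformat.read("assignment.ipynb", as_version=4)

    for cell in nb.cells:
        # No cell should be deletable by the student
        cell.metadata["deletable"] = False
        # Only leave cells flagged as answer/solution cells editable
        is_answer = cell.metadata.get("nbgrader", {}).get("solution", False)
        cell.metadata["editable"] = bool(is_answer)

    nbformat.write(nb, "assignment-locked.ipynb")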

To see the effect of setting these metadata elements, you can enable the Edit Metadata controls from the notebook View - Cell Toolbar menu:

edit the required field:

and then try to edit or delete the cell:

Within nbgrader itself, the nbgrader lockcells preprocessor sets cell.metadata['deletable'] = False and in some cases also sets cell.metadata['editable'] = False. [I didn’t spot in the nbgrader docs a clear description of what’s set to what and how?]

Using native notebook controls, then, we should be able to make certain cells undeletable and uneditable without the need to install any extensions. Students can still add new cells into the notebook but there will be no nbgrader metadata associated with the cell; as such, these cells would presumably be ignored by the autograder and not highlighted for manual grading. (There is a recently added nbgrader task style that seems to allow submissions over multiple cells but I haven’t had a chance to explore this yet.) It’s maybe also worth noting that if all cells in a notebook are tagged with some metadata as part of the assignment creation step, even if just a cell ID, then nbgrader would be able to detect any cells newly added to the notebook by a student.

Within an assignment notebook, certain code and markdown cells can be designated as Read Only. Originally, the semantics of this was such that the source of those cells was recorded into a database when the assignment was created; during autograding, nbgrader checked whether the source of the student’s version of the cell had changed and, if it had, the changed content would be replaced by the version in the database to undo any student changes. With native, metadata driven notebook support for non-editable cells, such cells are now read-only unless the student changes the metadata field.

Making all assessment notebook cells undeletable and making all cells other than cells where we require students to provide an answer uneditable gives us a certain amount of control over the notebook document. But can we take this further? Providing locked down environments where only “assessable” cells are editable made me start to wonder about whether it would be possible to put an nbgrader assignment into a Jupyter Book like environment, with ThebeLab enabled code cells (and free text answer / markdown cells?) that students could use to run and test their code and then somehow export their answers.

Rather than letting students edit an assignment notebook freely, we would provide them with a truly fixed environment that could execute only clearly identified (user editable) code cells against a known/predefined environment. Ideally, we’d then also provide a means to save the code the student had created in the code cells.

Note: in other automated code assessment tools, the ability to save code may not be strictly necessary. For example, the Moodle CodeRunner question type provides automated assessment “inline”: students write code in a web form, execute it within a remote computational environment, and then receive an automated grading against predefined tests. nbgrader doesn’t operate in this way, but it might be interesting if it could…

As a first attempt at providing a “fixed guidance, editable answer” view, I had a play with nbpreview, a single page standalone web app that lets you upload a notebook and then preview it using notebook.js, a Javascript notebook previewer. In particular, I made a minor tweak to notebook.js  to simplify the way in which code cells were rendered, and a tweak to nbpreview that provided support for executing the code against a MyBinder container launched using ThebeLab. (See also this related issue thread.)

The resulting demo, which you can find here, allows you to upload a notebook .ipynb file and preview it in a normal way.

Clicking the Activate button will make the code cells editable and runnable, via ThebeLab, against a specified MyBinder launched environment.

The Download button is a proof of concept that will export the contents of all the code cells in a raw form. My intention for this, in the nbgrader context, is to be able to download a JSON object file that associates each answer with its nbgrader cell ID. The notebook.js package really needs some further revision so that the nbgrader cell ID metadata is passed as an ID attribute of each code block into the previewed web page; further modification to identify “assessable” markdown cell content is also required, along with a tweak to the ThebeLab.js package so that assessable and identifiable free text / markdown cells are converted into editable text area form elements when the Activate button is clicked. User modified free text cells should also be exportable / downloadable.

As to how downloaded cell content might be returned to nbgrader, I can think of several possibilities.

One way would be to create a notebook template document that can be repopulated with content from the answer download.

A similar effect could also be achieved by simply rewriting the contents of each assessable cell in the assessment notebook with the contents of the associated (similarly ID’d) element in the downloaded JSON file. Submission could then proceed using the current nbgrader submission approach.
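
A rough sketch of that “rewrite the assessable cells” idea follows (the filenames and the structure of the downloaded JSON file are assumptions, not the actual export format): given a template assignment notebook and a JSON object mapping nbgrader cell IDs to student-edited source, write the answers back into the matching cells.

    import json
    import nbformat

    # Hypothetical download format: {"cell-id-1": "student code...", ...}
    answers = json.load(open("answers.json"))

    nb = nbformat.read("assignment-template.ipynb", as_version=4)

    for cell in nb.cells:
        grade_id = cell.metadata.get("nbgrader", {}).get("grade_id")
        if grade_id in answers:
            cell.source = answers[grade_id]

    nbformat.write(nb, "assignment-submission.ipynb")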

Another approach might be to modify the autograder so that answer cells from student submissions (either pulled from the downloaded JSON answers file, or extracted from notebook answer cells) are parsed into an “answers” table in the database;  autograding could then be run against templated code cells and content pulled from the database’s “answers” table.
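
As a sketch of the database-mediated route (the schema is invented for illustration and is not nbgrader’s actual gradebook schema), the downloaded answers could be pushed into an “answers” table keyed by student and nbgrader cell ID:

    import json
    import sqlite3

    conn = sqlite3.connect("gradebook-answers.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS answers
                    (student_id TEXT, grade_id TEXT, source TEXT,
                     PRIMARY KEY (student_id, grade_id))""")

    # Hypothetical download format: {"cell-id-1": "student code...", ...}
    answers = json.load(open("answers.json"))
    for grade_id, source in answers.items():
        conn.execute("INSERT OR REPLACE INTO answers VALUES (?, ?, ?)",
                     ("student-001", grade_id, source))
    conn.commit()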

An advantage of this approach is that an “assessment form” UI such as the one above might then allow answers to be uploaded directly into the database rather than downloaded as an answer object file.

The database mediated approach might then also support a CodeRunner style mode of operation in which assessments containing only automatically graded elements can be autograded directly in response to a student hitting a “Grade me now” button on the assessment page web app.

Fragment: On Online Courses….

Rehashing something I posted to an internal forum because I haven’t posted here for what feels like aaagggeeessss….

I think our mode of delivery — narrative based courses presented primarily as written texts, interspersed with other forms of media, as well as dialog in the form self-test/supported open learning/tutor at your side SAQs and exercises — is an engaging and powerful one.

I personally think that a medium that supports embedded rich media and interactive activities provides many opportunities for us as educators to engage learners with more than just a static written text (although such activities may or may not actually make a positive impact on learning; they may even affect it negatively, either directly, because the activities are not supportive of the learning, or indirectly, because the material around the interactive is geared towards the activity, creating an opportunity cost against using that material for other purposes).

Michel mentions platforms like Codio and (the new to me) Stepik, which in many ways are just an evolution of platforms that allow you to easily create and publish quite traditional e-learning (remember that?!) quizzes. It’s not too hard to roll your own course, either: https://course.spacy.io/ was a single person’s DIY effort, but it’s also produced a framework from which you can create your own course.

Something I’m noticing more and more is pages that embed read/write interactions as well as presenting the document as a whole via a personal read/write web style interface. You could argue these are just an iteration on a personal wiki, but I think calling them cell based, notebook style interfaces is more apposite. (OpenCreate was a cell based authoring environment, at least in the iteration I saw.)

The spacy course provides one example of inlining free text editable areas in a course. (The course also mixes linear text with slideshow expositions, which I think works nicely. It would be even more powerful if there were an area beneath the slide show where you could enter, and save, your own notes / commentary.)

Applications like Observable provide you with in-browser documents editable at the cell level (click on the vertical ellipsis in the sidebar of a cell to make it editable). These interfaces also support code-based activities run fully within the browser. (The spacy course executes code against a remote code environment; Observable allows js code editing that is executed within the browser; Iodide is a similar, but more generic, framework from Mozilla; here’s a demo document; you can click the Explore button in the top right corner to edit, and preview, the source code. Another part of the same project, Pyodide, brings a fully blown scientific Python stack into the browser using WebAssembly. Epiphany.pub is a very new (also solo) project demoing inline editing of docs that make use of Pyodide; click on a cell tool icon (in the toolbar at the right side of each cell) to edit the cell.)

Something that I think these new read/write interfaces offer is the opportunity for students to take ownership of these documents and make marks on them, much as they might write comments or underlines on a print study guide. (Yes, I know about OU Annotate, but it’s not the nicest of experiences…)

Annotated documents can then be saved to personal file spaces. In the case of epiphany.pub, it wasn’t working yet when I tried yesterday (which is not to say that it might not have already been fixed today), but the model seems to be that you log in with something like Github and it saves the file there… This means that the site publisher: a) doesn’t really have to worry about managing user accounts, perhaps other than in a very simple, account secret token keeping, way; b) doesn’t “own” your document, it just hosts the editor; c) doesn’t have to pay for any storage for files edited using the editor. This pattern seems to be becoming more and more prevalent: you log in to a service with credentials and permissions that allow it to store and retrieve stuff using the account you logged in with. It’s becoming popular because, as a service provider, I can create a web app with minimal resource – not much more than hosting for a single page web app.

Just by the by, making annotations on top of documents is also becoming easier. eg the RISE slideshow in Jupyter notebooks, which lets you specify certain cells in a notebook to use as part of a presentation, also supports the ability to draw over a slide https://www.youtube.com/watch?v=Gx2TnIdt0hw&t=28m30s .  Things like Jupyter graffiti also go a bit further in terms of allowing you to create not-really screencasts that are actually replays over a live notebook with support for free annotation over the top (the replay element means the user can step into the “screencast” and take over control of the notebook themselves and go off in a different direction from the screenplay). At the moment Jupyter Graffiti is intended for instructors to make tutorials over the top of notebooks, but I wonder how elements of it might be tweaked or co-opted as an annotation tool for students…

One thing to note about the above is that the tech is starting to get there, but the understanding of how to use it, let alone a culture of using it, is still some way away.

Things like Noteable are fine, but they provide a base experience. An argument we keep having in TM351 is the extent to which we provide students with a vanilla notebook experience, or a rich environment built on Jupyter with loads of extensions and loads of exploitation of those extensions in the way we write the notebooks. Another way might be to find extensions that students can use to enrich their experience of our vanilla notebooks, but that requires skilling up the students so that they can make most effective use of the medium on their own terms.