Notes on the JupyterLab Notebook HTML DOM Model, Part 7: Extension User Settings

In the previous posts in this series, I have reviewed the JupyterLab notebook DOM model and described an extension that can be used to set class tags on cell DOM elements based on cell tag metadata. In this post, we will review how JupyterLab extension user settings can be used to enable and disable cell toolbar buttons and identify which tags should be considered for “tag to class” processing.

JupyterLab extension user settings provide a persistent way of storing per-user configuration. For an installed extension, settings are accessed from the JupyterLab Settings > Advanced Settings Editor menu:

The settings for a particular extension can be modified from the settings editor for that extension. Settings include boolean values set via checkboxes, as well as strings and numeric values. (I’m not sure if you can also raise dialogues for things like colour pickers, as you used to be able to do in the classic notebook extensions configurator?)

The default settings are defined in the schema/plugin.json JSON file (note this can also be edited manually via the JSON Settings Editor).

Each user setting is defined in a structured way via the properties element in the schema/plugin.json file. For example, here is a string definition and a boolean definition:

"properties": {
    "tagprefix": {
      "type": "string",
      "title": "Tag prefix",
      "description": "Prefix to identify tag2class tags",
      "default": "iou-"
    "activity_button": {
      "type": "boolean",
      "title": "Activity button",
      "description": "Whether to display the \"Activity [A]\" button",
      "default": false

Numerics can be defined as a “number” type, as in this example setting taken from the jupyterlab-cell-flash extension:

"duration": {
      "type": "number",
      "title": "Duration (seconds)",
      "description": "The duration of the flash effect animation (in seconds)",
      "default": 0.5
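As a rough sketch (the variable names here are my own, not the extension’s), a value read back from the registry comes through the `.composite` property typed as a generic JSON value, so it is worth normalising it to a number before use:

```typescript
// Hypothetical stand-in for `settings.get('duration').composite`,
// which is typed as a generic JSON value rather than a number.
const composite: unknown = 0.5;

// Normalise to a number before using it, e.g. to build a CSS duration string.
const duration = Number(composite);
const cssDuration = `${duration}s`;

console.log(cssDuration); // → "0.5s"
```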

The extension-cookiecutter-ts gives an example of how to access and read settings data as part of an extension’s activation, but is not overly helpful when trying to make sense of how to use that data within the scope of another widget (it might be obvious to someone who speaks TypeScript; I’m at the level of riffing from examples provided, with zero knowledge other than inspection of the example generated file). However, the ocordes/jupyterlab_notebookbuttons extension did provide a useful crib, and it forms the basis of the following example.

To start with, we make sure that the ISettingRegistry is available. We can then pass it to one or more registered extension widgets of our own creation that might want to make use of the settings:

import {
  JupyterFrontEnd,
  JupyterFrontEndPlugin
} from '@jupyterlab/application';

import { ISettingRegistry } from '@jupyterlab/settingregistry';

const plugin: JupyterFrontEndPlugin<void> = {
  id: 'jupyterlab-empinken:plugin',
  autoStart: true,
  optional: [ISettingRegistry],
  activate: activate
};

/**
 * Activate the extension.
 * @param app Main application object
 * @param settingRegistry Setting registry (may be null)
 */
function activate(
  app: JupyterFrontEnd,
  settingRegistry: ISettingRegistry | null): void {
  // Pass the settingRegistry as a parameter to the new widget extensions
  // app.docRegistry.addWidgetExtension('Notebook', new ClassDemoExtension(settingRegistry));
  // app.docRegistry.addWidgetExtension('Notebook', new ButtonExtension(settingRegistry));
}

export default plugin;

So how do we then need to define our widgets? To begin with, we need to make sure we accept the settings parameter; the constructor then calls a function that actually reads in the settings. The settings themselves are read in from a settings file that is addressed behind the scenes using the id of the plugin object we created in the previous step:

import { DocumentRegistry } from '@jupyterlab/docregistry';
import { INotebookModel, NotebookPanel } from '@jupyterlab/notebook';
import { ISettingRegistry } from '@jupyterlab/settingregistry';

export class ButtonExtension
  implements DocumentRegistry.IWidgetExtension<NotebookPanel, INotebookModel> {
  // Settings crib via: ocordes/jupyterlab_notebookbuttons

  settings: ISettingRegistry.ISettings;

  constructor(protected settingRegistry: ISettingRegistry) {
    // read the settings
    this.setup_settings();
  }

  setup_settings(): void {
    Promise.all([this.settingRegistry.load('jupyterlab-empinken:plugin')])
      .then(([settings]) => {
        console.log('reading settings');
        this.settings = settings;
        // update of settings is done automatically
        //settings.changed.connect(() => {
        //  this.update_settings(settings);
        //});
      })
      .catch((reason: Error) => {
        console.error('Failed to read settings', reason);
      });
  }
}

We can now access values from the settings, for example:

    // Create a tag prefix
    // Retrieve the tagprefix value from the extension settings file
    const tag_prefix = this.settings.get('tagprefix').composite.toString();

    // Suppose we have four tag types
    // whose behaviour we want to toggle
    let tag_types: string[] = ['activity', 'solution', 'learner', 'tutor'];

    // Create an array to hold button definitions
    var tagButtonSpec = new Array();

    // Iterate over the tag_types
    // Note the use of *of* rather than *in*
    for (let typ of tag_types) {
      console.log('Setup on ' + typ);
      tagButtonSpec[typ] = new Array();

      // Set array values based on settings retrieved from the settings file
      tagButtonSpec[typ]['enabled'] = this.settings.get(typ + '_button').composite;
      // We can then test on:
      //if (tagButtonSpec[typ]['enabled']) {}
    }

The setting of the tag_prefix value provides us with a way to limit which families of tags we might want to use as the basis for cell DOM classing.
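By way of a minimal sketch (the helper function name here is my own), the prefix check might look something like this, keeping only prefixed tags and stripping the prefix to give the class stub:

```typescript
// Hypothetical helper: given the tags set on a cell and the configured
// tag prefix, keep only the prefixed tags, stripped of that prefix.
function tagsToClassStubs(tags: string[], prefix: string): string[] {
  return tags
    .filter(tag => tag.startsWith(prefix))
    .map(tag => tag.slice(prefix.length));
}

console.log(tagsToClassStubs(['iou-activity', 'hide-input', 'iou-tutor'], 'iou-'));
// → [ 'activity', 'tutor' ]
```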

If we already have the keys of an array defined, we can iterate over them in the following way:

// Get the key values from the array
let typ: keyof typeof tagButtonSpec;

// We can then iterate through them
for (typ in tagButtonSpec) {
  // The typ is actually a symbol object of some sort;
  // if we need the string value, use: typ.toString()
  tagButtonSpec[typ]['render'] = this.settings.get(typ.toString() + '_render').composite;
}

We can use the setting state to decide whether or not we want to render a toggle button on the UI, or render the coloured background for a particular tag.
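For example (a sketch using made-up values rather than real settings reads), the enabled flags might gate button creation like this:

```typescript
// Hypothetical button specs, as might be populated from the settings file
interface ButtonSpec { enabled: boolean; render: boolean; }

const tagButtonSpec: Record<string, ButtonSpec> = {
  activity: { enabled: true, render: true },
  solution: { enabled: false, render: true }
};

// Only create toolbar buttons for tag types that are enabled
const buttonsToShow = Object.keys(tagButtonSpec).filter(
  typ => tagButtonSpec[typ].enabled
);

console.log(buttonsToShow); // → [ 'activity' ]
```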

User environments can have custom settings applied via entries in an overrides.json settings file (eg <sys.prefix>/local/share/jupyter/lab/settings/overrides.json; see the docs).
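For example (assuming the plugin id used earlier in this post, and made-up override values), an overrides.json file keyed by plugin id might look something like:

```json
{
  "jupyterlab-empinken:plugin": {
    "tagprefix": "myorg-",
    "activity_button": true
  }
}
```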

In the next post in this series, we’ll look at how we might be able to use settings when setting the colours used to define the backgrounds for each empinken style tag.

Notes on the JupyterLab Notebook HTML DOM Model, Part 8: Setting CSS Variable Values from an Extension

In the previous post in this series, I looked at how we can make use of an extension’s user settings to access persistent state that we might use to modify the behaviour of an extension. In this post, I’ll look at how we can use extension settings to tune CSS properties such as the background colour we might apply to tag classed cells, based on a crib from jtpio/jupyterlab-cell-flash.

We can define a color property in our settings file in the following way (presumably we could also specify a colour name or use any other appropriate CSS colour formulation):

    "activity_color": {
      "type": "string",
      "title": "Color",
      "description": "The base color for the activity background",
      "default": "rgba(173, 216, 230, 1.0)"

The original class CSS was defined literally:

.iou-solution-node {
    background-color: lightgreen !important;
}

However, we can parameterise the values and then set new values onto the CSS parameters based on extension settings:

:root {
    --iou-activity-bg-color: lightblue;
}

.iou-activity-node {
    background-color: var(--iou-activity-bg-color) !important;
}

How, then, do we expose these values as a setting? The following example loads the settings in as part of our extension activation, identifies a desired class color value from the settings file, and then uses it to set the value of the CSS variable.

function activate(
  app: JupyterFrontEnd, settingRegistry: ISettingRegistry | null): void {

  // Example of how to read settings from cookiecutter
  if (settingRegistry) {
    settingRegistry
      .load(plugin.id)
      .then(settings => {
        console.log('jupyterlab-empinken settings loaded:', settings.composite);
        // Handle the background colours
        // The document object seems to be magically available?
        const root = document.documentElement;
        const updateSettings = (): void => {
          const color = settings.get('activity_color').composite as string;
          root.style.setProperty('--iou-activity-bg-color', color);
        };
        updateSettings();
        // We can auto update the color
        settings.changed.connect(updateSettings);
      })
      .catch(reason => {
        console.error('Failed to load settings for jupyterlab-empinken.', reason);
      });
  }
}

In the control panel, we then have a dialog of the form:

A couple of things to note about that: first, it would be neater if we could have a colour picker; secondly, the extensions panel seems overly aggressive on save, saving after every keystroke when you change a string value, which means you need to type very, very slowly. This sucks in terms of UX because it makes you think you’ve broken something.

We can now set colours for the different backgrounds via the extensions settings panel. In addition, the background colours should update immediately in the notebook if we change the setting.

In the next post in this series, I will review how the various components can all work together to give us a JupyterLab flavoured version of the classic notebook empinken extension.

Notes on the JupyterLab Notebook HTML DOM Model, Part 8.5: A Reproducible Development Process

Across the course of the earlier posts in this series, I’ve been trying to place a series of stepping stones that demonstrate in a reasonably complete way how to create a simple JupyterLab notebook extension. At this point, I was hoping to share a repo that would demo the extension in a MyBinder environment, and act as an endpoint for installing a pre-built extension via a pip install (and then it’d be a simple step to pop it onto PyPi; at this point, I still don’t know how to build a pre-built extension). The next step would then be to also figure out a way of bundling the extension into a JupyterLite distribution. But, there’s always a but…

Here’s what my local file browser tells me my working directory looks like:

The directory was created by the Github App synch-ing a new repo I’d created from Github. This approach has the benefit that I don’t have to worry about any of the Github settings – I create a simple repo on Github then let the app figure out how to set up a directory I can work from on my local machine.

Here’s what Github tells me the structure looks like:

That is, the top level is a jupyterlab_empinken directory. But that is not what I was expecting to see, because the ./Github/jupyterlab_empinken directory is supposed to map onto the top level of the repo. And looking at my desktop file browser, that’s what appears to be the case. I also note that lots of files are not added to the repo, and are ignored in the Github app file viewer: seems like there’s a new .gitignore file in there somewhere too…

The nested directory structure visible in the Github repo is easily explained: when I ran the cookiecutter to seed the project, I did so from inside the ./Github/jupyterlab_empinken directory. I don’t recall if the cookiecutter had the option to install the files into the ./ directory, but it would be useful if it did.

As for how I managed to map the files to give the view in the local file browser (presumably by setting an alias somewhere), I’m not sure what I did!

But at some point I did the following, which may or may not make sense…

# ================================================
#                  DON'T TRY THIS...
# ================================================

# I'm just trying to keep a note of what I originally did...

# I'm guessing this is where files were popped into
# the top level directory?
pip3 install --upgrade ./jupyterlab_empinken/

# ...

# At some point, I then went into the subdir
# created by the cookiecutter
cd jupyterlab_empinken/

# At this point, I was floundering because I 
# couldn't find a way to update the extension
# in my JupyterLab environment...

# So I tried everything I could find...
jupyter labextension develop . --overwrite
jlpm run build
jlpm run rebuild
python3 -m pip install -e .
jupyter labextension develop . --overwrite

# Something in that chain appeared to do the trick,
# because from this point, and by whatever process,
# the following steps then reliably rebuilt 
# the extension and ran a version of JupyterLab 
# with the updated extension installed...
jlpm run build
jupyter lab

I also had an issue of being unable to uninstall the extension. This may or may not relate to this issue, which hints at the need to manually remove files but doesn’t give a path as to where to remove the files from. If the recommended dev route installs uninstallable files, it would be useful to provide a set of explicit manual removal instructions. In the end, I found a set of extension related files left over in /usr/local/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/share/jupyter/labextensions (the jupyter --paths command gives a list of places to search amongst the data: paths). Removing these (rm -r /usr/local/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/share/jupyter/labextensions/jupyterlab-empinken) and reloading JupyterLab seemed to do the trick of removing the extension from the running application.

In passing, I note this recent JupyterLab issue (extension-cookiecutter-ts out of sync with doc). I use plugin, and things seem to break if I just change it to extension and do nothing else? Or maybe I missed something… From the issue, I also don’t know if this means there’ll be a breaking change at some point?

So, let’s start again from the beginning, and see if the extension tutorial docs give us a minimal, exact and reproducible set of commands that will:

  • put the files in a new directory in the locations I want them;
  • let me rebuild and preview the extension in a JupyterLab environment each time I edit the src/index.ts file, or the style/base.css file, or the schema/plugin.json file.

I’ll then try to remember to retrofit instructions back into earlier posts in this series…

Then, as I’d intended to do for this post before it became an interstitial post, I’ll have a go at creating a prebuilt extension that can be easily distributed in the next post.

But before we start, let’s just review the architecture, at least as I understand it. The core JupyterLab application runs in a browser and can be extended using JupyterLab extensions. An extension is a Javascript package that exports one or more plugins that extend JupyterLab’s behaviour. The cookiecutter Python package provides a way of distributing a build environment for a JupyterLab extension. In part, installing the Python package installs front end component files into JupyterLab (somehow!). The cookiecutter package also bundles various project files that support the compilation of JupyterLab extension Javascript files from source TypeScript files. Building the JupyterLab extension creates the files that JupyterLab needs from the source files. When we actually distribute a JupyterLab extension, we can use the Python package machinery to bundle and install the JupyterLab extension Javascript, CSS and other sundry files into JupyterLab.

So let’s go back to the beginning, and to the cookiecutter package. The first question I need to answer is a really simple one: how do I get the cookiecutter to unpack its wares into the top level of the directory into which I want the files to go, rather than in a subdirectory?

Running the cookiecutter, it seems by default as if it won’t overwrite a pre-existing directory. However, the -f or --overwrite-if-exists flag will allow you to write over the pre-existing directory (the .git files are left untouched). If you don’t want files that already exist in the directory (such as a LICENSE file) to be overwritten, then also add the -s or --skip-if-file-exists flag. I note that the default license generated by the cookiecutter is the BSD 3-Clause License.

So if I am in my Github directory (assuming I used a repo (directory) name with underscores rather than - separators: the cookiecutter converts all dashes to underscores…), I can run the cookiecutter and specify the repo directory name as the extension name so that the files are unpacked into that directory.

cookiecutter --overwrite-if-exists --skip-if-file-exists https://github.com/jupyterlab/extension-cookiecutter-ts

Or more concisely, cookiecutter -fs https://github.com/jupyterlab/extension-cookiecutter-ts

When prompted for what sort of extension we want, select the default extension type, [1] - frontend; also set settings as y which will presumably seed a settings file for us.

Note that the cookiecutter adds a rather comprehensive .gitignore file to the directory. If you are looking to build and distribute an extension via the repo (see the next post in this series, hopefully!), then you will probably want to remove the dist/ entry from the automatically created .gitignore file so you can commit the distribution wheel to the repository.

The next step is to see if we can set the directory up so that we can easily update JupyterLab. The extension tutorial docs suggest the following approach.

First, we need to install the extension using pip. Installing the package “copies the frontend part of the extension into JupyterLab”. The docs suggest the flags -ve; the -v flag is presumably just pip’s verbose flag, and the -e flag (--editable) installs the project in editable mode (docs). This is like a faux install, in that rather than installing the package, a special .egg-link file is created that allows Python to treat the development repo as if it were the installed Python package. As the package files are updated in the directory, the changes are available as if the package had been updated via pip.

pip3 install -e .

Installing the package creates and downloads a large number of files to the package directory that are used to support the building of the JupyterLab extension package.

When I ran this command, it took a ridiculously long time (several minutes, or long enough to think it must have got stuck or failed somehow) to install.

The development mode package can supposedly be uninstalled in the normal way:

pip3 uninstall PACKAGENAME

However, files may still be languishing… See the note above about tracking down copies of the extension files that are not removed by the uninstaller.

The docs suggest that “[w]e can run this pip install command again every time we make a change to copy the change into JupyterLab”, which suggests that simply making “live” editable changes to the Python package (via the --editable mode) is not enough for the changes to be reflected in the JupyterLab environment. What we additionally need to do is set things up so that JupyterLab can make use of any updated JupyterLab package files, akin to the way we give Python access to the “live” --editable package updates.

It’s not totally clear to me what files we need to “make live” in the Python package, or how and when the Python environment interacts with the cookiecutter generated Python package files.

The following command ensures that as we rebuild the JupyterLab extension package files, they become available to JupyterLab:

jupyter labextension develop --overwrite .

We now have a “live” development environment. As we update the extension package files, we run the following command to (re)build the extension:

jlpm run build

Refreshing JupyterLab in the browser should now display the updated extension.

The JupyterLab Package Manager (jlpm) is a JupyterLab-provided, locked version of the yarn package manager intended to simplify JupyterLab development. Handy commands include jlpm install to ensure that required node packages are installed and jlpm run build to build extension files.

In the next post (and hopefully the final post!) in this series, I’ll try to pull together all the pieces to show how to build and distribute a pre-built JupyterLab extension.

Notes on the JupyterLab Notebook HTML DOM Model, Part 9: Building and Distributing a Pre-Built Extension

So finally, finally… we’re at the point we can try to build an installable pre-built extension.

To test the build, it first makes sense to uninstall the version we’ve been developing to date. Running pip uninstall doesn’t necessarily do the job, as described in the previous post, so you may have to scrabble around searching for where the extension was installed so you can delete it yourself.

Building the extension is then relatively straightforward. In the project directory, make sure all the build tools are available:

pip3 install build

And then build the distribution, creating a wheel in dist/ directory:

python3 -m build

Usefully, it’s a platform agnostic wheel, which means it should be installable in JupyterLite.
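The platform agnostic nature of the wheel can be read off from its filename tags; here’s a quick sketch (the parsing is naive, but works for this filename) using the wheel name my build generated:

```python
# Naive split of a wheel filename into its components; the final three
# elements are the Python tag, ABI tag and platform tag.
def wheel_tags(filename):
    stem = filename[: -len(".whl")]
    name, version, python_tag, abi_tag, platform_tag = stem.split("-")
    return python_tag, abi_tag, platform_tag

# py3-none-any means: any Python 3, no compiled ABI, any platform
print(wheel_tags("jupyterlab_empinken_extension-0.1.1-py3-none-any.whl"))
# → ('py3', 'none', 'any')
```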

I noticed that adding stricter checks to tsconfig.json didn’t seem to get picked up after a successful build with my “let everything through” casual settings, but the following command did rebuild the extension with the stricter error checking enabled:

jlpm clean && jlpm build:lib && jlpm build:labextension

Not surprisingly, my hacky Python flavoured Javascript style of TypeScript raised all sorts of errors and suggests I really need to settle down to read a good (and recent) TypeScript book for a couple of days, but instead of that, I just switched all the error checking back off again:

  "compilerOptions": {
    "allowSyntheticDefaultImports": true,
    "composite": true,
    "declaration": true,
    "esModuleInterop": true,
    "incremental": true,
    "jsx": "react",
    "module": "esnext",
    "moduleResolution": "node",
    "noEmitOnError": true,
    "noImplicitAny": false,
    "noUnusedLocals": false,
    "preserveWatchOutput": true,
    "resolveJsonModule": true,
    "outDir": "lib",
    "rootDir": "src",
    "strict": true,
    "strictNullChecks": false,
    "target": "es2017",
    "types": []
  "include": ["src/*"]

I’m not sure if a (re)start of the JupyterLab server is required to see the updated extension in action, or whether we can get away with just refreshing the JupyterLab browser window.

To distribute the newly built wheel via the project repository, we need to commit it: remove the dist/ path from the .gitignore file and the wheel should now be visible.

My example repo (and you have been warned in advance about the state of the “TypeScript”) can be found here: innovationOUtside/jupyterlab_empinken_extension

To install the wheel:

pip3 install --upgrade

The extension by default enables four buttons that can be used to toggle cell tags. The tags are parsed and used to class the notebook cells, which are then coloured accordingly (I really need to do something about the default colours!).

The tag state is saved in the notebook document so should persist. The actual convention used to define the tags is user customisable via extension settings, as are the background colours.
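To make the persistence point concrete: the tags live in the standard cell metadata of the saved .ipynb document, along the lines of:

```json
{
  "cell_type": "markdown",
  "metadata": {
    "tags": ["iou-activity"]
  },
  "source": []
}
```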

I reckon it’s taken me a couple of years and four and a half days to get this far. The code is not TypeScript, but a hacky version of Javascript, mostly, from someone who only ever tends to write casual Python, with odd bits of copy and pasted TypeScript from other extensions. I note that many TypeScript programmers seem to be rather slack in terms of TypeScript formalism too, so it’s not just me… I also note that trying to search for good examples on TypeScript sucks. I’m not sure if this is because websearch rankings broke since TypeScript became a thing, or because there aren’t many good TypeScript resources out there.

Even with occasional moments of success, I found the whole process really dispiriting. I am not convinced that I came up with an effective strategy for making sense of or navigating the docs, or the examples. As ever, I get the sense that the most useful resources are other extensions written by people who aren’t developers, because the code tends to be simpler, even though it can also be a bit ropey in terms of code quality. But the overheads of getting started mean that you need to be quite resilient to get as far as even a simple extension that works.

The current extension is limited to just reading state, processing tags and classing DOM elements. I’ll try one more (a port of innovationOUtside/nb_cell_execution_status) which will attempt to react to cell execution status signals: I think I have enough cribs to make a start, although I haven’t (yet?) found any really good resources on the message responses that might be expected or how to parse them. When that’s done, and if I can then get things working in a JupyterLab environment, I may try to get innovationOUtside/nbev3devsim working in a meaningful way in JupyterLab, and then I’ll be able to quit the whole JupyterLab space and hopefully make a start playing with thebe and JupyterLite kernels in Jupyter Book, which I think is a far more interesting space to work in.

Fragment – Custom display_formatter Rendering of Python Types & Rendering Custom MimeTypes in JupyterLab and RetroLab

From a passing tweet, I notice a post on Fine tuning your Jupyter notebook’s displays which includes a reminder of how to roll your own custom __repr__ methods:

and this rather neat treat for creating customised rich displays around built-in types using the IPython display_formatter, which lets you define a custom formatter for a typed object:
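As a reminder of the sort of thing involved (my own toy example, not the one from the linked post), a custom __repr__ is just a method on the class, while the display_formatter route registers a formatter against an existing type:

```python
# A toy class with a custom text representation
class Coordinate:
    def __init__(self, lat, lon):
        self.lat, self.lon = lat, lon

    def __repr__(self):
        return f"Coordinate({self.lat}N, {self.lon}E)"

print(Coordinate(52.0, 0.7))
# → Coordinate(52.0N, 0.7E)

# In an IPython session, a rich formatter can similarly be registered
# against a built-in type (sketch; only runs inside IPython):
# html_formatter = get_ipython().display_formatter.formatters["text/html"]
# html_formatter.for_type(int, lambda n: f"<b>{n:,}</b>")
```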

Also in passing, I note from the JupyterLab vega5-extension that you can create a simple extension to define a custom mime-type renderer, allowing you to do things like:

from IPython.display import display

display({
    "application/vnd.vegalite.v3+json": {
        "$schema": "",
        "description": "A simple bar chart with embedded data.",
        "data": {
            "values": [
                {"a": "A", "b": 28}, {"a": "B", "b": 55}, {"a": "C", "b": 43},
                {"a": "D", "b": 91}, {"a": "E", "b": 81}, {"a": "F", "b": 53},
                {"a": "G", "b": 19}, {"a": "H", "b": 87}, {"a": "I", "b": 52}
            ]
        },
        "mark": "bar",
        "encoding": {
            "x": {"field": "a", "type": "ordinal"},
            "y": {"field": "b", "type": "quantitative"}
        }
    }
}, raw=True)

This suggests the possibility of custom renderers for different JSON objects / Python dicts etc implemented via an extension, rather than eg a (simpler?) IPython _repr_*_ method.

A good example of a custom mime-type renderer is the deshaw/jupyterlab-skip-traceback extension. This extension mimics the behaviour of the classic notebook skip-traceback nbextension which provided a simplified, collapsible view onto Python traceback error messages.

The JupyterLab extension works by defining a custom handler for the application/vnd.jupyter.error mime-type, parsing the result and rendering the improved output.
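The shape of that payload is a JSON object with ename, evalue and traceback fields, so the renderer’s parsing step is conceptually something like this sketch (the summarise helper is mine, not the extension’s):

```typescript
// Shape of the application/vnd.jupyter.error payload
interface JupyterErrorData {
  ename: string;
  evalue: string;
  traceback: string[];
}

// Hypothetical summariser: a one-line view, with the traceback collapsed away
function summarise(err: JupyterErrorData): string {
  return `${err.ename}: ${err.evalue} (${err.traceback.length} traceback lines hidden)`;
}

console.log(
  summarise({
    ename: 'ZeroDivisionError',
    evalue: 'division by zero',
    traceback: ['---', '...']
  })
);
// → ZeroDivisionError: division by zero (2 traceback lines hidden)
```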

Pondering — Admonition Tags for JupyterLab Markdown Cells?

In Fragments – Previewing Richly Formatted Jupyter Book Style Content Authored Using MyST-md I noted the JupyterLab-MyST extension which can render MyST admonition blocks included in a Markdown cell.

At the time, I wondered how easy it would be to use a cell tag on a markdown block instead of cluttering the markdown with the MyST admonition block code fences.

However, in rediscovering the agoose77/jupyterlab-imarkdown extension, which replicates the classic notebook Python Markdown extension that lets you embed and render Python variables within a markdown cell, I wonder how easy it would be to crib that extension to create a custom markdown cell renderer that could either:

  • parse the cell tags;
  • wrap the markdown cell content in an admonition block code fence;
  • continue with the JupyterLab-MyST rendering;


  • parse the cell tags;
  • wrap the markdown cell content in some admonition block HTML (resembling the output from the MyST extension, for example);
  • parse the HTML wrapped markdown content in the normal way;

or:

  • parse the cell tags;
  • wrap the rendered cell output in appropriate admonition block HTML;
  • update the rendered cell output with the updated output.

In terms of getting a feel for working with the notebooks, it might be instructive to see how each of these approaches could actually be implemented.

Poking around the JupyterLab-MyST extension, it seems as if it makes use of the @agoose77/jupyterlab-markup extension (import { simpleMarkdownItPlugin } from '@agoose77/jupyterlab-markup';) by extending the markdown-it pipeline it uses? One question that remains is how we get access to the cell metadata so that we can use it as the basis for wrapping the original cell markdown content appropriately. However, it seems that this isn’t possible in the jupyterlab-markup extension, which only has access to the original markdown cell content. Instead, we’d need a custom cell renderer such as the one provided by the jupyterlab-imarkdown extension (for a related query on how to create custom markdown cell renderers, see Creating a Custom JupyterLab Notebook Markdown Cell Renderer).

For a discussion of the jupyterlab-markup plugin mechanism, see How do you add plugins?.

PS Ooh… looking in, I notice that there are some other neat integrations… like mermaid diagram rendering in markdown cells:

The extension also looks to use some WASM powered support for rendering svgbob ascii diagrams to SVG:

Demoing JupyterLab Extensions from an Extension Repo Using Github Pages and JupyterLite

Following on from Notes on the JupyterLab Notebook HTML DOM Model, Part 9: Building and Distributing a Pre-Built Extension, in which the build process generated a platform independent Python package wheel, jupyterlab_empinken_extension-0.1.1-py3-none-any.whl, I pushed it to PyPi using a simple pypi-publish Github Action. (You need to set a PYPI_PASSWORD API key, minted in your PyPi account from Account Settings (scroll down…) > API tokens, in the repo Secrets > Actions > Repository secrets settings in order to push packages to PyPi.

I generally set a PyPi token with permissions on all repos, push the package for the first time using that token, then mint a repo specific token and update the repo settings; I keep thinking there must be a more sensible way?!)

name: Publish Python distributions to PyPI

on:
  release:
    types: [published]
  workflow_dispatch:

jobs:
  build-n-publish:
    name: Build and publish Python distribution to PyPI
    runs-on: ubuntu-18.04
    steps:
    - uses: actions/checkout@master
    - name: Set up Python 3.8
      uses: actions/setup-python@v1
      with:
        python-version: 3.8
    - name: Build a binary wheel and a source tarball
      run: |
        python3 -m pip install --user --upgrade setuptools wheel
        python3 -m pip install build
        python3 -m build
    - name: Publish distribution to PyPI
      # if: startsWith(github.event.ref, 'refs/tags')
      uses: pypa/gh-action-pypi-publish@master
      with:
        password: ${{ secrets.pypi_password }}

I also use a jupyterlite-pages action to push a JupyterLite distribution demoing the extension to Github Pages:

name: JupyterLite Build and Deploy

on:
  release:
    types: [published]
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup Python
        uses: actions/setup-python@v2
        with:
          python-version: 3.8
      - name: Install the dependencies
        run: |
          python -m pip install -r requirements-jupyterlite.txt
      - name: Build the JupyterLite site
        run: |
          cp content
          jupyter lite build --contents content
      - name: Upload (dist)
        uses: actions/upload-artifact@v2
        with:
          name: jupyterlite-demo-dist-${{ github.run_number }}
          path: ./_output

  deploy:
    if: github.ref == 'refs/heads/main'
    needs: [build]
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2.3.1
      - uses: actions/download-artifact@v2
        with:
          name: jupyterlite-demo-dist-${{ github.run_number }}
          path: ./dist
      - name: Deploy
        uses: JamesIves/github-pages-deploy-action@4.1.3
        with:
          branch: gh-pages
          folder: dist

To use Github Pages (a website published from a specified repo branch and directory), you need to ensure that they are enabled from your repo Settings:

My actions are triggered by a release or (more usually) manually:

I really should make sure that the JupyterLite build follows on from a package release or an update to the content/ directory.

You might notice the JupyterLite distribution action references a requirements file (requirements-jupyterlite.txt). This file needs to include the JupyterLab and JupyterLite packages, as well as any other (prebuilt) packages you want to install:

# Base install
jupyterlab
jupyterlite

# Extension package we want to demo
jupyterlab_empinken_extension
If the package you want to demo in JupyterLite is not available from PyPI, I wonder, can you specify the wheel URL in the requirements file? If not, add a line to the JupyterLite distribution action along the lines of pip3 install …
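For what it’s worth, pip itself does accept a direct URL in place of a package name in a requirements file (whether the JupyterLite build step handles this I haven’t checked); the wheel URL below is a made-up placeholder:

```
# Extension wheel referenced by direct URL (hypothetical placeholder URL)
https://example.com/wheels/jupyterlab_myextension-0.1.0-py3-none-any.whl
```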

Any files you want included as part of the distribution, such as a demo notebook, should be placed in the repo content/ directory.

When the site is published (and you might need to check in the repo settings that you have enabled Pages appropriately), you should be able to test your extension running in JupyterLab in your browser. For example, my innovationOUtside/jupyterlab_empinken_extension demo is at…

Imagining “JupyterLab Magic Tags”: Tag Based Code Cell Execution Modifiers in JupyterLab

In Pondering — Admonition Tags for JupyterLab Markdown Cells? I finally got round to describing a feature I’d like to see in JupyterLab, specifically the ability to tweak the rendering of markdown cells based on cell tags. I’ve already managed to tweak the style of cells based on cell tags (a missing feature which was one of the blockers I have had on adopting JupyterLab for several years), so this represents the next step.

In this post, I thought I’d make a note of another tag based feature I’ve wanted for a couple of years: the ability to modify cell execution based on cell tag metadata.

This might sound a bit odd, but it’s really just another take on cell magics, at least as far as the user might be concerned.

I make heavy use of IPython block cell magics to modify the behaviour of code cells, often in the context of using the magic to invoke a Python function that executes over the content contained in the cell. This essentially allows you to define a sort of custom code cell type. A good example is provided by the catherinedevlin/ipython-sql extension, which installs a sql magic that lets you connect to a database and then run SQL queries directly from a magicked code cell:

This is fine insofar as it goes, but I’d quite like to be able to abstract that a bit further into the UI, and use a cell tag, rather than a magic, to modify the cell behaviour.

Tagging a cell with ipython-magic-sql would have the same effect as adding %%sql to the cell. (Yes, I know there may be an issue with handling magic parameters; that’s left as an open question for now…) A tag style could automatically render the cell in a way that highlights that it has been magicked, making it clear in the UI that modified behaviour on the cell is to be expected.

In terms of handling the modified execution, one approach I can imagine would be to define a custom code cell handler, similar to the custom markdown cell handler defined in the agoose77/jupyterlab-imarkdown extension (see also this related (as yet unanswered) query about Creating a Custom JupyterLab Notebook Markdown Cell Renderer; maybe I should try Stack Overflow again…).

A jupyterlab-magic-tag-template extension might provide one way of helping end-users create their own magic tags. For a magic without parameters, we might imagine a simple definition file:

block_magic: %%sql
package: ipython-sql

Note that in the case of the ipython-sql package, we would elsewhere have had to define the sql connection. Alternatively, this could be pre-loaded and enabled by the extension, possibly even in a separate kernel. If the extension supported user configuration settings, that might also provide a route for setting global magic parameters and having them added to the magic command whenever it is called.

Tagging a cell with SQL-MAGIC-TAG would then:

  • cause the cell to have a slightly different style applied; this might be as simple as a coloured border, which could have a default value that is overridden via a config setting;
  • when a cell is executed, the contents of the cell are prefixed by the magic invocation;
  • cell outputs are rendered as per normal.
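The “prefix the cell contents” step might be sketched as a simple source transformation; the tag name, the tag-to-magic mapping and the applyMagicTags function below are all hypothetical, invented for illustration:

```typescript
// Hypothetical mapping from cell tags to block magics;
// both names here are made up for illustration.
const MAGIC_TAGS: { [tag: string]: string } = {
  'SQL-MAGIC-TAG': '%%sql'
};

// If the cell carries a recognised magic tag, prefix the cell
// source with the corresponding block magic invocation;
// otherwise return the source unchanged.
function applyMagicTags(source: string, tags: string[]): string {
  for (const tag of tags) {
    if (tag in MAGIC_TAGS) {
      return `${MAGIC_TAGS[tag]}\n${source}`;
    }
  }
  return source;
}
```

A real extension would presumably read the tags from the cell metadata and apply the transformation to the source passed to the kernel for execution, rather than editing the visible cell contents.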

It’s not hard to imagine tags also being used to support an element of polyglot language execution behaviour; for example, a magic can be used to modify a cell to allow the execution of R code in an IPython notebook. So why not just let the user use a cell tag to invoke that behaviour?

PS I’m happy for folk to tell me why they think this is a ridiculous idea..! Feel free to post in the comments…


Having been going to storytelling festivals and events for 20+ years, and finally joining a storytelling group in the summer before the first lockdown, I started telling tales at folk clubs on the island at the end of last autumn.

Last month, along with a couple of other Island Storytellers, Sue and Holly, we did a charity gig as Three Island Storytellers:

Last night, I had a feature slot at the Waverley folk night in Carisbrooke, and got to do my first handwritten set list…

More dates will hopefully be announced soon… ;-)

Tinkering Towards a JupyterLab Cell Status Indicator Extension

Having managed to build one extension (a JupyterLab version of empinken), I thought I’d have a go at another, a JupyterLab version of the classic notebook innovationoutside/nb_cell_execution_status extension, which colours a code cell’s run indicator according to whether the cell is running (or queued for execution), has run successfully, or ran with an error.

You can try it, via JupyterLite (howto), here: innovationoutside/nb_cell_execution_status

The code in its minimal form is really simple, although that isn’t to say it didn’t take me a disproportionate amount of time to find the methods I guessed might help implement the feature, and then to get the actual extension compiled and working. I still really, really, really struggle with the whole JupyterLab thing. The more I use it, the more I don’t want to, if that were possible…! ;-)

As a starting point, I used the JupyterLab Typescript extension cookiecutter (e.g. as per here). I needed to import one additional package, import { NotebookActions } from '@jupyterlab/notebook';, which also needs adding to the dependencies of the package.json file as "@jupyterlab/notebook": "^3.3.4".
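For reference, the corresponding package.json fragment (assuming the version given above) might look something like:

```json
{
  "dependencies": {
    "@jupyterlab/notebook": "^3.3.4"
  }
}
```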

I would have had a nice git diff from a single check-in after the “base” cookiecutter check-in, showing just what was required. But the reality is that there are dozens of commits in the repo showing the numerous craptastic and ignorant end-user-dev attempts I kept making to try to get things to work. Have I mentioned before how much I dislike everything about JupyterLab, and its development process?! ;-)

Essentially, all the extension requires is that you subscribe to signals that fire when a code cell has been queued for execution, or when it completes execution.

Specifically, when one or more code cells are selected and run, a NotebookActions.executionScheduled signal is issued. We can subscribe to this signal, remove any classes previously added to the cell DOM in the notebook HTML that relate to the status of previous successful or unsuccessful cell execution (cell.inputArea.promptNode.classList.remove()), and add a new class (cell.inputArea.promptNode.classList.add()) to indicate a scheduled status.

When the cell has completed execution, a NotebookActions.executed signal is raised. This also carries with it information regarding whether the cell executed successfully or not; we can remove the scheduled-or-executing class from the cell DOM in the notebook HTML, and then use this cell executed status to set a class identifying the executed status.

Note, there are also various notebook API methods, but I’m not really sure how to tap into those appropriately….

import {
  JupyterFrontEnd,
  JupyterFrontEndPlugin
} from '@jupyterlab/application';

import { NotebookActions } from '@jupyterlab/notebook';
//import { IKernelConnection } from '@jupyterlab/services/Kernel IKernelConnection';
import { ISettingRegistry } from '@jupyterlab/settingregistry';

/**
 * Initialization data for the jupyterlab_cell_status extension.
 */
const plugin: JupyterFrontEndPlugin<void> = {
  id: 'jupyterlab_cell_status_extension:plugin',
  autoStart: true,
  optional: [ISettingRegistry], //, IKernelConnection]
  activate: activate
};

function activate(
  app: JupyterFrontEnd,
  settingRegistry: ISettingRegistry | null
): void {
  console.log('jupyterlab_cell_status_extension:plugin activating...');
  // We can use settings to set the status colours
  if (settingRegistry) {
    settingRegistry
      .load(plugin.id)
      .then(settings => {
        console.log('jupyterlab_cell_status_extension:plugin: loading settings...');
        const root = document.documentElement;
        const updateSettings = (): void => {
          const queue_color = settings.get('status_queue').composite as string;
          const success_color = settings.get('status_success').composite as string;
          const error_color = settings.get('status_error').composite as string;
          root.style.setProperty('--jp-cell-status-queue', queue_color);
          root.style.setProperty('--jp-cell-status-success', success_color);
          root.style.setProperty('--jp-cell-status-error', error_color);
        };
        updateSettings();
        console.log('jupyterlab_cell_status_extension:plugin: loaded settings...');
        // We can auto update the colours if the settings change
        settings.changed.connect(updateSettings);
      })
      .catch(reason => {
        console.error(
          'Failed to load settings for jupyterlab_cell_status_extension.',
          reason
        );
      });
  }

  // This was a start at sketching whether I could
  // reset things if the kernel was restarted.
  // Didn't get very far though?
  //IKernelConnection.connectionStatusChanged.connect((kernel, conn_stat) => {
  //  console.log('KERNEL ****' + conn_stat);
  //});

  // (The status class names below are illustrative)
  NotebookActions.executed.connect((_, args) => {
    // The following construction seems to say
    // something akin to: const cell = args["cell"]
    const { cell } = args;
    const { success } = args;
    // If we have a code cell, update the status
    if (cell.model.type == 'code') {
      cell.inputArea.promptNode.classList.remove('cell-status-scheduled');
      if (success) {
        cell.inputArea.promptNode.classList.add('cell-status-success');
      } else {
        cell.inputArea.promptNode.classList.add('cell-status-error');
      }
    }
  });

  NotebookActions.executionScheduled.connect((_, args) => {
    const { cell } = args;
    // If we have a code cell
    // set the status class to "scheduled"
    // and remove the other classes
    if (cell.model.type == 'code') {
      cell.inputArea.promptNode.classList.remove(
        'cell-status-success',
        'cell-status-error'
      );
      cell.inputArea.promptNode.classList.add('cell-status-scheduled');
    }
  });

  console.log('jupyterlab_cell_status_extension:plugin activated...');
}

export default plugin;

To build a Python wheel for an installable version of the extension, I use the build command:

jlpm clean && jlpm run build && python3 -m build

When a cell is running, or queued for execution, the cell status area is now highlighted blue by default. Successful cell execution is denoted by green, unsuccessful cell execution by red.

Another disproportionate amount of time was spent trying to figure out why the settings weren’t being loaded. I tracked this down to a mismatch between the extension id defined in the index.ts file and the name in the schema/plugin.json file. It still didn’t work, but a repeated “comment out all the settings code, rebuild, commit to Github, republish the JupyterLite distro” process, uncommenting a line at a time, revealed: no errors, and it had started working. So I have no idea what went wrong.

If that iterated process sounds faintly stupid, it was; each iteration took maybe five minutes (two to build the extension, two to republish the JupyterLite distribution). That’s partly because I was suspicious that my local build process wasn’t looking on the correct path. (I know that JupyterLite builds don’t respect paths on my local machine but work fine when run on Github via a Github Action.)

Anyway, the settings seem to work now to allow you to set the status indicator colours, though after a change you need to close a notebook then open it again to see the change reflected.

If you close a notebook and open it again, whether or not you stop the kernel, the status colouring is lost. I did start to wonder whether I should persist the status as notebook metadata (eg how to set tags can be found here) to allow a notebook opened against a still running kernel to show the cell status (classing the cell based on cell metadata), and then manage persistent tags and classes based on cell execution status and notebook kernel signals (eg removing all status indications if the kernel is stopped). I’m not sure how to access the notebook/cells from inside an IKernelConnection.connectionStatusChanged connected handler though? (My thinking is that the cell status indicators should give you a reasonable indication of whether a particular cell has contributed to the kernel state. I wonder if there’s any code in nbsafety-project/nbsafety that might help me think this through a bit more?)

I guess the next step is to see if I can also add the audio indicators that I’d added to the classic notebook extension too…