Editing Text in the Browser

Via the Guardian Developer blog, a post — Leaving Scribe — describing how the Guardian is moving away from its Scribe in-browser text editor to a new one based on ProseMirror, an open-source toolkit “for building rich-text editors on the web” that is also used by the New York Times.

In-browser editors are not something I know much (i.e. anything) about, but the Leaving Scribe post provides a handy review of what’s good to know (like how markup is handled). Go and read it now…

It seems like the Guardian folk have many of the same issues as we do in the OU. For example:

Another area where HTML as a model falls down is editor-only annotations (markup that helps the writer but is detrimental to the reader). Take for example the need to highlight a word in the text that meets some criteria (a suggested tag, or some legal issue around using this word). You may want to show an inline annotation to ask the editor whether they want to add this as a tag.

The problem here is that now we have data that is not part of the document, and yet it is modelled as part of our document. This is technically solvable but again, the DOM API is not well suited for handling this sort of data modelling, especially when the usage of these features becomes more complex. As you start to force more complex features through an HTML data model you have to do more and more work to get around HTML’s limitations around modelling a rich text document and you hit more and more of the browser inconsistencies.

Features of ProseMirror based editors apparently include collaborative editing and an extensible schema. This last one is interesting from an OU perspective, because we have a workflow in which content is published from an internal XML document feedstock.

The important difference between Scribe and ProseMirror is that ProseMirror implements its own model layer that has a one-to-one mapping from semantics to the model, and an API that is made with document transformation in mind – not least collaborative editing.

An image representation of ProseMirror's model

In ProseMirror, inline content is flat rather than a tree, which means operations like changing styles on text don’t require any tree manipulation. Nodes (h1, p, blockquote etc.) are still modelled as a tree, but again, this accurately models how users think about things like paragraphs and lists, and it’s almost always how they’re rendered when consuming an article.

I’m not sure if the halted OU Create project was using ProseMirror? (I never really found out any technical details and I was banned from posting screenshots or discussing [di(scu)ssing?!] it in public!;-)

We hope in time to be able to get our editor to a point that it is able to be open-sourced but we’ll only do this if we believe we have the documentation and resource in place for that to be useful to users outside the Guardian.

Ah ha… It’d be nice if an OU solution could work in an open-sourcey way, or perhaps join forces with others to get such code out there…

One of the things I’ve been pondering lately is how to generate OU XML from Jupyter notebooks, as well as how to demonstrate rich text authoring in notebooks using things like the jupyter-wysiwyg editor (I wonder how easy it would be to modify that extension to work with other rich editors?)

So I wonder a couple of things:

  • how easy would it be to extend ProseMirror to support the OU XML schema?
  • could this customised editor then be used as a rich editor inside a Jupyter notebook markdown cell? (Would it need tweaks to the markdown2html renderer, or an OU-XML2HTML previewer?)

I’m also thinking that OU-XML has a lot of metadata elements which could be embedded as notebook metadata, with just a subset of the OU-XML being supported within the markdown cells. (Markdown cells could also have metadata associated with them.)

I think we could probably get a clunky workflow going quite quickly for authoring OU-XML docs from within Jupyter notebooks if anyone else was interested in exploring it with me…

First Play With nbgallery

Having hacked together a bulk uploader for nbgallery and uploaded the TM351 notebooks to a test environment, I’m now in a position to start having a play with it.

All public notebooks are searchable, so how does the search fare?

The search box, top right, gets a little bit lost in the search results listing. It could be handy to at least print out the search string (“Searching for: …”) at the top of the results list, if not make the search box larger and move it to a more central location. The search results themselves take the form of the name/description/tags of each hit (i.e. the notebook metadata) along with a fragment showing how the search terms appeared in context within the notebook.

Some of my earlier experiments on notebook search here and here also show context.

A range of options are provided for ordering the results. Trending looks like it could be interesting (this is based on recent views, presumably), for example where students are searching notebooks relevant to the current week’s study.

That said, we can also display notebooks by tag, so it’s easy enough to display notebooks associated with a particular week’s study if we tag notebooks by study week:

(One thing I noticed zooming out on the page to grab the above screenshot is that the font size of the notebook titles doesn’t seem to respond to the zoom level; it would probably be worth checking to see if there are other accessibility issues.)

If we click through on a result, we see a list of related notebooks followed by a preview of the notebook. (nbgallery strips out all cell outputs on upload, so no cell outputs are displayed).

To search through the preview, we can use a normal browser in-page search (ctrl/cmd-F).

A range of options are provided to support community activity around a notebook for logged in users, including the ability to “star” a notebook, provide feedback or add a comment:

Logged in users can also click on the notebook tags to edit them.

Via the Further options menu, users can view various notebook metrics, email a notebook, or propose a change request:

The metrics available include number of views, runs, stars and the edit history.

If comments have been provided, the number indicator by the comment flag shows how many comments have been received, although this only appears on the notebook page. There doesn’t appear to be an indicator of how many comments are associated with a notebook on the search results page, nor did I spot a general “recent comments” feed anywhere.

When you post a comment, there is no indication that you have done so and the form remains in place. You need to close it manually. (Hitting “Post Comment” again just pops up a “can’t do that” alert on the grounds that you’re trying to post a duplicate comment.)

The comments themselves look as if they are an ordered (rather than threaded) list. It also looks like any signed-in user can edit anybody else’s comments?

Users who aren’t signed in can download a notebook, but not star it, comment on it, modify the tags etc.

When I tried to add feedback, I got an error:

I’m not sure if there are settings I need to tweak to address that?

Logged in users can also run a notebook from nbgallery via an associated notebook server. (I’d prefer it if the Run in Jupyter flash wasn’t displayed if there isn’t a linked notebook server available for the logged in user.) For example, running a notebook server on port 443 on the same host as nbgallery using the nbgallery notebook container:

docker run --rm -p 443:443 -e "NBGALLERY_URL=http://localhost:3000" -e "NBGALLERY_CONFIG_TOKEN=letmein" nbgallery/jupyter-alpine

starts a notebook server with the nbgallery extension pre-installed.

We can view the notebook server homepage on https://localhost:443 and log into it using the token-as-password letmein. Running the container in the way described above also gives permission for the nbgallery server running on http://localhost:3000 to open notebooks via the notebook server.

Within nbgallery itself, a logged in user can associate one or more Jupyter environments via the user menu:

Each environment is given a name and the URL of the associated notebook server (in this case, https://localhost:443):

When a notebook server is associated with a user, notebooks can be opened from nbgallery within the notebook server.

If we create a new notebook in the linked notebook server, we can upload it to nbgallery, adding a title, description and optional tags as in a manual notebook upload step:

If we modify the notebook that is linked to one in the gallery (that is, that has been uploaded to the gallery or launched from the gallery), we can save a change to the gallery or submit a change request:

When uploading a new version, you can add tags but not additional comments such as a commit message:

Viewing the notebook details in nbgallery, we can see a summary of the change history:

We can also click through to a preview of each version of the notebook:

(The revision number doesn’t appear in the change history though, so it can be hard to reconcile a particular version with its appearance in the change history listing.)

A logged in user can make a change request to someone else’s notebooks by uploading a new version of them or by opening the notebook in the linked notebook server and submitting a change request:

When I submitted the change request, I got an error form in response, but it looks like the change request was made, as this listing of Change Requests from the user menu suggests:

An exclamation mark by the user menu also identifies that change requests are pending.

Viewing the change request provides a view over the current version of the notebook and the proposed changes. Notebooks can be viewed alongside each other or the diffs can be viewed:

The thumbs up/down indicators are used to accept or deny a change request, along with a brief comment:

Accepted changes replace the current version of the notebook, and the change is logged in the change history. Denied change requests are recorded as such in the change requests list, with a link to the version of the notebook containing the unsuccessfully proposed changes:

If feedback was provided, a comment icon identifies its presence and pops up the feedback in a tooltip when hovered over.

Health stats for linked and run notebooks are supposed to be available, but I couldn’t get those to work (as far as the health stat reports were concerned, the notebooks were never run no matter how many times I ran them), so maybe I’m missing something there in the setup too? [UPDATE: health stats require the container to be run with an instrumentation flag set: notebook instrumentation docs; specifically, -e NBGALLERY_ENABLE_INSTRUMENTATION=1 in the docker command line.]
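
For the record, that presumably means launching the notebook container with something along the lines of the earlier command, plus the instrumentation flag:

docker run --rm -p 443:443 -e "NBGALLERY_URL=http://localhost:3000" -e "NBGALLERY_CONFIG_TOKEN=letmein" -e "NBGALLERY_ENABLE_INSTRUMENTATION=1" nbgallery/jupyter-alpine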

I’m not sure how well this would work for managing TM351 notebooks compared to our current Github workflow (which I should write up somewhere). The error responses (whether they’re valid or not) for change requests and feedback are confusing, and I’m not sure how the feedback is handled if and when it works. Not being able to spot new comments easily (unless I’m missing something) could be a bit of a pain. That said, the proof would be in the testing-through-use, so I’ll maybe give it a week or two’s trial with some of my own notebook workflows.

In terms of use with students, it could be useful to provide a version of nbgallery with notebooks runnable by students without them having to log in to it. It could also be useful if notebooks could be run ‘inline’ from the notebook preview pages, for example using something like ThebeLab or Voila, particularly if a particular Binderhub repo / config could be specified in metadata somewhere.

On Not Faffing Around With Jupyter Notebook Docker Container Auth Tokens

Mark this post as deprecated… There already exists an easy way of setting the token when starting one of the Jupyter notebook Docker containers: -e JUPYTER_TOKEN="easy; it's already there". In fact, things are even easier if you export JUPYTER_TOKEN='easy' in the local environment, and then start the container with docker run --rm -d --name democontainer -p 9999:8888 -e JUPYTER_TOKEN jupyter/base-notebook (which is equivalent to -e JUPYTER_TOKEN=$JUPYTER_TOKEN). You can then autolaunch into the notebook with open "http://localhost:9999?token=${JUPYTER_TOKEN}". H/t @minrk for that…
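
For the record, the full pattern looks something like this (the port mapping and container name are arbitrary):

#Set the token in the local environment
export JUPYTER_TOKEN='easy'

#Pass it through to the container; -e JUPYTER_TOKEN picks up the local value
docker run --rm -d --name democontainer -p 9999:8888 -e JUPYTER_TOKEN jupyter/base-notebook

#Autolaunch into the notebook server
open "http://localhost:9999?token=${JUPYTER_TOKEN}"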

[UPDATE: an exercise in reinventing the wheel… This is why I should really do something else with my life…]

I know they’re there for good reason, but starting the official Jupyter containers requires that you enter a token created when you launch the container, which means you need to check the docker logs…

In terms of usability, this is a bit of a faff. For example, the example URL is not necessarily the correct one (it specifies the port the notebook is running on inside the container rather than the exposed port you have mapped it to).

If you start the container with a -d flag, you don’t see the token (something that looks like a token is printed out, but it’s not the token: it’s the docker-created container id…). However, you can see the log stream containing the token using Kitematic.
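
You can also fish the token out of the logs from the command line — a sketch, assuming a container named democontainer:

docker logs democontainer 2>&1 | grep '?token='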

If you go directly to the notebook page without the token argument, you’ll need to log in with it, or with a default password (which is not set in the official Jupyter Docker images).

To provide continued authenticated access, you also have the opportunity at the bottom of that screen to swap the token for a new password (this is via the c.NotebookApp.allow_password_change setting which by default is set to True):

I think the difference between the default token and a password is that in the config file, if you specify a token via the c.NotebookApp.token argument, you do so in plain text, whereas the c.NotebookApp.password setting takes a hashed value rather than the plain text password. If you set c.NotebookApp.token='', you can get in without a token. For a full set of config settings, see the Jupyter notebook config file and command line options.
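
For example — a minimal sketch, assuming the classic notebook server’s notebook.auth.passwd() helper — we can generate a hash to paste into the config file:

#Generate a salted hash for use as the c.NotebookApp.password value
python -c "from notebook.auth import passwd; print(passwd('letmein'))"
#Prints something of the form sha1:SALT:HASH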

So, can we balance the need for a small amount of security without going to the extreme of disabling auth altogether?

Here’s a Dockerfile I’ve just popped together that allows you to build a variant of the official containers with support for tokenless or predefined token access:

#Dockerfile
FROM jupyter/minimal-notebook

#Configure container to support easier access
ARG TOKEN=-1
RUN mkdir -p $HOME/.jupyter/
RUN if [ "$TOKEN" != "-1" ]; then echo "c.NotebookApp.token='$TOKEN'" >> $HOME/.jupyter/jupyter_notebook_config.py; fi

We can then build variations on a theme by running the following build commands in the same directory as the Dockerfile:

# Automatically generated token (default behaviour)
docker build -t psychemedia/quicknotebook .

# Tokenless access (no auth)
docker build -t psychemedia/quicknotebook --build-arg TOKEN='' .

# Specified one time token (set your own plain text one time token)
docker build -t psychemedia/quicknotebook --build-arg TOKEN='letmein' .

And some more handy administrative commands, just for the record:

#Run the container
docker run --rm -d -p 8899:8888 --name quicknotebook psychemedia/quicknotebook
##Or:
docker run --rm -d --expose 8888 --name quicknotebook psychemedia/quicknotebook

#Stop the container
docker kill quicknotebook

#Tidy up after running if you didn't --rm
docker rm quicknotebook

#Push container to Docker hub (must be logged in)
docker push psychemedia/quicknotebook

I’m also starting to wonder whether there’s an easy way of using Docker ENV vars (passed in the docker run command via a -e MYVAR='myval' pattern) to allow containers to be started up with a particular token, not just created with specified tokens at build time? That would take some messing around with the container start command though…

There’s a handy guide to Dockerfile ARG and ENV vars here: Docker ARG vs ENV.

Hmm… looking at the start.sh script that runs as part of the base notebook start CMD, it looks like there’s a /usr/local/bin/start-notebook.d/ directory that can contain files that are executed prior to the notebook server starting…

So we can presumably just hack that to take an environment variable?

So let’s extend the Dockerfile:

#Carry the build-time ARG over as a runtime ENV default
ENV TOKEN=$TOKEN
USER root
RUN mkdir -p /usr/local/bin/start-notebook.d/
#Escape $TOKEN and $HOME so they're evaluated at container start time, not at build time
RUN echo "if [ \"\$TOKEN\" != \"-1\" ]; then echo \"c.NotebookApp.token='\$TOKEN'\" >> \$HOME/.jupyter/jupyter_notebook_config.py; fi" >> /usr/local/bin/start-notebook.d/tokeneffort.sh
RUN chmod +x /usr/local/bin/start-notebook.d/tokeneffort.sh
USER $NB_USER

Now we should also be able to set a one time token when we run the container:

docker run -d -p 8899:8888 --name quicknotebook -e TOKEN='letmeout' psychemedia/quicknotebook
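
Assuming the start hook fires as intended, we can then autolaunch into the notebook server using the predefined token:

open "http://localhost:8899/?token=letmeout"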

Useful? [Not really, completely pointless; passing the token as an environment variable is already supported (which raises the question; how come I’ve kept missing this trick?!) At best, it was a refresher in the use of Dockerfile ARG and ENV vars.]

Viewing Dockerised Desktops via an X11 Bridge, novnc and RDP, Sort of…

So… the story so far…

As regular readers of this blog will know, I happen to be of the opinion that we should package more of OUr software using Docker containers, for a couple of reasons (at least):

  • we control the software environment, including all required dependencies, and avoiding any conflicts with preinstalled software;
  • the same image can be used to launch containers locally or remotely.

I also happen to believe that we should deliver all UIs through a browser. Taken together with the containerised services, this means that students just need a browser to run course related software. Which could be on their phone, for all I care.

I keep umming and ahhing about electron apps. If the apps that are electron wrapped are also packaged so that they can also be run as a service and accessed via a browser too, that’s fine, I guess…

There are some cases in which this won’t work. For example, not all applications we may want to distribute come with an HTML UI; instead, they may be native applications (which is an issue because we are supposed to be platform independent), or cross platform applications that use native widgets (for example, Java apps, or electron apps).

One way round this is to run a desktop application in a container and then expose its UI using X11 (aka the X Window System), although this looks like it may be on the way out in favour of other windowing alternatives, such as Wayland… See also Chrome OS Is Working To Remove The Last Of Its X11 Dependencies. (I am so out of my depth here!)

Although X11 does provide a way of rendering windows created on a remote (or containerised guest) system using native windows on your own desktop, a downside is that it requires X11 support on your own machine; and I haven’t found a cross-platform X11 server that looks to be a popular de facto standard.

Another approach is to use VNC, in which the remote (or guest) system sends a compressed rendered version of the desktop back to your machine, which then renders it. (See X11 on Raspberry Pi – remote login from your laptop for a discussion of some of the similarities and differences between X11 and VNC.)

Note to self – one of the issues I’ve had with VNC is the low screen resolution of the rendered desktop… but is that just because I used a default low resolution in the remote VNC server? Another issue I’ve had in the past with novnc, a VNC client that renders desktops using HTML via a browser window, relates to video and audio support… Video is okay, but VNC doesn’t do audio?

Earlier today, I came across x11docker, which claims to run GUI applications and desktops in docker (though on Windows and Linux desktops only). The idea is that you “just type x11docker IMAGENAME [COMMAND]” to launch a container, and an X11 connection is made that allows the application to be rendered in a native X11 window. You can find a recipe for doing something similar on a Mac here: Running GUI’s with Docker on Mac OS X.

But that all seems a little fiddly, not least because of a dependency on an X11 client which might need to be separately installed. However, it seems that we can use another Docker container — JAremko/docker-x11-bridge — running xpra (“an open-source multi-platform persistent remote display server and client for forwarding applications and desktop screens”) as a bridge that can connect to an X11-serving docker container and render the desktop in a browser.

For example, Jess Frazelle’s collection of Dockerfiles containerise all manner of desktop applications (though I couldn’t get them all to work over X11; maybe I wasn’t starting the containers correctly?). I can get them running, in my browser, by starting the bridge:

docker run -d \
 --name x11-bridge \
 -e MODE="tcp" \
 -e XPRA_HTML="yes" \
 -e DISPLAY=:14 \
 -p 10000:10000 \
 jare/x11-bridge

and then firing up a couple of applications:

docker run -d --rm  \
  --name firefox \
  --volumes-from x11-bridge \
  -e DISPLAY=:14 \
  jess/firefox

docker run -d --rm  \
  --name gimp \
  --volumes-from x11-bridge \
  -e DISPLAY=:14 \
  jess/gimp

#Housekeeping
#docker kill gimp firefox
#docker rm gimp firefox
#docker rmi jess/gimp jess/firefox

Another approach is to use VNC within a container, an approach I’ve used with this DIT4C Inspired RobotLab Container. (The DIT4C container is quite old now; perhaps there’s something more recent I should use? In particular, audio support was lacking.)

It’s been a while since I had a look around for good examples of novnc containers, but this Collection of Docker images with headless VNC environments could be a useful start:

  • Desktop environment: Xfce4 or IceWM
  • VNC server (default VNC port 5901)
  • noVNC HTML5 VNC client (default http port 6901)

The containers also allow screen resolution and colour depth to be set via environment variables. The demo seems to work (without audio) using novnc in a browser, and I can connect using TigerVNC to the VNC port, though again, without audio support.
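
For example, a minimal sketch, assuming the consol/ubuntu-xfce-vnc image from that collection and its VNC_RESOLUTION / VNC_COL_DEPTH environment variables:

docker run -d -p 5901:5901 -p 6901:6901 -e VNC_RESOLUTION=1600x960 -e VNC_COL_DEPTH=24 consol/ubuntu-xfce-vnc

The desktop should then be viewable via novnc in a browser on port 6901.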

Audio is a pain. On a Linux machine, you can mount an audio device when you start a novnc container (eg fcwu/docker-ubuntu-vnc-desktop), but I’m not sure if that works on a Mac, or how it’d work on Windows? That said, a few years ago I did find a recipe for getting audio out of a remote container that did seem to work — More Docker Doodlings – Accessing GUI Apps Via a Browser from a Container Using Guacamole — although it seems to be broken now (did the container format change in that period, I wonder?). Is there a more recent (and robust) variant of this out there somewhere, I wonder?

Hmm… here’s another approach: using a remote desktop client. Microsoft produce RDP (Remote Desktop Protocol) clients for different platforms so that might provide a useful starting point.

This repo — danielguerra69/firefox-rdp — builds on danielguerra69/dockergui (fork of this) and shows how to create a container running Firefox that can be accessed via RDP. If I run it:

docker run --rm -d --shm-size 1g -p 3389:3389 --name firefox danielguerra/firefox-rdp

I can create a connection using the Microsoft remote desktop client at the address localhost:3389, login with my Mac credentials, and then use the application. Testing on Youtube shows that video and audio work too. So that’s promising…

(Docker housekeeping: docker kill firefox; docker rm firefox; docker rmi danielguerra/firefox-rdp.)

Hmmm… so maybe now we’re getting somewhere more recent. Eg danielguerra69/ubuntu-xrdp, although this doesn’t render the desktop properly for me, and danielguerra69/alpine-xfce4-xrdp, which doesn’t play out the audio? Ah, well… I’ve wasted enough time on this for today…

Running OpenRefine in the Clear on Digital Ocean

In a couple of earlier posts, I’ve described how to get OpenRefine up and running remotely over the web by installing the OpenRefine server onto a Digital Ocean Linux server and running it there behind a simple authenticating proxy (Running OpenRefine On Digital Ocean Using Simple Auth and the more automated Authenticated OpenRefine Server on Digital Ocean, Redux).

In this post I’ll show how to set up a simple OpenRefine server, without authentication, using Docker (I’ll show how to add in the authenticating nginx proxy in a follow on post).

Docker is a virtualisation technology that draws heavily on the idea of “containers”: isolated computational environments that provide just enough operating system to run a particular application within them.

As well as hosting raw Linux servers, Digital Ocean also provides Linux-servers-with-docker as a one-click application.

Here’s how to start a docker machine on Digital Ocean.

Creating a Digital Ocean Docker Droplet

First up, create a new droplet as a one-click app, selecting docker as the one-click application type:

To give ourselves some space to work with, I’m going to choose the 3GB server (it may work with default settings in a 2GB server, or it may ruin your day…). It’s metered by the hour, but it’ll still only cost a few pennies for a quick demo. (You can also get $100 free credit as a new user if you sign up here.)


Select a data center region (I typically go for a local one):

If you want to, add your SSH key (recipe here, but it’s not really necessary: the ssh key just makes it easier for you to login to the server from your own computer if you need to. If you haven’t heard of ssh keys before, ignore this step!)

Hit the big green button to create your droplet (if you want to, give the server a nicer hostname first…).

Accessing the Digital Ocean Droplet Server Terminal

Your one-click docker server will now start up. Once it’s there (it should take less than a minute), click through to its admin page. Assuming you haven’t added ssh keys, you’ll need to log in through the console. The login details for your server should have been emailed to the email address associated with your Digital Ocean account. Use them to login.

On first login, you’ll be prompted to change the password (it was emailed to you in plain text after all!)

If you choose a really simple replacement password, you may need to choose another one. Also note that the (current) UNIX password was the one you were emailed, so you’ll essentially be providing this password twice in quick succession (once for the first login, then again to authorise the enforced password change). Copying and pasting the password into the console from your email should work…

Once you’ve changed your password, you’ll be logged out and you’ll have to log back in again with your new password. (Isn’t security a faff?! That’s why ssh keys…!;-)

Now you get to install and launch OpenRefine. I’ve got an example image here, and the recipe for creating it here, but you don’t really need to look at that if you trust me…

What you do need to do is run:

docker run -d -p 3333:3333 --name openrefine psychemedia/openrefinedemo

What this command does is download and run the container psychemedia/openrefinedemo, naming it (purely for our convenience) as openrefine.

You can learn how to create an OpenRefine docker image here: How to Create a Simple Dockerfile for Building an OpenRefine Docker Image.

The -d flag runs the container in “detached”, standalone mode (in the background, essentially). The -p 3333:3333 is read as -p PUBLICPORT:INTERNALPORT. The OpenRefine server is started on INTERNALPORT=3333, and we’re also mapping it to public port 3333, which is the port we’ll use in the URL.

The container will take a few seconds to download if this is the first time you’ve called for it:

and then it’ll print out a long id number once it’s launched and running in the background.

(You can check it’s running by running the command docker ps.)

In both the terminal and the droplet admin pages, as well as the droplet status line in the current droplet listing pages, you should see the public IP address associated with the droplet. Copy that address into your browser and add the port mapping (:3333). You should now be able to see a running version of OpenRefine. (And so should anyone else who wanders by that URL:PORT combination…)

Let’s now move the application to another port. We could do this by launching another container, with a new unique name (container names, when we assign them, need to be unique) and assigned to another port. (The OpenRefine internal service port will remain the same). For example:

docker run -d -p 3334:3333 --name openrefine2 psychemedia/openrefinedemo

This creates a new container running a fresh instance of the OpenRefine server. You should see it on IPADDRESS:3334.

(Alternatively, we can omit the name and a random one will be assigned; for example, docker run -d -p 3335:3333 psychemedia/openrefinedemo.)

Note that the docker image does not need to be downloaded again. We simply reuse the one we downloaded previously, and spawn a new instance of it as a new container.

Each container does take up memory though, so kill the original container:

docker kill openrefine

and remove it:

docker rm openrefine.

For a last quick demo, let’s create a new instance of the container, once again called openrefine (assuming we’ve removed the one previously called that), and run it on port 80, the default http port, which means we should be able to see it directly by going to just the IPADDRESS (with no port specified) in our browser:

docker run -d -p 80:3333 --name openrefine psychemedia/openrefinedemo

When you’re done, you can halt the droplet (in which case, you’ll keep on paying rent for it) or destroy it (which means you won’t be billed for any additional hours, or parts thereof, on top of the time you’ve already been running the droplet):

You don’t need to tidy up around the docker containers, they’ll die with the droplet.

So, not all that hard, is it? Probably a darn sight easier than trying to get anything out of your local IT unit?!

In the next post, I’ll show how to combine the container with another one containing nginx to provide some simple authentication. (There are lots of prebuilt containers out there we can just take “off-the-shelf”, and nginx is one of them.) I’ll maybe also have a look at how you might persist projects in hibernating container / droplet, perhaps look at how we might be able to upload files that OpenRefine can work on, and maybe even try to figure out a way to simply synch your project files from Digital Ocean to your own file storage location somewhere. Maybe…

PS third party nginx proxy example: https://github.com/beevelop/docker-nginx-basic-auth

Self-help Edjucashun

Never having learned to read music or play a musical instrument as a kid, I’m finding learning to play the harp quite incredible. The feedback loops between seeing marks on paper, speaking out the name of each note played (as recommended by several of the guides/tutorials I’ve seen), developing muscle memory and hearing audio feedback is just an amazing learning experience.

Progress is slow, and I’m struggling with metre and note length. I really should get a lesson or two with a teacher, not least so I can hear what my elementary practice tunes are supposed to sound like. (I have no idea what sort of models Google is building around all the Youtube videos of young children I seem to be watching (kids doing their practice pieces… You can probably imagine the level I’m at given I aspire to be that good!))

So… self-help… there’s loads of music related web apps out there, so I figured it might be useful to try to transcribe some of my practice tunes into a form that I can get some idea of what they should sound like.

The language I’ve opted for is abc, via abcjs (repo), which I discovered via the music21 package (see some music21 demos here); but it doesn’t need any of the Python machinery to run — it works directly in the browser.

Here’s an example of what it looks like:

X: 1
T: Blue Bells of Scotland
M: 4/4
L: 1/8
K: C
V:R
G2|c4B2A2|G4A2Bc|z8|z4z2G2|
V:L
z2|z8|z8|E2E2E2D2|C6z2|
V:R
|c4B2A2|G4A2Bc|z8|z8|
V:L
|z8|z8|E2E2F2D2|C6G2|
V:R
z8|c4G2Bc|B2G2A2B2|G4A2B2|
V:L
E2C2E2G2|z8|z8|z8|
V:R
c4B2A2|G4A2Bc|z8|z6|]
V:L
z8|z8|E2E2F2D2|C6|]

The M field gives the meter, the L the unit note length for the piece, and K is the key. V:R and V:L record right and left hand staves. Each separate line in the abcjs script corresponds to a separate line of music.

There are some handy notes (doh!) here — How to understand abc (the basics) — and some more complete docs here: The abc music standard 2.1.

I’ve found that transcribing from sheet music to abc notation is also helping my music reading. The editor I use — https://abcjs.net/abcjs-editor.html — provides live rendering of the notes, so it’s easy to get visual feedback, as I write in the notation, about whether I’ve read to myself, and written, the correct note.

(The red highlight in the score follows the cursor position in the text editor.)

As well as live rendering of the score as you transcribe, you can also play back the tune using the embedded music player. (I’m not sure if it’s possible to change the instrument type? It defaults to a sort-of piano…) The tempo is set by the Q parameter in beats per minute (e.g. Q: 1/4=100 for one hundred quarter-note beats a minute), so it’s easy enough to speed up and slow down the playback.

FWIW, I’ll start popping related tinkerings and doodlin’s here: psychemedia/harperin-on. abcjs also supports adding things like fingerings for each note, but I don’t want to break copyright too much when I do post transcribed scores, so I’ll be omitting those…

As far as learning goes, learning to write abc notation will also help me learn to read music better, I think, and to read it a bit more deeply.

It’s ages since I learned a new sort of thing (though I have also been trying to learn Polish pronunciation so I can sound out names appropriately in a history of Poland I’m reading at the moment). It’s fun, isn’t it?! And soooo time disappearing…

What is Coding?

I have no idea…

Here’s a first attempt:

the act of creating machine readable representations using formal syntax.

Which is to say:

  • act: something practical, possibly purposive (so should that be intentional act?), which also makes it a skill and a craft?
  • creating: so it’s about doing something new, which also admits of having to solve problems along the way, perhaps being inventive and playful.
  • machine readable: so coding produces something that a computer is capable of processing; does this implicitly unpack further, though, to take in notions of the machine actually processing the code in order to bring about some sort of state transformation? So maybe replace machine readable with machine interpretable and executable? But you don’t have to execute code? Eg if I encode a mathematical formula in LaTeX, the machine will interpret that code to render the typographically laid out equation, but it hasn’t executed the code. So maybe machine interpretable and/or executable?
  • representations: this is not so much about what the code looks like to us, but the way we use it to create models that represent something “meaningful” to us, in a form that the machine can process and that remains meaningful to us. Again, this admits of problem solving and the need to be creative, but also starts to bring in unstated ideas that the representation somehow needs to be coherent and stand in some sort of sensible relationship to the sort of thing it is representing?
  • using: so coding is about doing something with something…
  • formal: …that something being formally defined and bound/constrained…
  • syntax: …by a set of rules that determine how the representations are declared and the form in which those representations should be stated. Does adding “and grammar” help? Do programming languages add grammatical elements over and above syntactic rules? Is dot notation, for example, a morphological feature or a syntactic one?

Note that there is nothing in there that distinguishes between text based languages and graphical languages (for example). Nor is the word language mentioned explicitly.