On Strikes and Publishing…

Being a member of the union, I’m on strike for as long as it lasts. One of the grounds for the strike is manageable workloads, so I was rather surprised to be asked yesterday evening (erm… evening… ;-) to comment on the final version (the revisions made in light of reviewers’ comments) of a paper I’m named on that needs to be returned before the strike is over.

My formal academic publishing record is so poor I guess I shouldn’t begrudge any opportunity to get entered into the REF, but there’s a but…

One of the issues I have with academic publishing is the relationship between academia and the publishing industry. The labour and intellectual property rights are gifted by academics and academic institutions to the publishers, then the academic institutions pay the publishers to access the content.

As an employee of a university, my contract has something to say about intellectual property rights; I’m also pretty sure I’m not allowed to enter the institution into legally binding contracts. However, it’s par for the course for academics to sign over intellectual property rights in the form of copyright to academic publishers. (I’ve never really been convinced they/we are legally entitled to do so?)

But that’s not the issue here. Strikes are intended to cause disruption to the activities of the organisation the strikers are employed by. We’re on strike. Partly over workloads. Universities benefit from their academics publishing in academic journals in a variety of ways (and yes, I do know I’ve not played my part in this for years, ever since a researcher on a temporary contract I was publishing with was let go; IIRC, I offered 10% of my salary, 20% if need be, to help keep them on till we managed to find some funding, even though internal money was around at the time; it would have been in my interest, academically speaking and career progression wise…).

So… the strike is an opportunity to raise concerns through causing disruption.

One of the current strike concerns is workload. Universities either value academic publishing or they don’t. If they do, providing time in work time to publish is part of that contract. On the other hand, an academic makes themselves more employable by having a better publishing record, so using strike time on “personal brand boosting” academic publishing gives the academic power when it comes to personal negotiations with the academy, for example over salary grading, or when threatening to leave. (Many universities, I think, can suddenly find a Chair to offer to someone who has been offered a Chair elsewhere in an attempt to retain them…)

But if workload is a legitimate issue, then engaging in an activity that an institution may sideline, on the grounds that it knows academics will pursue it in their own personal time (including strike time), seems counter to the strike’s concerns?

Academic publishers and conferences may actually benefit from the strike too, in terms of time being freed up by strike action for such activity (Lorna Campbell posted eloquently on a related dilemma yesterday in terms of what to do regarding attendance of events taking place during, but booked prior to, strike action being called: Where to draw the line?).

Whilst the strike is directed at the employers rather than the publishers, when it comes to workload, surely the way the employer-publisher complex is organised is part of the problem? So should the strike not also be directed at the publishers? If journal issues or conference plans are disrupted, isn’t that part of the point? (And yes, I do know: many academic conferences are organised by academics; I used to organise workshop sessions myself; but some also have a commercial element…)

Another of the issues the union keeps returning to is the question of pensions. Academic authors, signing away as they do intellectual property rights that may be theirs, or may be their employers’, also sign away pension pin money in the form of royalties they might otherwise have received.

Whilst teaching myself R a few years ago, I kept notes and published them as a self-published book on Leanpub. The royalties from it only ever trickled in, but they cover my Dropbox and WRC+ subscription costs and buy me the odd ticket to go and see the touring cars or historics. At the time, I started sketching out how many self-published books I’d need to eke out a living on; I had enough blog posts on Gephi, OpenRefine and various data journalism recipes to be able to pull a couple of manuals together in quite quick time, but figured I’d probably need to crank out a quick manual every couple of months to make a go of it and rely on organic sales without engaging in any marketing activity.

One of the struggles I have with strikes is knowing how to spend my time whilst on strike given that I am supposed to remain available for work, and then deliberately withdraw my labour, rather than take the time as a de facto holiday. Idly wondering about what the point of the strike is, and what it’s supposed to achieve, is part of the strike action I take (as I realise from previous posts on strike days, such as On (“)Strike(“) <- once again, WordPress misbehaves…).

And one thing this post has got me wondering about is: should academics go on strike against the publishers?

PS thinks: one of the purposes of strike disruption is to get folk who may be being disrupted but who sympathise with your cause to help lobby on your behalf. If academic strikes against employers also mean not supplying publishers, the publishers may then also start to lobby the employers on behalf of the striking academics because they don’t want their businesses disrupted… Hmm… Strange bedfellows… My enemy’s enemy is my friend…

PPS Double thinks: not publishing affects the REF, so by not using strike time to get ahead with a research paper, you put more pressure on the organisation, which feels its REF returns may get hit? Rather than using the strike time to potentially improve the institution’s REF return? (And yes, I know: as well as your own… But strikes do involve self-sacrifice; that’s also part of the point: that you are willing to do something that may cause you short-term harm on the way to improving conditions for everyone in the longer term.)

On (Not) Working With Open Source Software Packages

An aside observation on working with open source software packages (which I benefit from on a daily basis). The following is not intended as a particular criticism; it’s me reflecting on things I think I’ve spotted and which may help me contribute back more effectively.

There are probably lots of ways of slicing and dicing how folk engage with open source projects, but I’m going to cut it this way:

  • maintainer;
  • contributor;
  • interested user.

The maintainer owns the repo and has the ultimate say; a contributor is someone who provides pull requests (PRs) and as such, tries to contribute code in; an interested user is someone who uses the package and knows the repo exists…

The maintainer is ultimately responsible for whether PRs are accepted.

I generally class myself as an interested user; if I find a problem, I try to raise a sensible issue; I also probably abuse issues by chipping in feature requests or asking support questions that may be better asked on Stack Overflow or within a project’s chat community or forums if it has them. (The problem with the latter is that sometimes they can be hard to find, sometimes they require sign on / auth; if I submit an issue to them, it’s also yet another place I need to keep track of to look for replies.)

On occasion, I do come up with code fragments that I share back into issues; on rare occasions, I make PRs.

The reasons I don’t step up more to “contributor” level are severalfold:

  • my code sucks;
  • I have a style problem…
    • I don’t use linters, though this is something I need to address;
    • I don’t really know how to run a linter properly over a codebase;
  • I don’t know how to:
    a) write tests;
    b) write tests properly;
    c) run tests over a codebase.
  • I don’t read documentation as thoroughly as perhaps I should…

Essentially, my software engineering skills suck. And yes, I know this is something I could / should work on, but I am really habituated to my own bad practice, stream-of-consciousness coding style…
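By way of a crib to my future self, a minimal pytest-style test looks something like the following (the slugify function is just a made-up example). Running pytest in a project root autodiscovers files named test_*.py, and something like flake8 . runs a lint pass over a codebase:

    # test_slugify.py -- a minimal pytest example; pytest autodiscovers
    # test_*.py files and the test_* functions within them, no boilerplate needed.
    def slugify(text):
        """A made-up function under test."""
        return text.lower().replace(" ", "-")

    def test_slugify():
        assert slugify("Hello World") == "hello-world"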

One of the things I have noticed about stepping up is that it can be hard to step up all the way, particularly in projects where the software engineering standards of the maintainer are enforced by the maintainer, and the contributors’ contributions (for whatever reason: lack of time; lack of knowledge; lack of skills) don’t meet those standards.

What this means is that a PR may work for the contributor but not meet the standards of the maintainer, and so it just sits there, unaccepted, for months or years.

For the interested user, if they want the functionality of the PR, they may then be forced into using the fork created by the contributor.

However, a downside of this is that the PR may have been created by the contributor to fix an immediate need: it does the job they needed at the time, they use it and move on, but they chip the PR in as a goodwill gesture.

In such a case, the contributor may not have a long term commitment to the package (they may just have needed it for a one off), so building in tests that integrate well with the current test suite may be an additional overhead. (You could argue that they should have written tests anyway, but if it was a one off they may have been coding fast and using a “does it work?” metric as an implicit test on just the situation they needed the code to work in. Which raises another issue: a contributor may need code to work in a special case, but the maintainer needs it to work in the general case.)

For the contributor who just wanted to get something working, ensuring that the code style meets the maintainer’s standards is another overhead.

The commitment of the contributor to the project (and by that, I also mean their commitment in the sense of using the package regularly rather than as a one off, or perhaps more subtly, their commitment to using both the package and their PR regularly) perhaps has an impact on whether they value the PR actually making it into master. If they are likely to use the feature regularly, it’s in their interest to see it get into the main codebase. If they use it as a one off, or only irregularly, their original PR may suffice. A downside of this is that over time, the code in the PR may well start to lag behind the code in master. Which can cause a problem for a user who wants to use both the latest master features and the niche feature (implemented off a now deprecated master) in the PR.

For the contributor, they may also not want to have to continue to maintain their contribution, and the maintainer may well have the same feeling: they’re happy to include the code but don’t necessarily want to have to maintain it, or even build on it (one good reason for writing packages that support plugin mechanisms, maybe? Extensions are maintained outside the core project and plugged in as required.)

By the by, here are a couple of examples that illustrate this, in case I return to this idea and try to pick it apart a bit further and test it against actual projects (I’m not intending to be critical of either the packages or the project participants; I use both these packages and value them highly; they just flag up issues I notice as a user):

  • integrating OpenSheetMusic (a javascript music score viewer that is ideal for rendering sheet music in Jupyter notebooks) into music21; an issue resulted in code that made it as far as a PR that was rejected, iterated on, but still fails a couple of minor checks…
  • hiding the display of a code cell in documentation generated by nbsphinx. There are several related issues (for example, this one, which refers to a couple of others) and two PRs, one of which has been sitting there for three years…

Now it may be that in the above cases, the issues are both niche and relate to enabling or opening up ways of using the original packages that go beyond the original project’s mission, and the PRs are perhaps ways of the contributor co-opting the package to do something it wasn’t originally intended to do.

For example, the OpenSheetMusic display PR is really powerful for users wanting to use music21 in a Jupyter notebook, but this may be an environment that the current package community doesn’t use. Whilst the PR may make the package more likely to be used by notebook users and grow the community, it’s not core to the current community. (TBH, I haven’t really looked at how the music21 package has been used: a) at all, b) in the notebook community, for the last year or so. The lack of OpenSheetMusic support has been one reason why I drifted away from looking at music packages…)

In the case of nbsphinx, which was perhaps developed as a documentation production tool, and as such benefits from code always being displayed, the ability to hide input cells makes it really useful as a tool for publishing pages where the code is used to generate assets that are displayed in the page, but the means of production of those assets does not need to be shown. For example, a page that embeds a map generated from code: the intention is to publish the map, not show the code that demonstrates how to produce the map. (Note: hiding input can work in three ways: a) the input is completely removed from the published doc; b) the input is in the doc, but commented out, so it is not displayed in the rendered form; c) the code is hidden in the rendered form but can also be revealed.)

In both the above cases, I wonder whether the PR going outside the current community’s needs provides one of the reasons why the PRs don’t get integrated? For example, the PR might open the package to a community that doesn’t currently use the package, by enabling a necessary feature required by that new community. The original community may see the new use as “out-of-scope”, but under this lens we might ask: is there a question of territoriality in play? (“This package is not for that…”)

Republishing OpenLearn Materials In Markdown – Next Steps Taken…

Following on from yesterday’s post, I made a little more progress today trying to sort out a workflow.

First up I had a look at my binder-base-boxes to see if I could automate the building of those using repo2docker. It seems I can and there is an example build at binder-examples/continuous-build as referenced from the repo2docker docs: Using repo2docker as part of your Continuous Integration.

I needed to make a slight tweak to the CircleCI config to allow pushing containers built in repo branches to DockerHub, but it was easy enough to spot where (removing the lines that limited builds to only run in master). There’s also a slight complication in that my choice of Github repo name has a - in it, and said symbol is disallowed in DockerHub repo names; so rather than just lazily use the repo orgname when pushing the image, I had to set another org name (without the -) as an env var in my CircleCI project profile that the script could pull on (support for this is built in to the script). I also added a tweak to the container naming to use the branch name as the container image tag. There’s an example box here: binder-base-boxes:chemistry, though I haven’t tried to use it as part of a CircleCI build yet… (I guess I need to check it includes the packages CircleCI requires…) The associated DockerHub repo is here.

So that’s one dangling jigsaw piece…

I also created a template repo for publishing Github Pages sites using nbsphinx under CircleCI. This should have all you need to get going dumping a load of .md files into a repo and then automatically publishing it under CircleCI to Github Pages. (Actually, I probably need to add a few docs to the README…) There’s an example repo here — markdown version of OpenLearn course: The molecular world and site here: The molecular world – OpenLearn Reimagined.

Next on the to do list:

  • automatically generate a simple index.rst file;
  • sort out image dereferencing for nested directories (path to a common image dir);
  • put together a reusable script or CLI tool that, given an OpenLearn course URL, can download the OU-XML source of the corresponding OpenLearn module and generate a set of markdown documents from it, with dereferenced image links.

What this would then do is make it easy for anyone to convert an OpenLearn course that has a source OU-XML document to an equivalent markdown source site that can be automatically republished as an HTML site and that they can edit directly in the markdown source on Github.
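For what it’s worth, the shape of the CLI tool I have in mind is something like the following sketch (click-based; the two helper functions are hypothetical stubs standing in for the scrappy code currently spread across my notebooks):

    # ouxml2md_cli.py -- a hypothetical sketch, not working code: the helpers
    # are stubs for logic that currently lives in scrappy notebooks.
    import click
    import requests

    def ouxml_url_for(course_url):
        """Find the OU-XML source URL for an OpenLearn course page (to be written)."""
        raise NotImplementedError

    def ouxml2md(xml, outdir):
        """Generate markdown, with dereferenced image links, from OU-XML (to be written)."""
        raise NotImplementedError

    @click.command()
    @click.argument("course_url")
    @click.option("--outdir", default="md", help="Directory to write markdown files into.")
    def cli(course_url, outdir):
        """Download the OU-XML for an OpenLearn course and generate markdown from it."""
        xml = requests.get(ouxml_url_for(course_url)).content
        ouxml2md(xml, outdir)

    if __name__ == "__main__":
        cli()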

The other major workflow issue I need to sort out is how best to manage “Binder” environments required to execute documents via Jupytext as part of the nbsphinx publishing step. (The chemistry base box takes quite a long time to build, for example, so if it’s used to build pages as part of an nbsphinx workflow it would be good to be able to pull a cached build in CircleCI (I really need to get my head round CircleCI caching) or use a prebuilt Docker image.)

There’s also thinking that needs doing about the differences between a publishing step where a notebook is executed and generates eg some HTML/JS that can be embedded and work standalone as an interactive on Github Pages vs. interactive widgets that need a Jupyter server on the back end to work. I’ve already spotted at least one opportunity for recasting an ipywidgets decorated function that generates views over different 3D molecules to a simple “pure” JS display that works without the need for the py function on the backend. Related to this I need to explore ThebeLab and nbinteract support in nbsphinx. If anyone has demos, please share… :-)

OER Text Publishing Workflows Rooted on OpenLearn OU-XML Via Github, CircleCI and Github Pages Using Jupytext and nbSphinx

Slowly, slowly, my recipes are coming together for generating markdown from OU-XML sourced, variously, from modules on the OU VLE and units on OpenLearn.

The code needs a couple more passes through it, but at some point I should be able to pull a simple CLI together (hopefully!). I’m still manually running some handcranked steps spread across a couple of notebooks at the moment :-(

So… where am I currently at?

First up, I have chunks of code that can generate markdown from OU-XML, sort of. The XSLT is still a bit ropey (lists are occasionally broken [FIXED], for example repeating the text) and the image link reconciliation for OpenLearn images doesn’t work, although I may have a way of accessing the images directly from the OU-XML image paths. (There could still be image rights issues if I was archiving the images in my own repo, which perhaps counts as a redistribution step…?)
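The core of the markdown generation is just an XSLT application, something along these lines (a minimal lxml sketch; the stylesheet and source filenames are placeholders):

    # Apply an XSLT stylesheet to an OU-XML document using lxml.
    from lxml import etree

    transform = etree.XSLT(etree.parse("ouxml2md.xslt"))    # placeholder stylesheet name
    result = transform(etree.parse("openlearn_unit.xml"))   # placeholder source doc
    print(str(result))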

The markdown can be handled in various ways.

Firstly, it can be edited/viewed as markdown. Chatting to colleague Jon Rosewell the other day, I realised that JupyterLab provides one way of editing and previewing markdown: in the JupyterLab file browser, right click on an .md file and you should be able to preview it.

There is also a WYSIWYG editor extension for JupyterLab (which looks like it may enter core at some point): Jupyter Scribe / jupyterlab-richtext-mode.

If you have Jupytext installed, then clicking on an .md file in the notebook tree browser opens the document into a Jupyter notebook editor, where markdown and code cells can be edited separately. An .ipynb file can then be downloaded from the notebook editor, and/or Jupytext can be used to pair markdown and .ipynb docs from the notebook file menu if you install the Jupytext notebook extension. Jupytext can also be called on the command line to convert .md to .ipynb files. If the markdown file is prefaced with Jupytext YAML metadata (i.e. if the markdown file is a “Jupytext markdown” file), then notebook metadata (which includes cell tags, for example) is preserved in the markdown and can be used for round-tripping between markdown and notebook document formats. (This is handy for RISE slideshows, for example; the slide tags are preserved in the markdown so you can edit a RISE slideshow as a markdown document and then present it via Jupytext and a notebook server.)
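For reference, the command line conversion is just jupytext --to notebook lesson.md; the Python API equivalent is simple enough too (the filenames are made up):

    # Convert a (Jupytext) markdown document to .ipynb via the Jupytext Python API.
    import jupytext

    nb = jupytext.read("lesson.md")     # parse the markdown file as a notebook
    jupytext.write(nb, "lesson.ipynb")  # ...and write it back out as .ipynb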

In a couple of simple tests I tried, the .ipynb generated from markdown using Jupytext seemed to open okay in the new Netflix Polynote notebook application (early review). This is handy, because Polynote has a WYSIWYG markdown editor… So for anyone who gripes that notebooks are too hard because writing markdown is too hard, this provides an alternative.

I also note that the wrong code language has been selected (presumably the default, in the absence of any specified language?), so I need to make sure I do tag code cells with a default language somehow… I wonder if Jupytext can do that?

Having a bunch of markdown documents, or notebooks derived from markdown documents using Jupytext is one thing, providing as it does a set of documents that can be easily edited and interacted with, albeit in the context of a Jupyter notebook server.

However, we can also generate HTML websites based on those documents using tools such as Jupyter Book and nbsphinx. Jupyter Book uses a Jekyll engine to build HTML sites, which is a bit of a pain (I noted a demo here that used CircleCI to build a site from notebooks and md using Jupyter Book), but the nbsphinx Python package that extends the (also pip installable) Sphinx documentation engine is a much easier proposition…

As a proof-of-concept demo, the ouseful-oer/openlearn-learntocode repo contains markdown files generated from the OpenLearn Learn to code for data analysis course.

Whenever the master branch on the repository is updated, CircleCI kicks in and uses nbsphinx to build a documentation site from the markdown docs and pushes them to the repository’s gh-pages branch, which makes the site available via Github Pages: “Learn To Code…” on Github Pages.

What this means is that I should be able to edit the markdown directly via the Github website, or using an online editor such as prose.io connected to my Github account, commit changes and then let CircleCI rebuild the site for me.

I’m pretty sure I haven’t set things up as efficiently as I could in terms of CI; what I would like is for only things that have changed to be rebuilt, but as it is, everything gets rebuilt (although the installed Python environment should be cached?). Hints / tips / suggestions about improving my CircleCI config.yml file would be much appreciated…

At the moment, nbsphinx is set up to run .md files through Jupytext to convert them to .ipynb, which nbsphinx then eventually churns back to HTML. I’ve also disabled code cell execution in the current set up (which means the routing through .ipynb in this instance is superfluous – the site could just be generated from the .md files). But the principle is there: at the flick of a switch, the code cells could be executed and their outputs immortalised in the published site HTML.
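For reference, the relevant bits of the Sphinx conf.py look something like this (as I read the nbsphinx docs; nbsphinx_execute is the switch in question):

    # Sphinx conf.py fragment: parse .md files as notebooks via Jupytext,
    # and disable code cell execution during the build.
    import jupytext

    extensions = ["nbsphinx"]

    nbsphinx_custom_formats = {
        ".md": lambda s: jupytext.reads(s, fmt="md"),
    }
    nbsphinx_execute = "never"  # "auto" or "always" to execute code cells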

So… what next?

I need to automate the production of the root index file (index.rst) so that the table of contents is built from the parsed OU-XML. I think Sphinx handles navigation menu nesting based on header levels, which is a bit of a pain in the demo site. (It would be nice if there were a Sphinx trick that lets me increase the de facto heading level for files in a subdirectory so that in the navigation sidebar menu each week’s content could be given its own heading and the week’s pages listed as child pages within that. Is there such a trick?)
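As a stop-gap before parsing the table of contents out of the OU-XML, something as naive as the following would probably do (the title is a placeholder and the ordering is just filename order):

    # Quick and dirty: generate a root index.rst with a flat toctree from
    # the .md files in the current directory, naively ordered by filename.
    from pathlib import Path

    docs = sorted(p.stem for p in Path(".").glob("*.md"))

    lines = ["The Molecular World", "===================", "",
             ".. toctree::", "   :maxdepth: 2", ""]
    lines += [f"   {doc}" for doc in docs]

    Path("index.rst").write_text("\n".join(lines) + "\n")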

Slowly, slowly, I can see the pieces coming together. A tool chain looks possible that will:

  • download OU-XML;
  • generate markdown;
  • optionally, cast markdown as notebook files (via jupytext);
  • publish markdown / (un)executed notebooks (via nbsphinx).

A couple of next steps I want to tack on to the end as and when I get a chance and top up my creative energy levels: firstly, a routine that will wrap the published pages in an electron app for different platforms (Mac, Windows, Linux); secondly, publishing the content to different formats (for example, PDF, ebook) as well as HTML.

I also need to find a way of adding interaction — as Jupyter Book does — integrating something like ThebeLab or nbinteract buttons to support in-page code execution (ThebeLab) and interactive widgets (nbinteract).

Fragment: Transit Mapping

Noticing @edent’s take on a semantic tube map using data from wikidata, I started wondering (again?) about transit map layout engines.

There’s theory behind it (eg Martin Nöllenburg’s Automated Drawing of Metro Maps (2005)) and examples of folk having built tools to support automated layout (eg Automatic layout of schematic transit maps and Transportation maps – creation by optimisation), but I haven’t found a layout engine package that I can make use of (something that plays nice with networkx, and perhaps complements osmnx for getting data out of OpenStreetMap, would be nice… Perhaps even a netwulf filter for laying out transit maps via a GUI?).

Poking around, public-transport/generating-transit-maps links to a couple of repos that style an optimised graph for a couple of German transit routes. There’s a couple of links to possible optimisers / layout engines: this solution in Julia — dirkschumacher/TransitmapSolver.jl — and this nodejs application — juliuste/transit-map. The latter uses a commercial solver, Gurobi, but this post on Optimization Modeling in Python: PuLP, Gurobi, and CPLEX shows equivalent solutions to a (different) optimisation problem using both Gurobi and a free Python solver, PuLP. So it might be straightforward enough to create a Py equivalent of the nodejs solver?
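I haven’t tried porting the transit map model, but for a flavour of PuLP, a toy problem only takes a few lines (this is a trivial LP, nothing to do with transit maps):

    # A toy linear programme in PuLP, using the bundled free CBC solver;
    # just to show the flavour of the API.
    from pulp import LpMaximize, LpProblem, LpVariable

    prob = LpProblem("toy", LpMaximize)
    x = LpVariable("x", lowBound=0)
    y = LpVariable("y", lowBound=0)

    prob += 3 * x + 2 * y  # the first expression added is the objective
    prob += x + y <= 4     # constraints...
    prob += x <= 2

    prob.solve()
    print(x.value(), y.value())  # 2.0 2.0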

Here’s a more recent package, again in node: gipong/automatic-metro-map.

As far as styling goes, there look to be various things out there. For example, this d3-tube-map, and another: d3-tube. And an HTML5 canvas solution.

Not quite what I had in mind re: layouts, but… wha’….? London tube network as a git graph?!

In passing, having got a map, animating it might be nice… Here’s an old animation package — vasile/transit-map — that might help with that? Or maybe something lifted from this transport routing demo (the Cesium variant does animations along a route I think? More here).

For some 3D relief, harp.gl; and though not directly relevant, I do still like these ridge maps.

Querying DBPedia Linked Data From Jupyter Notebooks – Music Genres Related to Heavy Metal and Music Venues in England

Some time ago I did some examples of querying DBPedia to find related music genres (Mapping Related Musical Genres on Wikipedia/DBPedia With Gephi) as well as other sorts of influence networks (eg programming languages).

After visiting the Black Sabbath exhibition in Birmingham recently (following the awesome Dawn After Dark gig supporting Balaam and the Angel) which had the most dubious of “metal” relationship maps on display in the shop, I thought I’d see how Wikipedia, via DBpedia, mapped that area out.

Gephi’s getting a bit long in the tooth now (netwulf is starting to look handy as a tool for styling networkx graphs; works in Jupyter notebooks too… more on this in another post…) and my original recipe seems to have broken (plus WordPress keeps crapping on the code, removing angle brackets etc), so I started scribbling notes around a recipe for trying to map band genres; it’s ages since I’ve had to try writing SPARQL queries, the notes are very scrappy / fragmentary, and some of the queries are quite possibly nonsense; but FWIW, you can find the notes here: Linked Data bands. I’ll try to revisit them and produce some tidier recipes at some point…

I still used Gephi to render the network, though (this was before I found netwulf…). As an example, here’s a map of genres related to heavy metal music [original svg].

I also started wondering about what other live music related things I might be able to dredge up out of DBPedia queries. One of the categories used to tag entities in DBPedia is Music_venues_in_England, and from that, music venues in other, smaller locales; venues are also tagged with geo-coordinates (latitude and longitude values), so we can quite easily run a query for music venues in England and from that generate a map.

An example notebook is here: Venues – Linked Data.ipynb. A preview of the map can be found rendered from the gist here.
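For a flavour of the recipe, a minimal sketch along the following lines (assuming the SPARQLWrapper and folium packages; the notebook’s actual query differs):

    # Query DBpedia for music venues in England and drop them onto a folium map.
    from SPARQLWrapper import SPARQLWrapper, JSON
    import folium

    sparql = SPARQLWrapper("http://dbpedia.org/sparql")
    sparql.setQuery("""
        PREFIX dct: <http://purl.org/dc/terms/>
        PREFIX dbc: <http://dbpedia.org/resource/Category:>
        PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
        SELECT ?venue ?lat ?long WHERE {
            ?venue dct:subject dbc:Music_venues_in_England ;
                   geo:lat ?lat ; geo:long ?long .
        }
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()

    m = folium.Map(location=[52.5, -1.5], zoom_start=6)
    for r in results["results"]["bindings"]:
        folium.Marker([float(r["lat"]["value"]), float(r["long"]["value"])],
                      popup=r["venue"]["value"]).add_to(m)
    m.save("venues.html")  # or just end a notebook cell with m to render inline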

As I’ve noted previously (an insight I think Martin Hawksey first made me grok fully), visualisations like this can be great for spotting errors, or gaps, in datasets. For example, Southampton has several other excellent venues aside from the Joiners Arms, and on the Isle of Wight, we have Strings as our local indie venue. [Actually, that might be the wrong version of the map; several venues that I thought were in the map I recall aren’t on that map…]

PS this is neat, from Terence Eden / @edent, a semantic take on the London Tube Map using data from Wikidata, with lines relating topical categories and stations to people’s names: The Great(er) Bear – using Wikidata to generate better artwork. [Picking up on that, some notes on transit mapping.]

Convert Notebook to ThebeLab HTML

A watched issue on Github reminded me of something I’d forgotten I’d started to look at — an nbconvert template to convert an .ipynb file to an HTML page that could execute the code against a specified MyBinder provided environment.

(Apparently, the Jupyter Book jupyter-book page path/to/notebook.ipynb command should achieve much the same thing, though I’m not sure where you have to set the Binder repo URL. In a config file?)

FWIW, here’s the nbconvert template I’d started to sketch (angle bracketed tags restored after WordPress ate them):

{% extends 'full.tpl'%}

{% block header %}

{{ super() }}

<script type="text/x-thebe-config">
  {
    requestKernel: true,
    binderOptions: {
      repo: "binder-examples/requirements",
    },
  }
</script>
<script src="https://unpkg.com/thebelab@latest/lib/index.js"></script>

{% endblock header %}


{%- block body %}
<button id="activateButton">Activate</button>

<script>
var bootstrapThebe = function() {
    thebelab.bootstrap();
}

document.querySelector("#activateButton").addEventListener('click', bootstrapThebe)
</script>

{{ super() }}
{%- endblock body %}
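(For reference, a template along these lines can be applied with something like jupyter nbconvert --to html --template thebelab.tpl notebook.ipynb.)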

It doesn’t quite work at the moment because the <pre> code tag doesn’t carry the correct attributes (though a proposed patch to thebelab may address that).

Chris Holdgraf uses Javascript to rewrite the tags dynamically in a related Jupyter Book template:

            // Find all code cells, replace with Thebelab interactive code cells
            const codeCells = document.querySelectorAll('.input_area pre')
            codeCells.forEach((codeCell, index) => {
                codeCell.setAttribute('data-executable', 'true')
                // Figure out the language it uses and add this too
                var parentDiv = codeCell.parentElement.parentElement;
                var arrayLength = parentDiv.classList.length;
                for (var ii = 0; ii < arrayLength; ii++) {
                    var parts = parentDiv.classList[ii].split('language-');
                    if (parts.length === 2) {
                        // If found, assign dataLanguage and break the loop
                        var dataLanguage = parts[1];
                        break;
                    }
                }
                codeCell.setAttribute('data-language', dataLanguage)
                // If the code cell is hidden, show it
                var inputCheckbox = document.querySelector(`input#hidebtn${codeCell.id}`);
                if (inputCheckbox !== null) {
                    setCodeCellVisibility(inputCheckbox, 'visible');
                }
            });

This can be put in a <script> tag immediately before the Jinja {%- endblock body %} directive.

Example gist here.

Track this issue for more…