The programming-related courses I work on are probably best described as introductory programming courses. Students are taught using a “line of code at a time” approach within a Jupyter notebook environment, which provides a REPL execution model. Students are encouraged to write a line of code in a cell, run it, and then inspect the state changes arising from the code execution as displayed in the code cell output. Markdown cells before a code cell are used to explain the motivation for the next bit of code, or to prompt students to predict what they think it will do; markdown cells following a code cell can be used to review or explain what just happened, or to prompt students to reflect on what they think happened.
In passing, I note that there are other models for providing text+code style annotations. For example, the pycco-docs/pycco package will render side-by-side comments and code. The view is generated from Python files containing inline comments and docstrings.
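I haven’t reproduced the pycco output here, but as a rough sketch of the kind of source it consumes (the filename and function below are made up purely for illustration), a comment-heavy Python file such as the following can be rendered with `pycco example.py`, which, if I remember the default correctly, writes an HTML page into a `docs/` directory:

```python
# example.py -- a hypothetical file to illustrate pycco-style markup.
# pycco lifts comments like these into the narrative column and shows
# the corresponding code alongside them.

def count_vowels(text):
    """Count the vowels in a string."""
    # Normalise case so we only need to test against one vowel set.
    text = text.lower()
    # Count matching characters with a generator expression.
    return sum(1 for char in text if char in "aeiou")


# A quick smoke test of the function.
print(count_vowels("Jupyter notebook"))
```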
Something I haven’t yet tried is a workflow that renders the side-by-side view from a Python file generated from a Jupyter notebook using the jupytext file converter (I’m not sure if jupytext can generate Python files using the comment markup conventions that pycco expects?).
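The notebook-to-script leg of that round trip is straightforward enough; the sketch below uses the jupytext Python API (the filenames are placeholders), and the same conversion can be run from the command line with something like `jupytext --to py:light notebook.ipynb`:

```python
import jupytext

# Read a notebook and write it back out as a Python script;
# the "py:light" format keeps markdown cells as plain comments.
notebook = jupytext.read("notebook.ipynb")
jupytext.write(notebook, "notebook.py", fmt="py:light")
```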
For simple code blocks, tools such as nbtutor provide a simple code stepper and tracer that can be used to explore the behaviour of a few lines of code.
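As a minimal sketch (assuming nbtutor is installed and the extension has been loaded with a one-off `%load_ext nbtutor` in an earlier cell), a cell prefixed with the cell magic gets the stepper UI; I think the `-r` and `-f` flags reset the state and force a full trace, but check the nbtutor docs:

```python
%%nbtutor -r -f
# Step through a simple loop; nbtutor displays the frame and heap state
# at each step below the cell.
total = 0
for i in range(5):
    total += i
print(total)
```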

I use nbtutor in some first year undergraduate notebooks and it’s okay-ish (unfortunately, it can break in combination with some other widgets running in the same notebook).
Another approach I am keen to explore in terms of helping students help themselves when it comes to understanding code they have written is the automated generation of simple flowchart visualisations from code fragments (see for example Helping Learners Look at Their Code).
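One package that can do this, though I haven’t committed to it and it isn’t necessarily the approach taken in the post linked above, is pyflowchart, which generates flowchart.js markup from a code fragment; a sketch of the sort of usage I have in mind (the fragment is arbitrary):

```python
from pyflowchart import Flowchart

# An arbitrary fragment of student-style code.
code = """
total = 0
for i in range(10):
    if i % 2 == 0:
        total += i
print(total)
"""

# Generate flowchart.js markup from the fragment; the output can be
# rendered by a flowchart.js widget or the flowchart.js online editor.
fc = Flowchart.from_code(code)
print(fc.flowchart())
```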
Poking around looking for various Python packages that can help animate or visualise common algorithms (Bjarten/alvito is one; anyone got suggestions for others?), I came across a couple of other code stepping tools produced by Alex Hall / @alexmojaki.
The first one is alexmojaki/birdseye, which can provide a step trace for code executed in a magicked notebook cell block.
You can also separately step through nested loops.
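As a sketch of how the notebook integration is wired up (assuming the package is installed and the extension loaded with a one-off `%load_ext birdseye`), a cell prefixed with the `%%eye` magic gets the step trace displayed beneath it:

```python
%%eye
# Trace a small function call; birdseye records the value of each
# expression so you can step back through the nested loops.
def multiplication_table(n):
    rows = []
    for i in range(1, n + 1):
        row = []
        for j in range(1, n + 1):
            row.append(i * j)
        rows.append(row)
    return rows

multiplication_table(3)
```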
Another tool, alexmojaki/snoop, will give a linear trace from an executed code cell.
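The simplest way in is probably the decorator form, which I sketch below; there is also, I believe, an IPython extension with a `%%snoop` cell magic that traces a whole cell:

```python
import snoop

@snoop
def bubble_pass(values):
    # snoop logs each line as it executes, along with variable changes,
    # giving a linear trace of the run.
    for i in range(len(values) - 1):
        if values[i] > values[i + 1]:
            values[i], values[i + 1] = values[i + 1], values[i]
    return values

bubble_pass([3, 1, 2])
```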
Alex also has a handy package for helping identify out of date Python packages based on the latest version available on PyPI: alexmojaki/outdated.
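Usage is about as simple as it gets; the package name and version below are just placeholders:

```python
from outdated import check_outdated

# Compare an installed version against the latest release on PyPI.
is_outdated, latest_version = check_outdated("pandas", "1.0.0")
print(is_outdated, latest_version)
```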
When it comes to Python errors, for years we have used the Jupyter skip-traceback extension to minimise the traceback message displayed when an error is raised in a Jupyter notebook. However, there are various packages out there that attempt to provide more helpful error messages, such as SylvainDe/DidYouMean-Python (which is currently broken from the install – I think the package needs its internal paths fettling a bit!) and friendly-traceback. The latter package tidies up the display of error messages.
Note that the pink gutter to indicate failed cell execution comes from the innovationOUtside/nb_cell_execution_status extension.
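For reference, and as a minimal sketch rather than the exact setup behind the screenshots, switching the friendlier output on is essentially a one-liner:

```python
import friendly_traceback

# Replace the default exception display with friendly-traceback's version;
# any error raised after this point gets the tidied-up explanation.
friendly_traceback.install()
```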
You can then explore in more detail what the issue is and, in some cases, how you might be able to fix it. You can also start to tunnel down for more detail about the error.
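If I’ve read the docs correctly, the explanation for the most recent error can also be re-generated on demand, and there are helper functions (what(), why() and where(), if I remember the names right) for pulling out just one part of it; a minimal sketch:

```python
# Re-display the full friendly explanation for the most recent traceback.
friendly_traceback.explain_traceback()
```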
This extension looks like it could be really handy in a first year undergraduate introductory programming module, but the aesthetic may be a bit simplistic for higher level courses.
From the repo, friendly-traceback/friendly-traceback, it looks like it shouldn’t be too hard to create your own messages.
This does make me wonder whether a language pack approach might be useful: as well as allowing for internationalisation, it could make it easy to maintain custom message packs for particular teaching and learning use cases.
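There is already some machinery pointing in that direction: the explanations are translated via language files, and switching language is (I think) just:

```python
import friendly_traceback

# Switch the language used for the explanations; a custom "teaching"
# message pack could presumably hook in at a similar level.
friendly_traceback.set_lang("fr")
```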
With a couple of new modules presenting for the first time this year, I would argue we missed an opportunity to explore some of these ideas, using the technology as an illustrator of what’s going on in code we give to students and, more importantly, in code that students might write for themselves.
There are several reasons why I think this probably hasn’t happened:
- no time to explore this sort of thing (with two years+ to produce a course, you might want to debate that…);
- no capacity in a module team to explore and test new approaches (I’d argue that’s our job as much as producing teaching material if the org is a beacon of best practice in the development and delivery of interactive online distance education materials);
- no capacity in support units to either research their effectiveness or explore such approaches and make recommendations into module teams about how they might be adopted and used, along with an examples gallery and sample worked examples based on current draft materials (I honestly wonder about where all the value add we used to get from support units years ago has gone and why folk don’t think we are the worse for not having units that explore emerging tech for teaching and learning. Folk too busy doing crapalytics and bollockschain, I guess);
- and I guess: “what value does it add anyway?” (which is to say: “why should we explore new ways of teaching and learning?”) and “you’re just chasing ooh, shiny” (which really doesn’t fit with 2+ year production cycles and material updates every five years, where locking into an emerging technology is high risk because, rather than regularly updating around it, you are stuck with it for potentially up to a decade (2 years production, five years primary course life, 3 years course life extension)).
Bored, bored, bored, bored, bored…