Simple Javascript Multiple Choice And Literal Answer Quizzes in Jupyter Notebooks and Jupyter Book

In Helping Students Make Sense of Code Execution and Their Own Broken Code I described a handful of interactive tools to help students have a conversation with a code fragment about what it was doing. In this notebook, I’ll consider another form of interactivity that we can bring to bear in live notebooks as well as static Jupyter Book outputs: simple multiple choice and literal answer quizzes (which I like) and flashcards (which I’ve never really got on with).

Both examples are created by John Shea / @jmshea, whose Intro to Data Science for Engineers Jupyter Book demonstrates some interesting practice.

Keen observers of this blog will note I don’t tend to link to demos of OU materials (only my own that I have drafted in public). That’s because they’re generally behind authentication and only available to paying students. Unlike OU print materials of yore, which could be found in many College libraries, purchased as standalone study packs, or bought second hand from University Book Search. Some materials are on Open Learn, and I keep meaning to give some of them “a treatment” to show other, and to my mind more engaging, ways in which we could present them… When the next strike comes around, maybe…

The jmshea/jupyterquiz package provides support for a range of simple quiz questions that can be used for untracked formative assessment.

Try a Jupyter Book rendering here and a previewed Jupyter notebook rendering here.

The first question type currently supported is "multiple_choice", where a single correct answer is expected.

jupyter-quiz – multiple choice

Hints can also be provided in the event of an incorrect answer being provided:

jupyter-quiz – multiple choice with hint on incorrect answer

The second, related "many_choice" question type requires the user to select M correct answers from N choices.

The third quiz type, "numeric", allows a user to check a literal numeric answer and provides a hint if an incorrect answer is given:

jupyter-quiz – literal answer test with hint on incorrect answer

It strikes me that it should be trivial to add an exact string match test and, if suitable Javascript packages are available, simple fuzzy string match tests etc.
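As a rough illustration of what such a check might look like, here is a minimal sketch of a fuzzy string match using only the Python standard library (`difflib`). To be clear, this is not part of jupyterquiz; it just shows the kind of test that could be bolted on, with the threshold value an arbitrary assumption:

```python
# Hedged sketch: a fuzzy string match check of the sort that could be
# added as a quiz answer test. NOT part of jupyterquiz; the 0.8
# threshold is an arbitrary illustrative choice.
from difflib import SequenceMatcher

def fuzzy_match(answer: str, expected: str, threshold: float = 0.8) -> bool:
    """Return True if the (normalised) answer is close enough to the expected string."""
    ratio = SequenceMatcher(
        None, answer.lower().strip(), expected.lower().strip()
    ).ratio()
    return ratio >= threshold

print(fuzzy_match("Markdown", "markdown"))     # exact after normalisation
print(fuzzy_match("Markdwon", "markdown"))     # minor typo still accepted
print(fuzzy_match("Wiki markup", "markdown"))  # clearly different
```

An exact (case-insensitive) string match is just the degenerate case with the ratio required to equal 1.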

The questions and answers can be pulled in from a JSON file (hosted locally or retrieved from a URL) or from a Python dictionary.

Here’s an example of a "multiple_choice" question type:

{
        "question": "Which of these are used to create formatted text in Jupyter notebooks?",
        "type": "multiple_choice",
        "answers": [
            {
                "answer": "Wiki markup",
                "correct": false,
                "feedback": "False."
            },
            {
                "answer": "SVG",
                "correct": false,
                "feedback": "False."
            },
            {
                "answer": "Markdown",
                "correct": true,
                "feedback": "Correct."
            },
            {
                "answer": "Rich Text",
                "correct": false,
                "feedback": "False."
            }
        ]
    },

And a "multiple_choice" question that presents code fragments as the answer options (note the "code" key in place of "answer"):

{
        "question": "The variable mylist is a Python list. Choose which code snippet will append the item 3 to mylist.",
        "type": "multiple_choice",
        "answers": [
            {
                "code": "mylist+=3",
                "correct": false
            },
            {
                "code": "mylist+=[3]",
                "correct": true
            },
            {
                "code": "mylist+={3}",
                "correct": false
            }
        ]
    },
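The sort of question definitions shown above can be maintained in a separate JSON file and loaded into a Python structure before being rendered. A minimal sketch of that round trip (the `questions.json` filename is illustrative; the `display_quiz()` function name is taken from the jupyterquiz README):

```python
# Hedged sketch: maintain quiz questions as JSON, load them into a
# Python list of dicts, and (in a live notebook) render with jupyterquiz.
# The questions.json filename is illustrative.
import json

# A minimal question bank, matching the structure shown above
questions = [
    {
        "question": "Which of these are used to create formatted text in Jupyter notebooks?",
        "type": "multiple_choice",
        "answers": [
            {"answer": "Wiki markup", "correct": False, "feedback": "False."},
            {"answer": "Markdown", "correct": True, "feedback": "Correct."},
        ],
    }
]

# Round-trip via a JSON file, as you might when maintaining questions
# in a separate source file
with open("questions.json", "w") as f:
    json.dump(questions, f, indent=4)

with open("questions.json") as f:
    loaded = json.load(f)

# In a live (trusted) notebook, the questions can then be rendered with:
# from jupyterquiz import display_quiz
# display_quiz(loaded)
```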

The second package, jmshea/jupytercards, supports the embedding of interactive flash cards in Jupyter notebooks and Jupyter Book.

Clicking on the flashcard turns it to show the other side:

You can also transition from one flashcard to the next:

The flashcards can be loaded from a locally or remotely hosted JSON text file listing each flashcard as a simple dictionary:

[
    {
        "front": "outcome (of a random experiment)",
        "back": "An outcome of a random experiment is a result of the experiment that cannot be further decomposed."
    },
    {
        "front": "sample space",
        "back": "The sample space of a random experiment is the set of all possible outcomes."
    },
    {
        "front": "event class",
        "back": "For a sample space $S$ and a probability measure $P$, the event class, denoted by $\\mathcal{F}$, is a collection of all subsets of $S$ to which we will assign probability (i.e., for which $P$ will be defined). The sets in $\\mathcal{F}$ are called events."
    }
]
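If you are generating such decks from other source material, a quick sanity check on the structure before loading them is straightforward. A minimal sketch (the validation logic is my own, not part of jupytercards; the `display_flashcards()` function name is taken from the jupytercards README):

```python
# Hedged sketch: sanity-check a flashcard deck before handing it to
# jupytercards. The validation is illustrative, not part of the package.
import json

cards_json = """
[
    {"front": "sample space",
     "back": "The set of all possible outcomes of a random experiment."},
    {"front": "outcome",
     "back": "A result of the experiment that cannot be further decomposed."}
]
"""

cards = json.loads(cards_json)

# Every card should be a dict with at least "front" and "back" keys
for card in cards:
    assert {"front", "back"} <= card.keys(), f"Malformed card: {card}"

# In a live (trusted) notebook, the deck can then be displayed with:
# from jupytercards import display_flashcards
# display_flashcards(cards)
```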

I’m not sure if you can control the flashcard color, or the font style, color and size?

What I quite like about these activities is that they slot neatly into generative workflows: the questions are easily maintained via a source text file, or via a hidden cell where the JSON data is loaded into a Python dict (I suppose it could even be pulled in from notebook cell metadata). They can then be used in a live (trusted) notebook, a fully rendered notebook (i.e. one rendered by nbviewer, not the Github notebook previewer), or rendered into a Jupyter Book HTML format.

Note to self: add these examples to my Open Jupyter Authoring and Learning Environment (OpenJALE) online HTML book.

Author: Tony Hirst

I'm a Senior Lecturer at The Open University, with an interest in #opendata policy and practice, as well as general web tinkering...