Category: OU2.0

Scratch Materials – Using Blockly Style Resources in Jupyter Notebooks

One of the practical issues associated with using the Scratch desktop application (or its OU fork, OUBuild) for teaching programming is that it runs on the desktop (or perhaps a tablet? It’s an Adobe Air app, which I think runs on iOS?). This means that the instructional material is likely to be separated from the application, either as print or as screen-based instructional material.

OUBuild

If delivered via the same screen as the application, there can be a screen real estate problem when trying to display both the instructional material and the application.

In OUBuild, there can also be issues if you want to have two projects open at the same time, for example to compare a provided solution with your own, or to look back at an earlier project as you create a new one. The workaround is to run two copies of the application, each with its own project open.

Creating instructional materials can also be tricky: it requires capturing screenshots from the application and inserting them into the materials, with the attendant risk that, when the materials are updated, the screenshots captured in the course materials drift from the actual views presented by the application.

So here are a couple of ways in which we might be able to integrate Scratch-like activities and guidance into instructional materials.

Calysto/Metakernel Jigsaw Extension for Jupyter Notebooks

The Calysto/Metakernel Jigsaw extension for Jupyter notebooks wraps the Google Blockly package for use in a Jupyter notebook.

Program code is saved as an XML file, which means you can save and embed multiple copies of the editor within the same Jupyter notebook. This means an example program can be provided in one embed, and the learner can build up the program themselves in another, all in the same page.

The code cell input (the bit that contains the %jigsaw line) can be hidden using the notebook Hide Input Cell extension so only the widget is displayed.
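By way of a crib, getting a workspace up and running looks something like the following (a sketch based on my reading of the metakernel docs; the exact magic arguments are an assumption and may differ between versions):

```python
# Load the metakernel magics into a standard IPython kernel...
from metakernel import register_ipython_magics

register_ipython_magics()
```

Then, in a separate code cell, a line magic along the lines of %jigsaw Python --workspace workspace1 should embed a Blockly editor whose blocks are serialised to an XML file named after the workspace.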

The use of the editor is a bit tricky – it’s easy to accidentally zoom in and out, and I’m guessing it’s not very accessible – but it’s great as a scratchpad, and perhaps as an instructional material authoring environment?

Live example on Binderhub

For more examples, see the original Jigsaw demo video playlist.

For creating instructional materials, we should be able to embed multiple steps of a program in separate cells, hide the code input cells (that is, the %jigsaw lines) and then export or print off the notebook view.

LaTeX Scratch Package

The LaTeX Scratch package provides a way of embedding Blockly style blocks in a document through simple LaTeX script.

Using a suitable magic, we can easily add scripts to the document. (Once again, the code cell input – the cell that contains the lines of LaTeX code – can be hidden using the notebook Hide Input Cell extension so only the rendered blocks are displayed.)
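To give a flavour of the markup, a simple script might be written something like this (an indicative sketch based on my reading of the scratch3 package documentation – the command names are assumptions and vary between versions of the package):

```latex
\documentclass{standalone}
\usepackage{scratch3}
\begin{document}
\begin{scratch}
  % A minimal script: a hat block plus a motion block
  \blockinit{when \greenflag clicked}
  \blockmove{move \ovalnum{10} steps}
\end{scratch}
\end{document}
```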

We can also create scripts in strings and then render those using line magic.

Live example on Binderhub

One thing that might be quite interesting is a parser that could take the XML generated by the Jigsaw extension and generate LaTeX script from it, as well as generating a Jigsaw XML file from the LaTeX script.
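A first step in the XML-to-LaTeX direction might be no more than walking the tree (a rough sketch, assuming Blockly’s standard XML serialisation of nested block elements):

```python
# List the block types used in a Blockly/Jigsaw XML document, as a
# starting point for mapping them onto LaTeX scratch commands.
import xml.etree.ElementTree as ET

def block_types(xml_str):
    root = ET.fromstring(xml_str)
    # Blockly nests <block type="..."> elements; ignore any namespace prefix
    return [el.get("type") for el in root.iter() if el.tag.endswith("block")]
```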

Historical Context

The Scratch rebuild – OU Build – used in the OU’s new level 1 introductory computing course is a cross-platform Adobe Air application. I’d originally argued that, if the earlier decision to use a blocks-style environment was irreversible, the browser-based BlockPy (review and code) application might be a more interesting choice: the application was browser-based, allowed users to toggle between blocks and Python code views, displayed Python error messages in a simplified form, and used a data analysis, rather than animation, context, which meant we could also start to develop data handling skills.

BlockPy

One argument levelled against adopting BlockPy was that it looked to be a one-man band in terms of support, rather than having an established community behind it like Scratch. I’m not sure how much we benefit from, or are of benefit to, the Scratch community though? If OU Build is a fork, we may or may not be able to benefit from any future support updates to the Scratch codebase directly. I don’t think we commit back?

If the inability to render animations had also been a blocker, adding an animation canvas as well as the charting canvas would have been a possibility? (My actual preference was that we should do a bigger project and look to turn BlockPy into a Jupyter client.)

Another approach that is perhaps more interesting from a “killing two birds with one stone” perspective is to teach elementary programming and machine learning principles at the same time. For example, using something like Dale Lane’s excellent Scratch driven Machine Learning for Kids resources.

PS the context coda is not intended to upset, besmirch or provoke anyone involved with OUBuild. It’s self-criticism, directed at myself for not managing to engage/advocate my position/vision in a more articulate or compelling way.

Maybe Programming Isn’t What You Think It Is? Creating Repurposable & Modifiable OERs

With all the “everyone needs to learn programming” hype around, I am trying to be charitable when it comes to what I think folk might mean by this.

For example, whilst trying to get some IPython magic working, I started having a look at TikZ, a LaTeX extension that supports the generation of scientific and mathematical diagrams (and which has been around for well over a decade…).

Getting LaTeX environments up and running can be a bit of a pain, but several of the Binderhub builds I’ve been putting together include LaTeX and TikZ, which means I have an install-free route to trying out snippets of TikZ code.

As an example, my showntell/maths demo includes an OpenLearn_Geometry.ipynb notebook containing a few worked examples of how to “write” some of the figures that appear in an OpenLearn module on geometry.

From the notebook:

The notebook includes several hidden code cells that generate a range of geometric figures. To render the images, go to the Cell menu and select Run All.

To view/hide the code used to generate the figures, click on the Hide/Reveal Code Cell Inputs button in the notebook toolbar.

To make changes to the diagrams, click in the appropriate code input cell, make your change, and then run the cell using the Run Cell (“Play”) button in the toolbar or via the keyboard shortcut SHIFT-ENTER.

Entering Ctrl-Z (or CMD-Z) in the code cell will undo your edits…

Launch the demo notebook server on Binder here.

Here’s an example of one of the written diagrams (there may be better ways; I only started learning how to write this stuff a couple of days ago!)
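Something along these lines, for example (an indicative sketch of the style of code involved, rather than the actual notebook code):

```latex
\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}
  % A simple labelled triangle with a marked angle at A
  \coordinate [label=below left:$A$] (A) at (0,0);
  \coordinate [label=below right:$B$] (B) at (4,0);
  \coordinate [label=above:$C$] (C) at (1.5,2.5);
  \draw (A) -- (B) -- (C) -- cycle;
  \draw (0.8,0) arc (0:59:0.8);   % angle arc at A
  \node at (1.2,0.4) {$\theta$};
\end{tikzpicture}
\end{document}
```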

Whilst tinkering with this, a couple of things came to mind.

Firstly, this is programming, but perhaps not as you might have thought of it. If we taught adult novices some basic programming and coding skills using TikZ rather than turtle, they’d at least be able to create professional-looking diagrams. (Okay, so the syntax is admittedly probably a bit scary and confusing to start with… But it could be simplified with some higher-level, more abstracted, custom-defined macros that learners could then peek inside.)

So when folk talk about teaching programming, maybe we need to think about this sort of thing as well as enterprise Java. (I spent plenty of time last night on the TeX Stack Exchange site!)

Secondly, the availability of things like Binderhub makes it easier to build preloaded distributions that can be run by anyone, from anywhere (or at least, for as long as public Binderhub services exist). Simply by sharing a link, I can point you to a runnable notebook – in this case, the OpenLearn geometry demo notebook mentioned above.

One of the things that excites me, but that I can’t seem to convince others about, is the desirability of constructing documents in the way the OpenLearn geometry demo notebook is constructed: all the assets displayed in the document are generated by the document. What this means is that if I want to tweak an image asset, I can. The means of production – in the example, the TikZ code – is provided; it’s also editable and executable within the Binder Jupyter environment.

When HTML first appeared, web pages were shonky as anything, but there were a couple of buts: the HTML parsers were forgiving, and would do their best with whatever corruption of HTML was thrown at them; and the browsers supported the ability to View Source (which still exists today; for example, in Chrome, go to the View menu then select Developer -> View Source).

Taken together, this meant that: a) folk could copy and paste other people’s HTML and try out tweaks to “cool stuff” they’d seen on other pages; and b) if you got it wrong, the browser would have a go at rendering it anyway, so you wouldn’t feel as if you’d break anything serious by trying things out yourself.

So with things like Binder, where we can build disposable “computerless computing environments” (which is to say, pre-configured computing environments that you can run from anywhere, with just a browser to hand), there are now lots of opportunities to do powerful computer-ingy things (technical term…) from a simple, line at a time notebook interface, where you (or others) can blend notes and/or instruction text along with code – and code outputs.

For things like the OpenLearn demo notebook, we can see how the notebook environment provides a means by which educators can produce repurposable documents, sharing not only educational materials for use by learners, or appropriation and reuse by other educators, but also the raw ingredients for producing customised forms of the sorts of diagrams contained in the materials: if a figure doesn’t have the labels you want, you can change them and re-render the diagram.

In a sense, sharing repurposable, “reproducible” documents that contain the means to generate their own media assets (at least, when run in an appropriate environment: which is why Binderhub is such a big thing…) is a way of sharing your working. That is, it encourages open practice, and the sharing of how you’ve created something (perhaps even with comments in the “code” explaining why you’ve done something in a particular way, or where the inspiration/prior art came from), as well as the what of the things you have produced.

That’s it, for now… I’m pretty much burned out on trying to persuade folk of the benefits of any of this any more…

PS TikZ and PGF: TeX packages for creating graphics programmatically. Far more useful than turtle and Scratch?

Open Education Versions of Open Source Software: Adding Lightness and Accessibility to User Interfaces?

In a meeting a couple of days ago discussing some of the issues around what sort of resources we might want to provide students to support GIS (geographical information system) related activities, I started chasing the following idea…

The OU has, for a long time, developed software applications in-house that are provided to students to support one or more courses. More often than not, the code is developed and maintained in-house, and not released / published as open source software.

There are a couple of reasons for this. Firstly, the applications typically offer a clean, custom UI that minimises clutter and is designed to support usability for learners learning about a particular topic. Secondly, we require software provided to students to be accessible.

For example, the RobotLab software, originally developed, and still maintained, by my colleague Jon Rosewell, was created to support a first year undergrad short course, T184 Robotics and the Meaning of Life, elements of which are still used in one of our level 1 courses today. The simulator was also used for many years to support first year undergrad residential schools, as well as a short “build a robot fairground” activity in the masters level team engineering course.

As well as the clean design, and features that support learning (such as a code stepper button in RobotLab that lets students step through code a line at a time), the interfaces also pay great attention to accessibility requirements. Whilst these features are essential for students with particular accessibility needs, they also benefit all our students by improving the usability of the software as a whole.

So those are two very good reasons for developing software in-house. But as a downside, it means that we limit the exposure of students to “real” software.

That’s not to say all our courses use in-house software: many courses also provide industry standard software as part of the course offering. But this can present problems too: third party software may come with complex user interfaces, or interfaces that suffer from accessibility issues. And the software versions used in a course may drift from the latest releases if the software version is fixed for the life of the course. (In fact, the software version may be adopted a year before the start of the course and then be expected to last for five years of course presentation.) Or, if the software is updated, this may require significant updates to be made to the course material wrapping the software.

Another issue with professional software is that much of it is mature, and has added features over its life. This is fine for early adopters: the initial versions of the software are probably feature-light, and add features slowly over time, allowing the user to grow with them. Indeed, many latterly added features may have been introduced to address issues surrounding a lack of functionality, power or “expressiveness” in use identified by, and frustrating to, the early users, particularly as they became more expert in using the application.

For a novice coming to the fully featured application, however, the wide range of features of varying levels of sophistication, from elementary, to super-power user, can be bewildering.

So what can be done about this, particularly if we want to avail ourselves of some of the powerful (and perhaps, hard to develop) features of a third party application?

To steal from a motorsport engineering design principle, maybe we can add lightness?

For example, QGIS is a powerful, cross-platform GIS application. (We have a requirement for platform neutrality; some of us also think we should be browser-first but, for now, let’s accept the use of an application that needs to be run on a computer with a “desktop” operating system (Windows, OS X, Linux) rather than one running a mobile operating system (iOS, Android) or developed for use on a netbook (Chrome OS).)

The interface is quite busy, and arguably hard to quickly teach around from a standing start:

However, as well as being cross-platform, QGIS also happens to be open source.

That is, the source code is available [github: qgis/QGIS].


Which means that as well as the code that does all the clever geo-number crunching stuff, we have access to the code that defines the user interface.

[UPDATE: in this case, we don’t need to customise the UI by forking the code and changing the UI definition files – QGIS provides a user interface configuration / customisation tool.]

For example, if we look for some menu labels in the UI:

we can then search the source code to find the files that contribute to building the UI:
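Something as crude as a recursive grep gets you a long way here (the label and path below are illustrative guesses rather than tested values):

```sh
# Search the QGIS source tree for a menu label string seen in the UI
grep -rn "Field calculator" QGIS/src/ui/
```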

In turn, this means we can take that code, strip out all the menu options and buttons we don’t need for a particular course, and rebuild QGIS with the simplified UI. Simples. (Or maybe not that simples when you actually start getting into the detail, depending on how the software is designed!)

And if the user interface isn’t as accessible as we’d like, we can try to improve it, and contribute the improvements back to the parent project. The advantage there is that if students go on to use the full QGIS application outside of the course, they can continue to benefit from the accessibility improvements. As can every other user, whether they have accessibility needs or not.

So here’s what I’m wondering: when we’re faced with a choice between using an open source, third party “real” application that has usability and accessibility issues, and building our own, why build the custom learning app, especially if we’re going to keep the code closed and have to maintain it ourselves? Why not join the developer community and produce a simplified, accessible skin for the “real” application, and feed accessibility improvements at least back to the core?

On reflection, I realised we do, of course, do the first part of this already (forking and customising), but we’re perhaps not so good at the latter (contributing accessibility or alt-UI patterns back to the community).

For operational systems, OU developers have worked extensively on Moodle, for example (and, I think, committed back to the parent project)… And in courses, the recent level 1 computing course uses an OU fork of Scratch called OUBuild, a cross-platform Adobe Air application (as is the original), to teach basic programming, but I’m not sure if any of the code changes have been openly published anywhere, or whether any design notes have been published on why the original was not appropriate as a direct/redistributed download?

Looking at the Scratch open source repos, Scratch looks to be licensed under the BSD 3-clause “New” or “Revised” License (“a permissive license similar to the BSD 2-Clause License, but with a 3rd clause that prohibits others from using the name of the project or its contributors to promote derived products without written consent”). Although it doesn’t have to be, I’m not sure the OUBuild source code has been released anywhere, or whether commits were made back to the original project? (If you know differently, please let me know:-)) At the very least, it’d be really handy if there was a public document somewhere that identifies the changes that were made to the original and why, which could be useful from a “design learning” perspective. (Maybe there is a paper being worked up somewhere about the software development for the course?) By sharing this information, we could perhaps influence future software design, for example by encouraging developers to produce UIs that are defined from configuration files that can be easily customised and selected from (in the way that users can often select language packs).

I can think of a handful of flippant, really negative reasons why we might not want to release code, but they’re rather churlish… So they’re hopefully not the reasons…

But there are good reasons too (for some definition of “good”..): getting code into a state that is of “public release quality”; the overheads of having to support an open code repository (though there are benefits: other people adding suggestions, finding bugs, maybe even suggesting fixes). And legal copyright and licensing issues. Plus the ever present: if we give X away, we’re giving part of the value of doing our courses away.

At the end of the day, seeing open education in part as open and shared practice, I wonder what the real challenges are to working on custom educational software in a more open and collaborative way?

Keeping Up With What’s Possible – Daily Satellite Imagery from AWS

Via @simonw’s rebooted blog, I spotted this – Landsat on AWS: “Landsat 8 data is available for anyone to use via Amazon S3. All Landsat 8 scenes are available from the start of imagery capture. All new Landsat 8 scenes are made available each day, often within hours of production.”

What do things like this mean for research, and teaching?

For research, I’m guessing we’ve gone from a state 20 years ago – no data [widely] available – to 10 years ago – available under license, with a delay and perhaps as periodic snapshots – to now – daily availability. How does this impact on research, and what sorts of research are possible? And how well suited are legacy workflows and tools to supporting work that can make use of daily updated datasets?

For teaching, the potential is there to do activities around a particular dataset that is current, but this introduces all sorts of issues when trying to write and support the activity (e.g. we don’t know what specific features will turn up in the data in the future). We struggle with this anyway when trying to write activities that give students an element of free choice or open-ended exploration, where we don’t specifically constrain what they do. Which is perhaps why we tend to be so controlling – there is little opportunity for us to respond to something a student discovers for themselves.

The realtime-ish ness of data means we could engage students with contemporary issues, and perhaps enthuse them about the potential of working with datasets that we can only hint at or provide a grounding for in the course materials. There are also opportunities for introducing students to datasets and workflows that they might be able to use in their workplace, and as such act as a vector for getting new ways of working out of the Academy and out of the tech hinterland that the Academy may be aware of, and into more SMEs (helping SMEs avail themselves of emerging capabilities via OUr students).

At a more practical level, I wonder, if OU academics (research or teaching related) wanted to explore the Landsat 8 data on AWS, would they know how to get started?
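For what it’s worth, here’s a minimal sketch of one way in, assuming the public landsat-pds S3 bucket and its Collection 1 path layout (both of which may have changed):

```python
# List a few objects from the public Landsat 8 archive on S3,
# using unsigned (anonymous) requests.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
resp = s3.list_objects_v2(Bucket="landsat-pds", Prefix="c1/L8/", MaxKeys=10)
for obj in resp.get("Contents", []):
    print(obj["Key"])
```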

What sort of infrastructure, training or support do we need to make this sort of stuff accessible to folk who are interested in exploring it for the first time (other than Jupyter notebooks, RStudio, and Docker of course!;-) ?

PS Alan Levine /@cogdog picks up on the question of what’s possible now vs. then: http://cogdogblog.com/2017/11/landsat-imagery-30-years-later/. I might also note: this is how the blogosphere used to work on a daily basis 10-15 years ago…

From the University of the Air to the University of the Cloud…

Skimming over a recent speech given to the European Association of Distance Teaching Universities conference by the OU VC (The future for Open and Distance Universities. Discussing the move from the University of the Air to the University of the Cloud), the following quotes look like they may be handy at some point…

We were disruptive and revolutionary in our use of technology back then [1969], and as we approach our 50th year, we intend to be disruptive and revolutionary again, to transform the life chances of tens of thousands of future students. When we are thinking of change, it is important that our own enthusiasm for it should not run away with itself. It should be for the sake of our students and for our mission.

Disruptive and revolutionary… I wonder what either of those mean in practical terms? Or is that still to be defined… In which case… ;-)

At a time of unprecedented change and recognising future economic challenges, we have a crucial role to play in helping employers and employees respond to the rapid rise in automation which is expected to sweep away millions of existing jobs.

The ability for people to upskill and reskill will become crucial in ways we can’t yet predict, and where students will need to be equipped to thrive as digitally-enabled citizens – people who are not just victims of digital change, but people who can take advantage of it.

“[D]igitally-enabled citizens” – defined how?

We can and should help tackle this economic inequality from this employment disruption, and the resulting social inequality, by creating a positive digital learning experience and building essential digital skills – truly modernising our missions for this Century.

How so?

Reflecting on changes to the BBC newsroom:

But more significantly using the capabilities of digital media to their full – by which I mean interactivity, direct contribution from the audience, collaborative newsgathering and a levelling of the relationship between institution and audience/consumer.

BBC Me?! ;-)

I recall the BBC’s then UK political editor, Nick Robinson, starting to blog (this was preTwitter). He would post updates after he had picked up initial political intelligence in the morning. He found that political insiders would contact him either privately or online, adding information or possibly contradicting the initial account he had published.

By making his journalism more open and more contingent he gathered more information and tested his thinking, so that by the end of the day when he came to broadcast on the “conventional” broadcast bulletin he would not only have provided a better and faster news service during the day but his final polished TV output would have benefitted by that open testing and development.

T151 was blogged in its production. The content is still there (content from several years ago on http://digitalworlds.wordpress.com). I wish I’d added notes to some related presentations from the time…

[W]e don’t need to invent some radical vision of the future in order to think how we should be changing. Rather we need to look around us carefully now and observe what is interesting and extrapolate from there.

There’s a lot of current world out there that I don’t think we’ve been watching… And a lot of recent past/passed blogged here on OUseful.info over the last 10 years…

So, I suggest, looking at trends in knowledge sectors – publications, books, music – that have changed earlier and faster, such as the news media, can provide lessons for universities. I realise that it can be sacrilegious in some academic circles to draw comparisons with media, content and indeed the news.

Yep. I’d also be looking at things like reproducible research workflows…

News of course is ephemeral and inevitably less perfect or polished than carefully crafted academic content. But there are at least some lessons.

Firstly, the cultural ones. In parts of academia, although thankfully less so in distance and online universities, there is still a patrician culture, de haut en bas, in terms of professional practice. That we are the intellectual priesthood, dispensing tablets of knowledge. Of course we need to treasure our expertise and our standards. But when we are teaching people who are often mature, who have their own experience of life and work, we have to be more modest. And the internet and interactivity keeps us honest and modest.

And we could maybe be more transparent in our working, as per Nick Robinson…

And we need to be aware that we are competing with news media, and other content, for the attention of students, either in the initial choice of whether they sign up for our courses or for their attention when attractive content is drawing them away from their studies once they are taking a course.

Competition in a couple of ways: attention and economic (eg pounds per hour of attention as well as number of hours).

So why don’t we care even more about how readable, how visual, how stimulating and grabby, how entertaining or provocative our courses are?

Or whether anyone even looks at them?

And do our materials always have to be absolutely perfect, especially if perfection is costly and slow, unresponsive and non-topical? Good enough content, I’m afraid to say, has a huge following. Just look at YouTube. And when it is online if it needs improving, it can be done easily.

I think if we are responsive in posting corrections, we can be much quicker in production, and also benefit from “production in presentation” in first run (at least) of courses. Or uncourse them in their production.

I always told BBC journalists and producers that making content attractive was not a contradiction with quality, it is not selling out or dumbing down, it is an essential accompaniment. If you don’t make academic content and the learning experience as stimulating and modern as the other content choices in the lives of students, don’t be surprised if students lose attention or drop out.

Repeated rinse-and-repeat cycles of drafting and editing take all the character out of our content… And it still goes to students littered with errors and untested by “users” in the first presentation at least…

Of course the immediacy of the feedback of on-line helps enormously as we can know at once what is working for students.

But then, when we get feedback about e.g. errors in material, it can take till the next presentation of the course a year later for them to be properly addressed. (I don’t know why we can’t A/B test stuff, either? Clinical trials seem to get away with it…)

I hope you can see how many of those cultural and professional practice issues in other content fields have a direct application to universities and distance learning. Too many of us are still working in a mindset where we see digital as a cost effective alternative to the traditional pedagogy of distance learning books and materials.

What’s that saying? Digital isn’t cost effective? Erm…

At the centre of the UK Open University’s changes in the months and years ahead will be to exploit fully the affordances of digital to the learning needs of future society and future students. Of course, we will take into account concerns about delivering for our existing students and make sure that the transition to that more fully digitally designed world is carried out carefully, carrying them with us.

So what are the “affordances of the digital”? I can think of a few, but they are predicated on changed production and presentation models together.

[I]t is not the radical, niche technologies that should interest us, but rather those that have the possibility to become, as Shirky has it, ‘boring’. The basic attributes of digital that can reform learning have not changed significantly since the beginning of social media about ten years ago. It is just that they are not fully adopted in our learning practices.

Still not sure what the point is here? Such as…?

With this in mind I will also add the usual caveat that attempting to predict the future is nearly always foolhardy, and so I will limit my conjectures to thinking about two aspects: the main areas that we might suggest will drive change within open and distance universities; and the context within which those universities are operating.

Best way to predict is invent; next best way is to explore the stuff other folk are inventing. That’s partly what OUseful.info is about…

To look at the first of these, what are the current trends, developments or technologies that might represent what William Gibson described as the future that is already here.

There are three broad elements of particular interest to open and distance universities that I will highlight, although there are undoubtedly more we could address. These are Data, Openness and Flexibility.

To take the first of these, data, it is a commonplace to observe that the generation, analysis and interpretation of data is now a significant factor in society in a manner it was not just ten years ago. There is talk of data capitalism, data surveillance and data as the new oil. But what does this mean for universities, and in particular ones operating at a distance?

There are undoubted benefits we can give to our students in a data rich world, via learning analytics. At the Open University we are aligning analytics with learning design to help us inform which designs are more effective in retaining students and meeting their needs.

We can tell which elements of a course are aligned with effective performance and which ones are less well correlated. This is the type of feedback we have never managed before when we were sending out boxes of printed materials. The critical thing is to show students that their experience with something that for some of them is less familiar is going to create benefits for them.

I still don’t know if anyone ever reads a particular page, clicks on a particular link, etc etc…

And this type of feedback changes the definitions of our engagement with students and our ability to be able to respond to their needs. Our previous techniques for capturing student feedback would involve them completing a written, then later online, survey after taking a module, quite often a long time after their learning experience in question. Those feedback methods inevitably require some effort on the part of the student and the face to face focus group necessarily involves a behaviour – travelling to a physical point – that inevitably excludes certain categories of students.

We are now introducing much more immediate forms of response (I’m not sure that feedback is an accurate term any more, as this is now a less deliberate process for students). We are capturing immediate response data. For instance, on our Student Home help page students are asked to click a simple green thumbs-up or red thumbs-down to indicate whether their query has been answered effectively.

Our teams monitor those “thumbs” in real time and refine responses in turn and feedback issues immediately to the learning/module teams. We intend to roll out this approach from our student experience site to all of the virtual learning environment next year, in time for our main autumn presentation, so that we can be responding to students and improving their learning experience in real time.

We are also able to use data to help inform our tutors, our Associate Lecturers, about their students. Of course, Associate Lecturers have their own direct relationships with students who are studying most intensively or enthusiastically – but it is the students who are not engaging and the data that is not being created on our system that can help tutors intervene positively.

And we should also be generous and non-proprietary with the data we give to students to help them monitor and shape their own learning.

We should also be more thoughtful about who we divulge student data to, e.g. through the use of third party tracking services where we reveal student behaviours to third parties, who then sell the data back to us. (And if they don’t sell it to us, how are they generating revenue from it?)

To now consider Openness. Openness now comes in many different forms, it is not just about the open access to higher education it was when the OU was founded. Now it covers open educational resources, MOOCs, open access publications, open textbooks and open educational practice.

In this, open universities need to continue to adapt and be involved in the changing nature of openness in higher education. The adoption of elements of openness across the higher education sphere really hints at a much bigger shift, which is the blurring of boundaries.

This brings me onto the third element, that of flexibility. This can come in many different forms. The open model of education has always been about flexibility – allowing students to choose from a range of courses, to take a break in their study, to combine different size courses.

However, we need to challenge ourselves. When we have asked our students and our potential students about flexibility they have told us that the flexibility is often only a flexibility that is on the university’s terms, not on theirs. Some students want to speed up their study, others want to be able to slow it down. Some want the option to be able to do both, according to the circumstances of their lives. And this is where digital’s infinite flexibility will be the servant of the student’s demand for flexibility.

This challenges the traditional assumptions of the academic year that are still built into the mindset of many academics. And it challenges us to offer a varied and flexible experience that might make us have to be more flexible than we have been used to.

I fancy the idea of alumni as lifelong learners, paying a subscription to access all our content (think: Netflix), perhaps including course materials that are currently in production (if we can’t be so open as to draft out materials, and try them out, in public), chunked in tiny chunks (say, 30 mins of “attention time”, or so). We could track the popular pathways – there may be new courses or market intelligence in them…

I come from a digital news media environment where the expectation of immediate high quality content on the terms of the audience were gradually adopted by the organisation – an organisation that had been used to serving the news at a time when the BBC was ready to give it to people. That revolution happened in news at least 15 years ago. Universities are just about catching up.

But we will in the future push this flexibility further as students and employers demand it. For instance we are, as many of you are I expect, exploring flexible forms of Assessment. Can we accredit much more learning from elsewhere? Can we assess and offer credit for practical learning from the workplace on a much more systematic and responsive basis? Can we give the student a more flexible choice of assessment? Are we prepared to move from assessment “of learning” to assessment “for learning”?

Just to note, BXM871 – Managing in the digital economy: “This module offers a process to gain academic credit for your study of The Open University MOOCs that comprise the FutureLearn Digital Economy program. Your knowledge, understanding and skills from the MOOCs will be supplemented by learning materials supporting critical thinking, reflection and study skills appropriate to masters level assessment. You will have access to ‘light touch’ advice from a learning advisor, but please be aware that (as with the MOOCs you bring to the module as prior learning) you need to be a proactive learner to benefit from the materials and activities supplied (peer-review, case studies, readings and online discussion). Activities and assessment address your own professional situation, culminating in an extended written assignment integrating your prior MOOC learning in the context of challenges posed by technological change.”

The use of data, open resources and artificial intelligence has the potential to offer students different types of content within an overall course structure, better personalised to their interests and needs.

Oh, God, no, please not AI Snake Oil…

On the changing economics and business models, if we were following tech, we’d be looking for two-sided market opportunities. But do we really want to do that..?

We need to consider these three elements in relation to a final aspect – the context within which universities operate, and the changing nature of society.

We live in a world where fake news and the negative role of social media sometimes determine public policy. I suspect that quite a large number of us in this room were naturally early techno-optimists. But as the polarising, degrading and demeaning aspects of extreme opinions and abusive content online undermine the cohesion of societies I believe that there is a natural swing towards techno-pessimism.

But the overwhelming shift towards a digital world cannot be held back just because we have some reservations and we should not despair. We need to be as committed to creating a constructive information society in the digital world as we have been over centuries IRL. And we will succeed in our civilising role.

All universities, but particularly I believe, open and distance ones who have a purpose in educating the wider population, have a particular role in helping to produce graduates who understand how to make effective use of these tools in their education, but also in being good networked citizens.

I always liked the strapline of the Technology Short Course Programme – “Relevant Knowledge”. I also think folk should leave our courses knowing how to do things, or seeing how some “big ideas” could help them in the workplace. In short, we should be equipping people to engage critically, as well as productively, with technology.  As it is, I’m not convinced we always deliver on that…:-(

Here at The Open University we are trying to respond to these challenges while retaining our core mission of offering higher education to all, regardless of background or previous qualifications.

We want to transform the University of the Air envisaged by Harold Wilson in the 1960s to a University of the Cloud – a world-leading institution which is digital by design and has a unique ability to teach and support our students in a way that is responsive both to their needs and those of the economy and society.

Open and Distance education universities face an exciting and challenging time. Exciting in that they hold much of the expertise and practice needed to address many of the challenges facing higher education and society in general. Challenging in that they no longer hold a monopoly on much of this and must adapt to new market forces and pressures.

I like a lot of those words. But I’ve no idea (really; really no idea, at all) what anyone else thinks they might mean. (I’m guessing it’s not what I think they mean! ;-)


Authoring Interactive Diagrams and Explorable Explanations

One of the things that the OU has always tended to do well is create clear – and compelling – diagrams and animations to help explain often complex topics. These include interactive diagrams that allow a learner to engage with and explore the diagram for themselves.

At a time when the OU is looking to reduce costs across the board, finding more cost effective ways of supporting the production, maintenance, presentation and updating of our courses, along with the components contained within them, is ever more pressing.

As a have-a-go technology optimist, I’m generally curious as to how technology may help us come up with, as well as produce, such activities.

I’m a firm believer in using play as a tool for self-directed discovery and learning, and practice as a way of identifying or developing, erm, new practice, and I’m also aware that new technology and tools themselves can sometimes require a personal time investment before you start to get productive with them. However, for many, if you don’t get to play often, knowing how to install or start using a new piece of software, let alone how to start playing with it once you’re in, can be a blocker. And that’s if you’ve got – or make – the time to explore new tools in the first place.

Changing a workflow is also not just down to one person changing their own practice – it can depend heavily on immediate downstream factors, such as what the person you hand your work over to is expecting from you in order for them to do their job.

(Upstream considerations can also make life more or less easy. For example, if you want to analyse a data set that the person before you has handed over as a table in a PDF document, you have to do work to get the data out of the document before you can analyse it.)

And that’s part of the problem: tech can often help in several ways, but it is sometimes most effective when you change the whole process; if you stick with the old process and just update one step of the workflow, that can often make things worse, not better.

Sometimes, a workflow can just be bonkers. When we produced material for the FutureLearn Learn to Code MOOC, we used an authoring tool that could generate markdown content. The FutureLearn authoring environment is (I was told) a markdown environment. I was keen to explore an authoring route that would let us publish from the authoring environment to FutureLearn (in the absence of a FutureLearn API, I’d have been happy to finesse one by scraping form controls and bodging my own automation route). As it was, we exported content from the markdown-producing environment into Word, iterated through it there with the editor (introducing errors into code elements), and then someone cut and pasted the content into the FutureLearn editor, presumably restyling it as they did so. Then we had to fix the errors that were either introduced by the editing process, or made it through the editing process, by checking back against the code in the original authoring environment. The pure markdown workflow was stymied because even though we could produce markdown, and FutureLearn could (presumably) accept it, the intermediate workflow was a Word-based one. (The lesson from this? Innovation can be halted if you have to use legacy processes in a workflow rather than reengineering all of it.)

The OU-XML authoring route has similar quirks: authors typically author in Word, then someone has to copy, paste and retag the content in an XML authoring tool so it’s marked up correctly.

But that’s all by the by, and more than enough for the subject of another post…

Because the topic of this post is a quick round-up of some tools that support the creation – and deployment – of interactive diagrams and explorable explanations. I first came across this phrase in a 2011 post by Bret Victor – Explorable Explanations, and I’ve posted about them a couple of times (for example, Time to Revisit Tangle?).

One of the most identifiable aspects of many explorable explanations are interactive diagrams where you can explore some dynamic feature of an explanation in an interactive way. For example, exploring the effect of changing parameter values in an equation:

One of the things I’m interested in are frameworks and environments that support “direct authoring” of interactive components that could be presented to students. Ideally, the authoring environment should produce some sort of source code from which the final application can be previewed as well as published. Ideally, there should also be separation between style and “content”, allowing the same asset to be rendered in multiple ways (this might include print as well as online static or interactive content).

Unfortunately, in many cases, direct authoring is replaced by a requirement to use some sort of “source code”. (That’s partly because building UIs that naive users can use can be really difficult, especially if those users refuse to use the UI because it’s a bit clunky – even if the code the UI generates, which is the thing you actually want to produce, is actually quite simple, and it would be much easier if authors wrote that source code directly.)

For example, I recently came across Idyll [view the code and/or read the docs], a framework for creating interactive documents. See the following couple of examples to get a feel for what it can do:

The example online editor gives an example of the markup language (markdown, with extensions) and the rendered, interactive document:

(It’d be quite interesting to see how closely this maps onto the markdown export from a Jupyter notebook that incorporates ipywidgets.)

Moving the sliders in the rendered document changes the variable values and dynamically replots the curve in the chart.
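The flavour of the markup is something like the following (a sketch from memory of the Idyll docs; component and attribute names may not be exact):

```
[var name:"x" value:5 /]

Drag the slider to change the value of x: [Range min:0 max:10 value:x /]
The current value is [Display value:x /].
```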

I can see Idyll becoming a component of the forthcoming OpenCreate tool, so it’ll be interesting to see if anyone else can too – partly because it would presumably require downstream buy-in to using the interactive components Idyll bundles.

Whilst Idyll is a live project, the next one – Apparatus – looks to have stalled. It has good provenance, though, with one of the examples coming from Bret Victor himself.

Here’s an example of the sort of thing it can produce:

The view can also reveal the underlying configuration:

The scene is built up from a set of simple objects, or previously created objects (for example, the “Wheel with mark” component). This feature is important because it encourages another useful behaviour amongst new users: it encourages you to create simple building blocks that do a particular thing, and then assemble those building blocks to help you do more complex things later on.

The apparatus “manual” fits in one diagram:

The third tool – Loopy – also looks like it may have recently stalled (again, the code is available and the UI is browser-based). This tool allows for the creation, through direct manipulation, of a particular sort of “systems diagram” where influence at one node can positively or negatively influence another node:

To create a node, simply draw a circle; to connect nodes, draw a line from one node to another.

You can set the weight, positive or negative:


As well as adding and editing text, and moving or deleting items:

You can also animate the diagram, feeding in positive or negative elements from one item and seeing how those changes feed through to influence the rest of the system:

The defining setup of the diagram can be saved in a URI and then shared.

All three of these applications encourage the user to explore a particular explanation.

Apparatus and LOOPY both provide direct authoring environments that allow the user to create their own scenes through adding objects to a canvas, although Apparatus does require the user to add arithmetic or geometrical constraints to some items when they are first created. (Once a component has been created, it can just be reused in another diagram.)

Apparatus and LOOPY also carry their own editor with them, so a user could change the diagram themselves. In Idyll, you would need access to the underlying enhanced markdown.

If you know of any other browser based, open source frameworks for creating and deploying standalone, iframe/web page embeddable interactive diagrams and explorable explanations, please let me know via the comments.

PS for a range of other explorable explanations, see this awesome list of explorables.

Fragment – DIT4C – Docker Base Containers for Edu Remote Computing Labs

What’s an effective way of helping a student run a desktop application when their own computer won’t run the application locally, for whatever reason? Virtualised software, running remotely, provides one solution. So here’s an example of a project that looks at doing just that: DIT4C (“Data Intensive Tools for the Cloud”), a platform for hosting data analysis tools “in the cloud” using containers [repo].

Prepackaged, standalone containers are defined for a range of applications, including RStudio, Jupyter notebooks, Jupyter+R and OpenRefine.

Standalone Containers With Branded Landing Page

The application containers are built on top of a base container that includes an nginx webserver/proxy, a GoTTY shell and a file uploader. The individual containers then have a “homepage” that links to the particular application:

So what do we have at this point?

  • a branded landing page;
  • a browser-accessed shell;
  • a browser-accessed file uploader.

These services are all running within a single container. I don’t know if there’s a way of linking multiple containers using docker-compose? This would require finding some way of announcing the services provided by each container to a central nginx server, which could then link to each from a single homepage. But this would mean separate terminals and file loaders into each container (though maybe the shared files could be handled as a single mounted volume shared across all the linked containers?).

Once again, I’m coming round to the idea that using a single container to run multiple services, rather than several linked containers each running a single service, is simpler, even if it does go against the (ideal?) model of using containers as part of a small pieces, loosely joined architecture. I think I need to post a simple recipe (or recipes) somewhere that shows different ways of running multiple services within a single container. The docker docs – Run multiple services in a container – provide a crib into this at the moment.
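For the record, the pattern from that docker docs crib is a wrapper script that launches each service in the background and then waits (the process names here are placeholders):

```sh
#!/bin/bash
# start.sh: run multiple services in a single container.
./my_first_process &
./my_second_process &
# Wait for any child process to exit, and exit with its status,
# so the container stops if either service dies.
wait -n
exit $?
```

with a Dockerfile that COPYs the script in and sets it as the CMD.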

X11 Applications

Skimming the docs, I notice a reference to a base X11 desktop container. Interesting… I have a PhD student looking for an easy way to host a Qt widget-based application in the cloud for evaluation purposes. To this end, I’ve just started looking around for X11/noVNC web client containers that would allow us to package the app in a simple container and then access it from something like Digital Ocean (given there’s no internal OU docker container hosting service that I’m allowed to access (or am aware of… maybe on the Faculty cluster?)).

So things like this show the way – a container that offers a link to a containerised “desktop” application, in this case QGIS (dit4c/dockerfile-dit4c-container-qgis); (does the background colour mean anything, I wonder? How could we make use of background colour in OU containers?):


Following the X11 Session link, we get to a desktop:

There’s an icon in the toolbar to the application we want – QGIS:

What I’m thinking now is this could be handy for running the V-REP robot simulator, and maybe Gephi…

It also makes me think that things could be simplified a little further by offering a link to QGIS, rather than X11 Application, and opening the application in full screen mode (on the virtualised desktop) on start-up. (See Distributing Virtual Machines That Include a Virtual Desktop To Students – V-REP + Jupyter Notebooks for some thoughts on how to use VMs to distribute a single pre-launched on startup desktop application to try to simplify the student experience.)

It also makes me even more concerned about the apparent lack of interest in, and even awareness of, the possibilities of virtualised software offerings within the OU. For example, at a recent SIG group on (interactive) maps/mapping, brief mention was made of using QGIS, and problems arising therefrom (though I forget the context of the problems). Here we have a solution – out there for all to see and anyone to find – that demonstrates the use of QGIS in a prebuilt container. But who, internally, would think to mention that? I don’t think any of the Tech Enhanced Learning folk I’ve spoken to would even consider it, if they are even aware of it as an option?

(Of course, in testing, it might be rubbish… how much bandwidth is required for a responsive experience when creating detailed maps? See also one of my earlier related experiments: Accessing GUI Apps Via a Browser from a Container Using Guacamole, which demonstrated remotely accessing the Audacity audio editor using a cloud hosted container.)

The Platform Offering

Skimming through the repos, I (mistakenly, as it happens) thought I saw a reference to resbaz (ResBaz Cloud – Containerised Research Apps as a Service). I was mistaken in thinking I had seen a reference in the code I skimmed through – but not, it seems, about there being a relationship:

And so it seems that, perhaps more interestingly than the standalone containers, DIT4C is also a platform offering (architecture docs), providing authenticated access for users, file persistence (presumably?) and the ability to launch prebuilt docker images as required.

That said, looking at the Github repository commits for the project, there appears to have been little activity since March 2017 and the gitter channel appears to have gone silent at the end of 2016. In addition, the docs for getting an instance of the platform up and running are a little bit too sparse for me to follow easily… [UPDATE: it seems as if the funding did run out/get pulled:-(]

So maybe as a project, DIT4C is perhaps now “of historical interest” only, rather than being a live project we might have been able to jump on the back of to get an OU hosted remote computing lab up and running? :-( That said, the ResBaz (Research Bazaar) initiative, “worldwide festival promoting the digital literacy emerging at the center of modern research”, still seems to be around…