Idle Reflections on Sensemaking Around Sporting Events, Part 1: Three Phases of Sports Event Journalism

Tinkering with motorsport data again has, as is the way of these things, also got me thinking about (sports) journalism again. In particular, a portion of what I’m tinkering with relates to ideas associated with "automated journalism" (aka "robot journalism"), a topic that I haven’t been tracking so much over the last couple of years and should probably revisit (for a previous consideration, see Notes on Robot Churnalism, Part I – Robot Writers).

But as well as that, it’s also got me thinking more widely about what sort of a thing sports journalism is, the sensemaking that goes on around it, and how automation might be used to support that sensemaking.

My current topic of interest is rallying, most notably the FIA World Rally Championship (WRC), but also rallying in more general terms, including, but not limited to, the Dakar Rally, the FIA European Rally Championship (ERC), and various British rallies that I follow, whether as a fan, spectator or marshal.

This post is the first in what I suspect will be an ad hoc series of posts following a riff on the idea of a sporting event as a crisis situation in which fans want to make sense of the event and journalists mediate, concentrate and curate information release and help to interpret the event. In an actual crisis, the public might want to make sense of an event in order to moderate their own behaviour or inform actions they should take, or they may purely be watching events unfold without any requirement to modify their behaviour.

So how does the reporting and sensemaking unfold?

Three Phases of Sports Event Journalism

I imagine that "event" journalism is a well categorised thing amongst communications and journalism researchers, and I should probably look up some scholarly references around it, but it seems to me that there are several different ways in which a sports journalist could cover a rally event and the sporting context it is situated in, such as a championship, or even a wider historical context ("best rallies ever", "career history" and so on).

In "Seven characteristics defining online news formats: Towards a typology of online news and live blogs" (Digital Journalism, 6(7), pp.847–868, 2018), Thorsen, E. & Jackson, D. characterise live event coverage in terms of "the vernacular interaction audiences would experience when attending a sporting event (including build-up banter, anticipation, commentary of the event, and emotive post-event analysis)".

More generally, it seems to me that there are three phases of reporting: pre-event, on-event, and post-event. And it also seems to me that each one of them has access to, and calls on, different sorts of dataset.

In the run up to an event, a journalist may want to set the championship and historical context, reviewing what has happened in the season to date, what changes might result to the championship standings, and how a manufacturer or crew have performed on the same rally in previous years; they may want to provide a technical context, in terms of recent updates to a car, or a review of how the environment may affect performance (for example, How very low ambient temperatures impact on the aero of WRC cars); or they may want to set the scene for the sporting challenge likely to be provided by the upcoming event — in the case of rallying, this is likely to include a preview of each of the stages (for example, Route preview: WRC Arctic Rally, 2021), as well as the anticipated weather! (A journalist covering an international event may also consider a wider social or political view around, or potential economic impact on, the event location or host country, but that is out-of-scope for my current consideration.)

Once the event starts, the sports journalist may move into live coverage as well as rapid analysis, and, for multi-day events, backward looking session, daily or previous day reviews, and forward looking "later today" or next day previews. For WRC rallies, live timing gives updates to timing and results data as stages run, with split times appearing on a particular stage as they are recorded, along with current stage rankings and time gaps. Stage level timing and results data from a large range of international and national rallies is more generally available, in near real-time, from the ewrc-results.com rally results database. For large international rallies, live GPS traces, refreshed every few seconds on the WRC+ live tracker map, also provide a source of near real time location data. In some cases, "championship predictions" will be available, showing what the championship status would be if the event were to finish with the competitors in their current positions.

One other feature of WRC and ERC events is that drivers often give a short, to-camera interview at the end of each stage, as well as more formal "media zone" interviews after each loop. Often, the drivers or co-drivers themselves, or their social media teams, will post social media updates, as will the official teams. Fans on-stage may also post social media footage and commentary in near real-time.

The event structure also allows for review and preview opportunities throughout the event. Each day of a stage rally tends to be segmented into loops, each typically of three or four stages. Loops are often repeated, typically with a service or other form of regroup (including tyre and light fitting regroups) in-between. This means that the same stages are often run twice, although in many cases the state of the surface may have changed significantly between loops. (Gravel roads start off looking like tarmac; they end up completely shredded, with twelve inch deep and twelve inch wide ruts carved into what looks like a black pebble beach…)

In the immediate aftermath of the event, a complete set of timing and results data will be available, along with crew and team boss interviews and updated championship standings. At this point, there is an opportunity for a quick to press event review (in Formula One, the Grand Prix + magazine is published within a few short hours of the end of the race), followed by more leisurely analysis of what happened during the event, along with counterfactual speculation about what could have happened if things had gone differently or different choices had been made, in the days following the event.

Throughout each phase, explainer articles may also be used as fillers to raise general background understanding of the sport, as well as specific understanding of the generics of the sport that may be relevant to an actual event (for example, for a winter rally, an explainer article on studded snow tyres).

Fractal Reporting and the Macroscopic View

One thing that is worth noting is that the same reporting structures may appear at different scales in a multi-day event. The review-preview-live-review model works at the overall event level, (previous event, upcoming event, on-event, review event), day level (previous event, upcoming day, on-day, review day), intra-day level (previous loop, upcoming loop, on-loop, review loop), intra-session level (previous stage, upcoming stage, on-stage, review stage) and intra-stage level (previous driver, upcoming driver, on-driver, review driver).

One of the graphical approaches I value for exploring datasets is the ability to take a macroscopic view, where you can zoom out to get an overall view of an event as well as being able to zoom in to a particular part of it.

My own tinkerings with rally timing and results information are intended not only to present the information in a glanceable summary form, but also to present the material in a way that supports story discovery using macroscope style tools that work at different levels.

By making certain things pictorial, a sports journalist may scan the results table for potential story points, or even story lines: what happened to driver X in stage Y? See how driver Z made steady progress from a miserable start to end up finishing well? And so on.

Rally timing and stage results review chartable.

The above chart summarises timing data at an event level, with the evolution of the rally positions tracked at the stage level. Where split times exist within a stage, a similar sort of chartable can be used to summarise evolution within a stage by tracking times at the splits level.

These "fractal" views thus provide the same sort of view over an event but at different levels of scale.

What Next?

Such are the reporting phases available to the sports journalist; but as I hope to explore in future posts, I believe there is also a potential for crossover in the research or preparation that journalists, event organisers, competitors and fans alike might indulge in, or benefit from when trying to make sense of an event.

In the next post in this series, I’ll explore in more detail some of the practices involved in each phase, and start to consider how techniques used for collaborative sensemaking and developing situational awareness in a crisis might relate to making sense of a sporting event.

Automatically Detecting Corners on Rally Stage Routes Using R

One of the things I’ve started pondering with my rally stage route metrics is the extent to which we might be able to generate stage descriptions of the sort you might find on the It Gets Faster Now blog. The idea wouldn’t necessarily be to create finished stage descriptions, more a set of notes that a journalist or fan could use as a prompt to create a more relevant description. (See these old Notes on Robot Churnalism, Part I – Robot Writers for a related discussion.)

So here’s some sketching related to that: identifying corners.

We can use the rLFT (Linear Feature Tools) R package to calculate a convexity measure at fixed sample points along a route (for a fascinating discussion of the curvature/convexity metric, see Albeke, S.E. et al., "Measuring boundary convexity at multiple spatial scales using a linear ‘moving window’ analysis: an application to coastal river otter habitat selection", Landscape Ecology 25 (2010): 1575–1587).

By filtering on high absolute convexity sample points, we can do a little bit of reasoning around the curvature at each point to make an attempt at identifying the start of a corner:

library(rLFT)

stepdist = 10
window = 20
routeConvTable <- bct(utm_routes[1,],
                      # distance between measurements
                      step = stepdist,
                      # window length over which convexity is measured
                      window = window, ridName = "Name")

head(routeConvTable)

We can then use the convexity index to highlight the sample points with a high convexity index:

corner_conv = 0.1

tight_corners = routeConvTable[abs(routeConvTable$ConvexityIndex)>corner_conv,]
tight_corners_zoom1 = tight_corners$Midpoint_Y>4964000 & tight_corners$Midpoint_Y<4965000

# trj (the route trajectory) and zoom1 (a row filter over it)
# are defined earlier in the workflow
ggplot(data=trj[zoom1, ],
       aes(x=x, y=y)) + geom_path(color='grey') + coord_sf() +
  geom_text(data=tight_corners[tight_corners_zoom1,],
                           aes(label = ConvexityIndex,
                               x=Midpoint_X, y=Midpoint_Y),
                           size=2) +
  geom_point(data=tight_corners[tight_corners_zoom1,],
             aes(x=Midpoint_X, y=Midpoint_Y,
                 color= (ConvexityIndex>0) ), size=1) +
  theme_classic()+
  theme(axis.text.x = element_text(angle = 45))
High convexity points along a route

We can now do a bit of reasoning to find the start of a corner (see Automatically Generating Stage Descriptions for more discussion about the rationale behind this):

cornerer = function (df, slight_conv=0.01, closeby=25){
  df %>%
    mutate(dirChange = sign(ConvexityIndex) != sign(lag(ConvexityIndex))) %>%
    mutate(straightish =  (abs(ConvexityIndex) < slight_conv)) %>%
    mutate(dist =  (lead(MidMeas)-MidMeas)) %>%
    mutate(nearby =  dist < closeby) %>%
    mutate(firstish = !straightish & 
                        ((nearby & !lag(straightish) & lag(dirChange)) |
                        # We don't want the previous node nearby
                        (!lag(nearby)) )  & !lag(nearby) )
}

tight_corners = cornerer(tight_corners)

Let’s see how it looks, labeling the points as we do so with the distance to the next sample point:

ggplot(data=trj[zoom1,],
       aes(x=x, y=y)) + geom_path(color='grey') + coord_sf() +
  ggrepel::geom_text_repel(data=tight_corners[tight_corners_zoom1,],
                           aes(label = dist,
                               x=Midpoint_X, y=Midpoint_Y),
                           size=3) +
  geom_point(data=tight_corners[tight_corners_zoom1,],
             aes(x=Midpoint_X, y=Midpoint_Y,
                 color= (firstish) ), size=1) +
  theme_classic()+
  theme(axis.text.x = element_text(angle = 45))
Corner entry

In passing, we note that we can identify the large gap distances as "straights" (and then perhaps look for lower convexity index corners along the way that we could label as "flowing" corners).

Something else we might do is number the corners:

Numbered corners
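One minimal way of numbering them, assuming the `tight_corners` dataframe with its `firstish` corner-start flag from above (the `cornerNum` column name is my own): a cumulative sum over the corner-start flags assigns an incrementing index to each detected corner.

```r
library(dplyr)

# Number corners by counting corner-start flags cumulatively;
# treat NA flags (arising from lag() at the start of the route) as FALSE
tight_corners <- tight_corners %>%
  mutate(cornerNum = cumsum(coalesce(firstish, FALSE)))
```

Rows before the first detected corner get a `cornerNum` of 0; the numbers could then be rendered at each corner-start point with `geom_text()` in the same way as the labels above.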

There’s all sorts of fun to be had here, I think!

Personally Learning

Notes and reflections on a curiosity driven personal learning journey into geo and rasters and animal movement trajectory categorisation and all sorts of things that weren’t the point when I started…

Somewhen over the last month or so, I must have noticed a 3D map produced using the rayshader R package somewhere because I idly started wondering about whether I could use it to render a 3D rally stage map.

Just under three weeks ago, I started what was intended to be a half hour hack to give it a go, and it didn’t take too long to get something up and running…

Rally stage map rendered using rayshader

I then started tinkering a bit more and thinking about what else we might be able to do with linear geographies, such as generating elevation along route maps, for example, and also started keeping notes on various useful bits and bobs along the way: some notes on how geographic projections work, for example (which has been something of a blocker to me in the past) or how rasters work and how to process them.

I also had to try to get my head around R again (it’s been several years since I last used it) and started pondering a useful way to structure my notes and then publish them somewhere: bookdown was the obvious candidate as I was working in RStudio (I seem to have developed a sick-in-the-stomach allergic reaction to Jupyter notebooks, Python, VS Code and Javascript — they really are physically revolting / nausea inducing to me — after a work burn out over the last nine months of last year).

I use code as a matter of course for all sorts of things, pretty much every day, and also use it recreationally, so R has provided a handy escape route for my code related urges (maybe I should pick up the opportunity to learn something new? The issue is, I tend to be task focussed when it comes to my personal learning, so I’d need to use a language that somehow made sense for a practical thing I want to achieve…)

Anyway, the rally route thing quickly turned into a curiosity driven learning journey: how could I render a raster in a ggplot, could I overlay tiles on a 3D rendered map:

Could I generate a ridge plot?

Ridge plot

Could I buffer a route and use it to crop a 3D model?

Could we convert an image to an elevation raster?

And so on..

When poking around looking for ideas about how to characterise how twisty or turny a route was, I stumbled across sinuosity as a metric, and from that idea quickly discovered a range of R packages that implement tools to characterise animal movement trajectories, which we can easily apply to rally stage routes.
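Sinuosity itself is cheap to compute by hand: the ratio of the distance travelled along the path to the straight-line distance between its endpoints. A minimal base R sketch (the function name and the x/y coordinate vectors are my own; packages such as trajr offer more sophisticated, properly scaled variants):

```r
# Sinuosity: path length divided by the straight-line distance
# between start and end; 1 means dead straight, larger means twistier
sinuosity <- function(x, y) {
  path_len <- sum(sqrt(diff(x)^2 + diff(y)^2))
  crow_flies <- sqrt((tail(x, 1) - x[1])^2 + (tail(y, 1) - y[1])^2)
  path_len / crow_flies
}

sinuosity(c(0, 1, 2), c(0, 1, 0))  # two diagonal legs over a straight of 2: ~1.414
```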

Enriching maps with data pulled in from OpenStreetMap also suggests how we might generate maps that could be useful in event planning (access roads, car parks, viewpoints, etc); and casting routes onto road networks (graph representations of road networks; think osmnx in Python, for example) made me wonder if I’d be able to generate road books and tulip maps (answer: not yet, at least…).

I’ve written my learning journey from the last 20 days or so up at RallyDataJunkie: Visualising Rally Stages; the original repo is here. A summary of topics is in the previous blog post: Visualising Rally Route Stages (with help from rayshader and some ecologists…).

Reflecting on what I’ve ended up with, the structure is partly reflective of the journey I followed, but it’s also a bit revisionist. The original motivation was the chapter on rendering 3D stage maps; to do this I needed to get a sense of what I could do with 2D rayshader maps first (the 3D plot is just a change in the plot command from plot_map() to plot_3d()), and to do that properly I had to get better at working with route files and elevation matrices. Within the earlier chapters, I do try to follow the route I took learning about each topic, rather than presenting things in the order an academic treatment or traditional teaching route might follow: the point of the resource is not to “teach” linear geo hacking in a formal way, it’s a report of how I learned it, with some back-filled “really useful to know this” pointers added to earlier stages as I realised I needed them for later things.

Something else you may note about the individual chapters is that there are chunks of repetition of code from earlier on: this is deliberate. The book is a personal reference manual for me, so when I refer back to it for how to do something in the future, there’s enough to get going (hopefully!) without having to keep referring explicitly to too many early chapters.

Another observation: I see this sort of personal learning as availing myself of (powerful) ideas or techniques that are out there that other people have already figured out, ideas or tools or techniques that can help me do a particular thing that I want to do, or make sense of a particular thing that I can’t get my head round (i.e. that help me (help myself) understand the how or the why of a particular thing). I don’t want to be taught. I want enough that I can use and learn from. In my personal learning journey, I’ll come to see why some things that were really handy or useful to help me get started may not be the best way of doing something as I get more skilled, but the more advanced idea would have hindered my learning journey if it had been forced on me. (When I see a new function with dozens of parameters, I strip it down to what I think is all I need to get it to work, then start to perhaps add parameters back in…)

As teachers, we are often sort of experts, and follow a teaching line based on our expert knowledge, and what we know is good for folk to know foundationally, or that follows a canonical textbook line. But as a curiosity driven personal learner, I chase all manner of desire lines, sometimes having to go around an obstacle I can’t yet figure out, sometimes having to retrace my steps, sometimes having to go back to the beginning to collect something I didn’t realise I’d actually need.

I don’t care about the canon or the curriculum. I want to know how, then I want to know why, and at some point I may come to understand “oh yeah, it would have been really handy to have known that at the start”. But whilst teaching is often about making sure everyone is prepared at each step for the step that comes next, learning for me is about heading out into the unknown and picking up stuff that’s useful as I find I need it. And that includes picking up on the theory.

For example, Finding the Racing Line collates a set of very mathematical references around finding optimal racing lines that I’ll perhaps pick into for nudges and examples, and at times for blind copying without understanding, if it helps once I start to try to get my head round the lines rally drivers take round corners. Then I’ll go back to the pictures and equations and try to make sense of them once I’ve got to a position where things maybe work (eg visualised possible routes round a corner), and see whether I can figure out why and how, and whether I can make them work better. It may take years to understand the papers, if ever (I’ve been reading Racecar Engineering magazine for 15 years and most of it still doesn’t make much sense to me…), but I’ll pick the bits that look useful, and use the bits I can, and maybe go away to learn a bit more about something else that helps me then use a bit more of the papers, and so on. But doing a maths course, or a physics course, wouldn’t help, because the teaching line would probably not be my curiosity driven learning line.

For me, playful curiosity is the driver that allows you stick at a problem till you figure it out — but why doesn’t it work? — or at least get into a frame of mind where you can just ignore it (for now) or park it until you figure something else out, or whatever… I’m not sure how the play relates to curiosity, or curiosity to play, but together they allow you to be creative and give you the persistence you need to figure stuff out enough to get stuff done…

Visualising Rally Route Stages (with help from rayshader and some ecologists…)

Inspired by some 3D map views generated using the rayshader and rgl R packages, I wondered how easy it would be to render some 3D maps of rally stages.

It didn’t take too long to get a quick example up and running but then I started wondering what else I could do with route and elevation data. And it turns out, quite a lot.

The result of my tinkerings to date is at rallydatajunkie.com/visualising-rally-stages. It concentrates solely on a "static analysis" of rally routes: no results, no telemetry, just the route.

Along the way, it covers the following topics:

  • using R spatial (sp) and simple features (sf) packages to represent routes;
  • using the leaflet, mapview and ggplot2 packages to render routes;
  • annotating and splitting routes / linestrings;
  • downloading elevation rasters using elevatr;
  • using the raster package to work with elevation rasters;
  • a review of map projections;
  • exploring various ways of rendering rasters and annotating them with derived terrain features;
  • rendering elevation rasters in 2D using rayshader;
  • an aside on converting images to elevation rasters;
  • rendering and cropping elevation rasters in 3D using rayshader;
  • rendering shadows for particular datetimes at specific locations (suncalc);
  • stage route analysis: using animal movement trajectory analysis tools (trajr, amt, rLFT) to characterise stage routes;
  • stage elevation visualisation and analysis (including elevation analysis using slopes);
  • adding OpenStreetMap data including highways and buildings to maps (osmdata);
  • steps towards creating a road book / tulip map by mapping stage routes onto OSM road networks (sfnetworks, dodgr).

Along the way, I had to learn various knitr tricks, eg for rendering images, HTML and movies in the output document.

The book itself was written using Rmd and then published via bookdown and Github Pages. The source repo is on Github at RallyDataJunkie/visualising-rally-stages.

Tracking Objects in Physics Experiment Videos

Via the Twitterz, @voiceofrally linked to a blog post on sub-zero effects on rally cars relating to the upcoming Arctic Rally. Poking around the blog — wrcwings.tech — I came across a fascinating post on periodicity in wake turbulence, as seen in WRC rally car dust trails (Dust flow visualization images from Rally Portugal).

Hmmm.. I don’t need any more side projects, particularly related to things I know nothing about, like video image analysis, but I wondered if there are any tools out there that could do point tracking in video feeds of such noisy points…

I’m not sure if you can or not because I got sidetracked from my search almost immediately by Tracker, a seemingly long-lived (i.e. several years old and still apparently maintained) Java application for video analysis and physics modeling.

In its basic form, you can import a video and step through the frames, manually marking tracked points along the way. A coordinate system frame of reference can be defined, and data points grabbed accordingly. A range of data analysis tools let you analyse and chart the data you have collected as required.

So far, so brilliant. This allows learners to engage in the measurement and analysis part of the experimental process in a "digital humanities" style workflow. (Apps to support the coding of observational / behaviour data from video have been around for ages; for a recent example, see something like BORIS (Behavioral Observation Research Interactive Software).)

But manual coding can be time consuming after a while, and once you’ve started to develop a tacit / visceral feel for the process, what it involves, what sorts of mistakes / errors you can make, and what those mistakes look like in the data, then you can move on to automating parts of the data collection, perhaps in other situations, and spend more time on the analysis part of the experimental process.

And it seems like Tracker can help there too…

If I was in a position of having to be a home educator, I think this tool might be a good one to have in the physics toolbox…

…along with some videos from the maestro of physics education at all educational levels, Richard Feynman…

(Shorter clips from the Feynman video can also be found on Youtube, but I fail to see how anyone, even with the shortest attention span, could fail to remain engaged by Feynman’s ever faster flitting between curiosity driven "and then I wondered…" observations…)

Finding the Racing Line

Over the last couple of weeks, I’ve been tinkering with various ways of visualising rally stage route data downloaded as KML files. The nature of the routes is such that the linestring can often describe quite a ragged route, made up as it is of a series of concatenated straight line segments.

Whilst looking for ways of smoothing routes — a recommended approach appears to be to use a Savitzky-Golay smoothing filter (?!) — I stumbled across various papers and code repositories relating to racing line optimisation.
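For the record, a Savitzky-Golay filter is straightforward to apply in R via the signal package’s sgolayfilt(), smoothing each coordinate series independently. A minimal sketch (the smooth_route name and the default polynomial order p and odd window length n are my own choices):

```r
library(signal)

# Smooth each coordinate series with a Savitzky-Golay filter:
# fit a degree-p polynomial over a sliding window of n samples
smooth_route <- function(x, y, p = 3, n = 11) {
  data.frame(x = sgolayfilt(x, p = p, n = n),
             y = sgolayfilt(y, p = p, n = n))
}
```

Smoothing the coordinates independently works best when the route points are roughly evenly spaced along the linestring, so resampling the route at a fixed step first may help.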

One handy repo, TUMFTM/laptime-simulation, included a Python code utility for smoothing a circuit given the track centerline retrieved from OpenStreetMap. If you give it a non-closed route as the input linestring, it currently closes it with a straight line connecting the two ends, but it would be useful to try to tweak it to work with non-closed routes (I opened a related issue).

For a general review of approaches to lap time simulation, see Lap time simulation: Comparison of steady state, quasi-static and transient racing car cornering strategies, Blake Siegler, Andrew Deakin & David Crolla, University of Leeds, SAE TECHNICAL paper series, 2000-01-3563. See also Blake Siegler’s PhD thesis, 2002, from the University of Leeds, Lap Time Simulation for Racing Car Design.

Whilst many of the raceline optimisation papers are a bit beyond me (?!), skimming through them and looking at the pictures may provide further ideas about what sorts of thing may be of interest in terms of rally stage route profiles. Where papers have related code repositories, this could also serve as a useful "for real" tutorial guide in how to convert differential equations as described in a paper into differential equations modeled in code.

I’ve added various papers to my reading list, and for want of anywhere else to keep a record of them, thought I’d post them here.

But first, an introduction to corners….

And to racing lines…

For a more detailed look at smoothing trajectories and finding racing lines, albeit in circuit racing, see for example:

It would be interesting to know how circuit racing lines and rally racing lines might differ in terms of optimisation strategy and physics models.

For example, here’s a take on rally racing lines:

And here’s a tutorial on some of the maths (along with a related blog post and PDF):

The associated article also refers to a couple of PhD theses:

Readings wise, this looks like it could be relevant to that: Minimum time vs maximum exit velocity path optimization during cornering, Velenis, E. and Tsiotras, P., 2005 In 2005 IEEE international symposium on industrial electronics (pp. 355-360).

Surface type and surface evolution (i.e. road order effect) as well as weather may also have a significant effect. For example:

Other rally related papers include:

For general techniques for analysing race car open telemetry data:

PS in passing, an interesting looking blog on WRC aero: wrcwings.tech.

PPS Although slightly off topic, fascinating nonetheless, from the Honda R&D website, register and check out their F1 technical review https://www.hondarandd.jp/summary.php?sid=23&lang=en

Student forum tech support…

“I have a problem, it’s all broken, can you fix it?”
“Possibly, if you tell us what the problem is…”

… doesn’t explain problem; tease info out of them, like blood out of a stone, over three or four replies…

Try these steps…
It still doesn’t work…
Did you follow the steps?
Not all of them…
That’s why it doesn’t work. Try again.

It doesn’t work…
Did you follow the steps?
Yes.

Really?
No. I did some of them before so I didn’t do them all again.

That’s why it doesn’t work. Try again:
Do them all, in the order we suggest…

…tumbleweed…

For some reason it seems to be working now, LOL…

(Re)Discovering Written Down Legends and Tales of the Isle of Wight

One of the things I’d been hoping to do last year was learn a few Island folklore tales for telling at Island Storytellers sessions. The Thing put paid to those events, of course, but as a sort of new year resolution, I’ve started digging.

There are a few well worn island tales that appear in pretty much every “tales of the Wight” collection, however it’s themed (smugglers, ghosts, legends, folklore, wrecks, etc) and I guess tales that people still tell within families, so to not just rehash every other story, I figure I need a new way in to some of them.

So I’ve started trying to work up a pattern that takes a place, a time, either a bit of law or a bit of lore, and one or more events as a basis for “researching” a story, from which I can generate:

a) the simple telling, which in many cases may appear on the surface to be a rehash of all the other tellings of the same story;

b) a deeper layer that colours each bit of the story for me and provides more hooks for how to remember it.

Using the place is important because it means I can start to anchor things in a memory palace based on the island. Using the date also provides an opportunity to hook things in the memory palace into a temporal layer that allows stories in the same time period to colour and link to each other, as well as stories in the same place to colour the place over time. At some point, I daresay characters may also become pieces in the memory palace.

As far as digging around the stories goes, I’ve started looking for primary and old-secondary resources. Primary in the form of original statutes, places and photos (I intend to visit each location as I pull the pieces together to help situate the story properly), court reports (if I can find them!) etc. And old-secondary sources in the form of old books that tell the now familiar, perhaps even then familiar, tales but from the different historical context of the time of writing.

So for example, there’s a wealth of old tourist guides to the Island, going back a couple of hundred years or so, including the following, which can all be found via the Internet Archive or Google Books:

  • The Isle of Wight: its towns, antiquities, and objects of interest, 1915?
  • Legends and Lays of the Wight, Percy Stone, 1912
  • The Undercliff Of The Isle Of Wight Past And Present, Whitehead, John L. 1911
  • Isle of Wight, Moncrieff, A. R. Hope & Cooper, A. Heaton, 1908
  • Steephill Castle, Ventnor, Isle of Wight, the residence of John Morgan Richards, Esq.; a handbook and a history, Marsh, John, 1907
  • A Driving Tour in the Isle of Wight: With Various Legends and Anecdotes, Hubert Garle, 1905
  • The Isle of Wight, George Clinch, 1904 (2nd edition 1921)
  • The New Forest and the Isle of Wight, Cornish, C. J., 1903
  • A pictorial and descriptive guide to the Isle of Wight, Ward, Lock and Company, ltd, 1900
  • The Isle of Wight, Ralph Darlington 1898
  • Letters, archaeological and historical relating to the Isle of Wight, Edward Boucher James, 1896
  • Fenwick’s new and original, poetical, historical, & descriptive guide to the Isle of Wight, George Fenwick, 1885
  • A visit to the Isle of Wight by two wights, Bridge, John, 1884
  • Jenkinson’s practical guide to the Isle of Wight, Henry Irwin Jenkinson, 1876
  • Briddon’s Illustrated Handbook to the Isle of Wight, G. Harvey Betts, 1875
  • Nelson’s Handbook to the Isle of Wight: Its History, Topography, and Antiquities, William Henry Davenport Adams, 1873
  • The tourist’s picturesque guide to the Isle of Wight, George Shaw, 1873
  • Mason’s new handy guide to the Isle of Wight, James Mason, 1872
  • Black’s Picturesque Guide to the Isle of Wight, 1871
  • The Isle of Wight, James Redding Ware, 1871
  • Methodism in the Isle of Wight: its origin and progress down to the present times, Dyson, John B 1865
  • The Isle of Wight, a guide, Edmund Venables Rock, 1860
  • The pleasure visitor’s companion in making the tour of the Isle of Wight, pointing out the best plan for seeing in the shortest time every remarkable object, Brannon, 1857
  • Barber’s picturesque guide to the Isle of Wight, Thomas Barber, 1850
  • Bonchurch, Shanklin & the Undercliff, and their vicinities, Cooke, William B., 1849
  • Glimpses of nature, and objects of interest described during a visit to the Isle of Wight, Loudon, Jane, 1848
  • Owen Gladdon’s wanderings in the Isle of Wight, Old Humphrey, 1846
  • A topographical and historical guide to the Isle of Wight, W.C.F.G. Sheridan, 1840
  • Vectis scenery : being a series of original and select views, exhibiting picturesque beauties of the Isle of Wight, with ample descriptive and explanatory letter-press, Brannon, George, 1840
  • The Isle of Wight: its past and present condition, and future prospects, Robert Mudie 1840
  • Tales and Legends of the Isle Of Wight, Abraham Elder, 1839
  • The Isle of Wight Tourist, and Companion at Cowes, Philo Vectis, 1830
  • The beauties of the Isle of Wight, 1826
  • A historical and picturesque guide to the Isle of Wight, John Bullar, 1825
  • A companion to the Isle of Wight; comprising the history of the island, and the description of its local scenery, as well as all objects of curiosity, Albin, John, 1823
  • The delineator; or, A description of the Isle of Wight, James Clarke, 1822
  • A journey from London to the Isle of Wight, Pennant, Thomas, 1801
  • A Tour to the Isle of Wight (two volumes), Charles Tomkins, 1796
  • The history of the Isle of Wight; military, ecclesiastical, civil, & natural: to which is added a view of its agriculture, Warner, Richard, 1795
  • Tour of the Isle of Wight (two volumes), Hassell, John 1790
  • The History of the Isle of Wight, Richard Worsley, 1781–1785

Many of the above recount the same old, same old stories; but from a quick skim, there is often a slightly different emphasis or bit of colourful interpretation.

But the tours also include occasional new stories, as well as illustrations, fragments of primary material or commentary, and/or references to the same (which will hopefully give me new ratholes to chase down :-).

Many of them also appear to have a fondness for anecdotes about the weather, architecture, landscape, and people encountered, so I’m hopeful of finding some new-to-me stories in there too…

…such as why there was an Act of James I posted in the entrance to Godshill Church “which enacts that every female who unfortunately intrudes on the parish a second illegitimate child shall be liable to imprisonment and hard labour in Bridewell for six months”…

Insecure, But Anyway… RPi Web Terminal

A simple http web terminal offering ssh access into a Raspberry Pi…

# In RPi terminal
sudo apt install libffi-dev
pip install webssh

# Run with:
wssh --fbidhttp=False --port=8000

Then, in a browser, go to raspberrypi.local:8000 and log in to the raspberrypi.local host with the default credentials (user pi, password raspberry), or whatever credentials you have set…

Insecure as anything, but a quick way to get ssh terminal access if you don’t have another terminal handy.

Next obvious steps would be to try to run the service in the background and ideally run it as a service. The security should probably also be tightened up.
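For the “run it as a service” step, one approach would be a minimal systemd unit along the following lines. Note this is a sketch, not a tested recipe: the unit name, user and ExecStart path are assumptions (check where pip actually installed the wssh script, e.g. via which wssh):

```ini
# Hypothetical /etc/systemd/system/webssh.service
[Unit]
Description=webssh browser-based SSH terminal
After=network.target

[Service]
User=pi
# Path assumes a pip --user install for the pi user; adjust to taste
ExecStart=/home/pi/.local/bin/wssh --fbidhttp=False --port=8000
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then something like sudo systemctl enable --now webssh should start it in the background and on each boot.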

Note that another alternative is to run a Jupyter server, which will provide terminal access and some simple auth on the front end, though you’d be limited to running with the permissions associated with the notebook user.
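For the Jupyter route, something along these lines should do it (assuming pip puts the jupyter script on your path; the port is arbitrary):

```shell
# Install and run a Jupyter notebook server on the RPi
pip install jupyter
jupyter notebook --ip=0.0.0.0 --port=8888
# Then visit raspberrypi.local:8888, log in with the token printed
# in the console, and open New > Terminal for shell access
```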

[Ah, this looks like it has steps for getting a service defined, as well as creating a local SSL certificate: https://blog.51sec.org/2020/07/python-development-installation-on.html ]

PS see also webmin: https://sbcguides.com/install-webmin-on-raspberry-pi/ or https://thedreamingdad.com/install-webmin-raspberry-pi/ (although this takes up 300MB+ of space… )

That said, for webmin, Chrome will tell you to f**k off and won’t let you in because the SSL certs will be off… Because Google knows best and Google decides what mortals can and can’t see; and as with everyone else who works in ‘pootahs and IT, it’s not in their interest for folk to have an easy way in to accessing compute via anything other than regulated institutional or monetised services. I f*****g hate people who block access to compute more than I f*****g hate computers.

Running lOCL

So I think I have the bare bones of a lOCL (local Open Computing Lab) thing’n’workflow running…

I’m also changing the name… to VOCL — Virtual Open Computing Lab … which is an example of a VCL, Virtual Computing Lab, that runs VCEs, Virtual Computing Environments. I think…

If you are on Windows, Linux, Mac or a 32 bit Raspberry Pi, you should be able to do the following. First, install Docker for your platform, if you haven’t already.

Next, we will install a universal browser based management tool, portainer:

  • install portainer:
    • on Mac/Linux/RPi, run:

      docker run -d -p 80:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer-ce

    • on Windows, the start up screen suggests the following may be the way to go:

      docker run -d -p 80:8000 -p 9000:9000 --name=portainer --restart=always -v \\.\pipe\docker_engine:\\.\pipe\docker_engine portainer/portainer-ce

On my to do list is to customise portainer a bit and call it something lOCL-ish.

On first run, portainer will prompt you for an admin password (at least 8 characters).

You’ll then have to connect to a Docker Engine. Let’s use the local one we’re actually running the application with…

When you’re connected, select to use that local Docker Engine:

Once you’re in, grab the feed of lOCL containers: <s>https://raw.githubusercontent.com/ouseful-demos/templates/master/ou-templates.json</s> (I’ll be changing that URL sometime soon…: NOW IN OpenComputingLab/locl-templates Github repo) and use it to feed the portainer templates listing:

From the App Templates page, you should now be able to see a feed of example containers:

The [desktop only] containers can only be run on desktop (amd64) processors, but the others should run on a desktop computer or on a Raspberry Pi using docker on a 32 bit Raspberry Pi operating system.

Access the container from the Containers page:

By default, when you launch a container, it is opened on the address 0.0.0.0. This can be changed to the actual required domain via the Endpoints configuration page. For example, my Raspberry Pi appears on raspberrypi.local, so if I’m running portainer against that local Docker endpoint, I can configure the path as follows:

>I should be able to generate Docker images for the 64 bit RPi O/S too, but need to get a new SD card… Feel free to chip in to help pay for bits and bobs — SD cards, cables, server hosting, an RPi 8GB and case, etc — or a quick virtual coffee along the way…

The magic that allows containers to be downloaded to Raspberry Pi devices or desktop machines is based on:

  • Docker cross-builds (buildx), which allow you to build containers targeted to different processors;
  • Docker manifest lists that let you create an index of images targeted to different processors and associate them with a single "virtual" image. You can then docker pull X and depending on the hardware you’re running on, the appropriate image will be pulled down.

For more on cross built containers and multiple architecture support, see Multi-Platform Docker Builds. This describes the use of manifest lists which let us pull down architecture appropriate images from the same Docker image name. See also Docker Multi-Architecture Images: Let docker figure the correct image to pull for you.

To cross-build the images, and automate the push to Docker Hub, along with an appropriate manifest list, I used a Github Action workflow using the recipe described here: Shipping containers to any platforms: multi-architectures Docker builds.
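By way of illustration (this is a sketch, not the exact workflow I used; the image name is a placeholder and the action versions were current at the time of writing), a minimal buildx job in a Github Action might look something like:

```yaml
# Hypothetical fragment of .github/workflows/build.yml
name: multiarch-build
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # QEMU provides emulation so non-amd64 targets can be built
      - uses: docker/setup-qemu-action@v1
      - uses: docker/setup-buildx-action@v1
      - uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v2
        with:
          push: true
          # buildx builds an image per platform and pushes a single
          # manifest list under the one tag
          platforms: linux/amd64,linux/arm/v7
          tags: example/vce-jupyter:latest
```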

Here’s a quick summary of the images so far. Generally, they either run just on desktop machines (specifically, these are amd64 images, though I think that’s the default for Docker images anyway, at least until folk start buying the new M1 Macs…) or on both desktop machines and Raspberry Pis:

  • Jupyter notebook (oulocl/vce-jupyter): a notebook server based on andresvidal/jupyter-armv7l, because it worked on RPi; this image runs on desktop and RPi computers. I guess I can now start iterating on it to make a solid base Jupyter server image. The image also bundles pandas, matplotlib, numpy, scipy and sklearn. These seem to take forever to build using buildx, so I built wheels natively on an RPi and added them to the repo so the packages can be installed directly from the wheels. Python wheels are named according to a convention which bakes in things like the Python version and processor architecture that the wheel is compiled for.
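As an aside, that wheel naming convention (PEP 427) can be unpicked mechanically; a minimal sketch, with an illustrative armv7l wheel filename of the sort involved:

```python
# Sketch: decode a wheel filename per the PEP 427 naming convention:
# name-version[-build]-python_tag-abi_tag-platform_tag.whl
def parse_wheel_name(filename):
    """Split a wheel filename into its PEP 427 components."""
    stem = filename[:-len(".whl")]
    parts = stem.split("-")
    # An optional build tag means five or six components
    if len(parts) == 6:
        name, version, build, python_tag, abi_tag, platform_tag = parts
    else:
        name, version, python_tag, abi_tag, platform_tag = parts
        build = None
    return {"name": name, "version": version, "build": build,
            "python": python_tag, "abi": abi_tag, "platform": platform_tag}

info = parse_wheel_name("numpy-1.19.4-cp37-cp37m-linux_armv7l.whl")
print(info["python"], info["platform"])  # cp37 linux_armv7l
```

So a wheel built natively on a 32 bit RPi under Python 3.7 carries cp37 and linux_armv7l tags, and pip will only install it on a matching interpreter and architecture.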

  • the OpenRefine container should run absolutely everywhere: it was built using support for a wide range of processor architectures;

  • the TM351 VCE image is the one we shipped to TM351 students in October; desktop machines only at the moment…

  • the TM129 Robotics image is the one we are starting to ship to TM129 students right now; it needs a rebuild because it’s a bit bloated, but I’m wary of doing that with students about to start; hopefully I’ll have a cleaner build for the February start;

  • the TM129 POC image is a test image to try to get the TM129 stuff running on an RPi; it seems to, but the container is full of all sorts of crap as I tried to get it to build the first time. I should now try to build a cleaner image, but I should really refactor the packages that bundle the TM129 software first because they distribute the installation weight and difficulty in the wrong way.

  • the Jupyter Postgres stack is a simple Docker Compose proof of concept that runs a Jupyter server in one container and a PostgreSQL server in a second, linked container. This is perhaps the best way to actually distribute the TM351 environment, rather than the monolithic bundle. At the moment, the Jupyter environment is way short of the TM351 environment in terms of installed Python packages etc., and the Postgres database is unseeded.
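As a sketch, the sort of docker-compose.yml that portainer would run as a stack for that proof of concept might look something like the following; the image names, token, password and port mappings are illustrative, not the actual TM351 settings:

```yaml
# Hypothetical two-container stack: Jupyter server + PostgreSQL
version: "3"
services:
  jupyter:
    image: jupyter/minimal-notebook
    ports:
      - "8888:8888"
    environment:
      - JUPYTER_TOKEN=letmein   # illustrative token only
    depends_on:
      - postgres
  postgres:
    image: postgres:12
    environment:
      - POSTGRES_PASSWORD=postgres   # illustrative password only
    ports:
      - "5432:5432"
```

From a notebook in the jupyter container, the database is then reachable on the postgres hostname courtesy of Compose’s internal networking.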

  • TM351 also runs a Mongo database, but there are no recent or supported 32 bit Mongo builds any more, so that will have to wait till I get a 64 bit O/S running on my RPi. A test demo with an old/legacy 32 bit Mongo image did work okay in a docker-compose portainer stack, and I could talk to it from the Jupyter notebook. It’s a bit of a pain because it means we won’t be able to have the same image running on 32 and 64 bit RPis. And TM351 requires a relatively recent version of Mongo (old versions lack some essential functionality…).