TO DO – Automated Production of a Screencast Showing Evolution of a Web Page?

Noting @edsu’s Web histories approach, which describes Richard Rogers’ Doing Web history with the Internet Archive: screencast documentaries, I wonder how hard it would be to write a simple script that automates the collection of screenshots from a web page’s timeline in the Wayback Machine and stitches them together into a movie?
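
As a first step, something like the following (untested) sketch could pull a list of capture timestamps for a page from the Wayback Machine’s CDX API – the example URL and the “roughly one capture a month” collapse setting are just placeholder choices:

```python
# Sketch: list Wayback Machine captures of a page via the CDX API
import requests

def wayback_snapshots(url, collapse="timestamp:6"):
    """Return (timestamp, snapshot_url) pairs for captures of `url`.

    collapse="timestamp:6" keeps roughly one capture per month
    (the first six digits of a capture timestamp are YYYYMM).
    """
    resp = requests.get(
        "http://web.archive.org/cdx/search/cdx",
        params={
            "url": url,
            "output": "json",
            "fl": "timestamp,original",
            "filter": "statuscode:200",
            "collapse": collapse,
        },
    )
    rows = resp.json()
    # The first row is the column header; the rest are captures
    return [
        (ts, f"https://web.archive.org/web/{ts}/{orig}")
        for ts, orig in rows[1:]
    ]

if __name__ == "__main__":
    for ts, snapshot in wayback_snapshots("blog.ouseful.info"):
        print(ts, snapshot)
```

Each (timestamp, snapshot URL) pair then gives us one frame to grab, in date order.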

It’s not hard to find code fragments describing how to turn a series of image files into a video (for example, Create a video in Python from images or Combine images into a video with Python 3 and OpenCv 3), and you can grab a screenshot from a web page using various web testing frameworks (e.g. Capturing screenshots of website with Python or Grabbing Screenshots of folium Produced Choropleth Leaflet Maps from Python Code Using Selenium).
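
Pulling those fragments together, a minimal (and again untested) sketch might look something like this – headless Chrome via Selenium for the screenshots, OpenCV’s VideoWriter for the stitching, with the window size, frame rate and file paths all arbitrary choices:

```python
# Sketch: screenshot a list of URLs, then stitch the stills into a movie
import os
import cv2
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

def grab_screenshots(urls, outdir="shots", size=(1024, 768)):
    """Save a PNG per URL and return the file paths in order."""
    os.makedirs(outdir, exist_ok=True)
    opts = Options()
    opts.add_argument("--headless")
    driver = webdriver.Chrome(options=opts)
    driver.set_window_size(*size)
    paths = []
    for i, url in enumerate(urls):
        driver.get(url)
        path = os.path.join(outdir, f"frame_{i:04d}.png")
        driver.save_screenshot(path)
        paths.append(path)
    driver.quit()
    return paths

def stills_to_video(paths, outfile="timeline.mp4", fps=2):
    """Write the images out as frames of a short movie."""
    first = cv2.imread(paths[0])
    height, width, _ = first.shape
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter(outfile, fourcc, fps, (width, height))
    for path in paths:
        frame = cv2.imread(path)
        # Resize in case a page rendered at a slightly different size
        writer.write(cv2.resize(frame, (width, height)))
    writer.release()
```

Chain the two together with the snapshot list from the CDX sketch above and that’s pretty much the whole pipeline.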

So, a half-hour hack? But I don’t have half an hour right now :-(

PS A possibly handy related tool for downloading stuff in bulk from the Wayback Machine: this Wayback Machine Downloader script.

PPS Hmm… it should be easy enough to take a similar approach to creating “Wikipedia Journeys” – grab a random Wikipedia page, screenshot it, follow a random link from within the page, screenshot that, and so on. To simplify matters there, there’s the Python wikipedia package.
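
A quick sketch of that idea, assuming the wikipedia package’s random(), page() and links behave as documented – feed the resulting URLs into the screenshot-and-stitch routines above and you’d have the journey as a movie:

```python
# Sketch: a "Wikipedia Journey" as a random walk over page links
import random
import wikipedia

def wikipedia_journey(steps=5):
    """Return the URLs visited on a short random walk across Wikipedia."""
    title = wikipedia.random()
    urls = []
    for _ in range(steps):
        try:
            page = wikipedia.page(title)
        except wikipedia.DisambiguationError as e:
            # Landed on a disambiguation page; pick one of its options
            title = random.choice(e.options)
            continue
        urls.append(page.url)
        if not page.links:
            break
        title = random.choice(page.links)
    return urls

if __name__ == "__main__":
    for url in wikipedia_journey():
        print(url)
```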
