Fragment: Keeping an Eye on What’s Trackable, Where, and When — Tools for Data Protection Officers as well as the Rest of Us?

Way back when, in the early days of FOI and then “open data”, I naively believed that open data and FOI contact points in organisations would act as advocates for those of us outside the organisation trying to get access to information from inside it. The reality seems to be that, as appointees and employees of the organisation, those individuals instead become gatekeepers, often looking for ways to defend the organisation against such requests rather than trying to open the organisation up to them.

When it comes to those appointed to oversee data protection and data privacy issues, I would like to think that whoever takes on such a role sees it as that of an advocate for those who work for or come into contact with the organisation, as well as an opportunity to aggressively defend the rights of those outside the organisation against the unnecessary and disproportionate collection, processing and sharing of data about them by the organisation. That said, I suspect in many cases the role is more about trying to make sure the company doesn’t get sued under GDPR.

Whilst it would also be nice to think that the data protection person is a geek w/ skillz who can hack their way around an organisation’s systems and websites, poking around to find things that shouldn’t be there and demonstrating how other things can be potentially misused, I suspect they aren’t.

So do we need tools for such officers to keep tabs on their organisation, or perhaps tools to help privacy advocates provide oversight of them?

Poking around traffic generated as I visited the OU VLE a week or two ago, I saw a couple of requests I thought were unnecessary and raised an internal query about them. But it also got me thinking…

The requests appear to be made from tags loaded into the web page using the Google Tag Manager. The Google Tag Manager code appears to be delivered via a gtm.js script with the structure:

{
  "resource": {
    "version": "XXX",
    "macros": [ {} ],
    "tags": [ {} ],
    "predicates": [{}],
    "rules": [ {} ]
  },
  "runtime": [ [], [] ]
}

followed by a chunk of Javascript code.

The gtm.js file includes rules of the form [["if",1,31],["unless",34,35],["add",51]] that appear to index into the predicates list in the conditional part (logically or’d tests?) and then add a particular tag, which may reference a macro, when the condition is met.

Predicates take the form:

      "function":"_re",
      "arg0":["macro",0],
      "arg1":"^http(s)?:\\\/\\\/(www\\.)?open.ac.uk\\\/?(index.html)?($|\\?)",
      "ignore_case":true
    }
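
Reading that particular predicate, it looks like a case insensitive regular expression test of macro 0 (presumably the page URL) against an open.ac.uk homepage pattern. In Python terms, something like the following (my interpretation of the structure, not anything taken from GTM documentation):

import re

# My reading of the "_re" predicate: does macro 0 (the page URL?) match the arg1 pattern, ignoring case?
pattern = r"^http(s)?:\/\/(www\.)?open.ac.uk\/?(index.html)?($|\?)"
print(bool(re.search(pattern, "https://www.open.ac.uk/", re.IGNORECASE)))  # True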

Tags can take a variety of forms, including:

      {
      "function":"__html",
      "once_per_event":true,
      "vtp_html":"\n\u003Cscript type=\"text\/gtmscript\"\u003E!function(b,e,f,g,a,c,d){b.fbq||(a=b.fbq=function(){a.callMethod?a.callMethod.apply(a,arguments):a.queue.push(arguments)},b._fbq||(b._fbq=a),a.push=a,a.loaded=!0,a.version=\"2.0\",a.queue=[],c=e.createElement(f),c.async=!0,c.src=g,d=e.getElementsByTagName(f)[0],d.parentNode.insertBefore(c,d))}(window,document,\"script\",\"https:\/\/connect.facebook.net\/en_US\/fbevents.js\");fbq(\"init\",\"870490019710405\");fbq(\"track\",\"PageView\");\u003C\/script\u003E\n\u003Cnoscript\u003E\n\u003Cimg height=\"1\" width=\"1\" src=\"https:\/\/www.facebook.com\/tr?id=870490019710405\u0026amp;ev=PageView\n\u0026amp;noscript=1\"\u003E\n\u003C\/noscript\u003E\n\n\n",
      "vtp_supportDocumentWrite":false,
      "vtp_enableIframeMode":false,
      "vtp_enableEditJsMacroBehavior":false,
      "tag_id":51
    }

And macros take the form:

{
      "function":"__gas",
      "vtp_cookieDomain":"auto",
      "vtp_doubleClick":false,
      "vtp_setTrackerName":false,
      "vtp_useDebugVersion":false,
      "vtp_useHashAutoLink":false,
      "vtp_decorateFormsAutoLink":false,
      "vtp_enableLinkId":false,
      "vtp_enableEcommerce":false,
      "vtp_trackingId":"UA-4391747-17",
      "vtp_enableRecaptchaOption":false,
      "vtp_enableUaRlsa":false,
      "vtp_enableUseInternalVersion":false
    }

So what I’m wondering is: is there an offline, static analyser for gtm.js scripts that would allow someone to point to a website from which a gtm.js script can be downloaded and then lets them generate human readable reports that:

  • identify in general which trackers are loaded by which rules on which events with what arguments; and
  • identify which trackers are loaded by which rules on which events with what arguments for a specific URL.

This would then allow a university data protection officer, for example, or a student, to provide a URL, such as a URL into the VLE, and get a simple, statically generated report back that shows what trackers are loaded when visiting that environment.

Which would be simpler than running Ghostery, or opening developer tools in a wide-open-by-default browser like Chrome (rather than the rather more privacy-defending Firefox, for example) and searching the network logs for incriminating evidence.

Google Tag Manager has been around for some time, and I’m assuming that organisational web folk have read each line of code in the gtm.js they load into users’ browsers to make sure that it’s not doing anything untoward. (That everyone else uses it is no excuse, unless perhaps it meets some sort of international software quality standard such that folk can just embed it without looking at it.)

So I’m wondering:

  • is there a line by line annotated version of the code at the bottom of the gtm.js script anywhere?
  • are there line by line examples out there of a simple gtm.js script and how to read it / analyse it (so eg walking through: this rule says this, which adds that tag, which is then parsed this way?)
  • are there static gtm.js analysers out there that generate the static reports suggested above and that allow folk to analyse arbitrary gtm.js scripts that are loaded into their browser in many of the sites they visit?
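
In the meantime, here’s a minimal sketch of the sort of report I have in mind, assuming the JSON resource block has already been extracted from gtm.js and parsed; how the "add" entries resolve to tags is my guess (they may reference tag_id values rather than list indices), so treat it accordingly:

import json

def gtm_report(resource):
    # Crude report: which rules test which predicates and add which tags
    predicates = resource.get('predicates', [])
    tags = resource.get('tags', [])
    for i, rule in enumerate(resource.get('rules', [])):
        print('Rule {}:'.format(i))
        for clause in rule:
            op, args = clause[0], clause[1:]
            if op in ('if', 'unless'):
                for p_idx in args:
                    p = predicates[p_idx]
                    print('  {} predicate {}: {} on {} vs {}'.format(
                        op, p_idx, p.get('function'), p.get('arg0'), p.get('arg1')))
            elif op == 'add':
                for t_idx in args:
                    t = tags[t_idx]  # assumes a list index, not a tag_id
                    print('  adds tag {}: {} (tag_id: {})'.format(
                        t_idx, t.get('function'), t.get('tag_id')))

#resource = json.load(open('gtm_resource.json'))['resource']
#gtm_report(resource)

Extending that to resolve the macro references inside tags, and to filter the rules against a specific page URL, would get most of the way towards the sort of report described above.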

Running R Projects in MyBinder – Dockerfile Creation With Holepunch

For those who don’t know it, MyBinder is a reproducible research automation tool that will take the contents of a Github repository, build a Docker container based on requirements files found inside the repo, and then present the user with a temporary, running container that can serve a Jupyter notebook, JupyterLab or RStudio environment to the user. All at the click of a button.

Although the primary, default UI is the original Jupyter notebook interface, it is also possible to open a MyBinder environment into JupyterLab or, if the appropriate R packages are installed, RStudio.

For example, using the demo https://github.com/binder-examples/r repository, which contains a simple base R environment with RStudio installed, we can use MyBinder to launch RStudio running over the contents of that repository:

When we launch the binderised repo, we get — RStudio in the browser:

Part of the Binder magic is to install a set of required packages into the container, along with “content” documents (Jupyter notebooks, for example, or Rmd files), based on requirements identified in the repo. The build process is managed using a tool called repo2docker, and the way requirements / config files need to be defined can be found here.
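
For a simple R repo, the configuration can be as minimal as a couple of text files in the root of the repository: a runtime.txt that pins the R version and MRAN snapshot date used to install packages, and an install.R script listing the packages to install. Something along these lines (illustrative contents, not copied from the demo repo):

runtime.txt:

r-3.6-2019-09-15

install.R:

install.packages("ggplot2")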

To make building requirements files easier for R projects, the rather wonderful holepunch package will automatically parse the contents of an R project looking for package dependencies, and will then create a DESCRIPTION metadata file itemising the found R package dependencies. (holepunch can also be used to create install.R files.) Alongside it, a Dockerfile is created that references the DESCRIPTION file and allows Binderhub to build the container based on the project’s requirements.

For an example of how holepunch can be used in support of academic publishing, see this repo — rgayler/scorecal_CSCC_2019 — which contains the source documents for a recent presentation by Ross Gayler to the Credit Scoring & Credit Control XVI Conference. This repo contains the Rmd document required to generate the presentation PDF (via knitr) and Binder build files created by holepunch.

Clicking the repo’s MyBinder button takes you, after a moment or two, to a running instance of RStudio, within which you can open, and edit, the presentation .Rmd file and knit it (via knitr) to produce a presentation PDF.

In this particular case, the repository is also associated with a Zenodo DOI.

As well as launching Binderised repositories from the Github (or other repository) URL, MyBinder can also launch a container from a Zenodo DOI reference.

The screenshot actually uses the incorrect DOI…

For example, https://mybinder.org/v2/zenodo/10.5281/zenodo.3402938/?urlpath=rstudio.

Looking Up R / CRAN Package Maintainers With an ac.uk Affiliation

Trying to find an examiner for a particular PhD thesis relating to a rather interesting data structure for wrangling messy data tables, I wondered whether we might find a likely suspect amongst the R package maintainer community.

We can get a list of R package maintainers here and a list of package name / short descriptions here.

FWIW, here’s the code fragment:

import pandas as pd

# Package maintainers, from the CRAN check summary page
maintainers = pd.read_html('https://cran.r-project.org/web/checks/check_summary_by_maintainer.html')[0]
# Drop rows with no maintainer email
maintainers_email = maintainers.dropna(subset=[0])
# Preview maintainers with an .ac.uk email address (escape the dots so they aren't treated as regex wildcards)
maintainers_email[maintainers_email[0].str.contains(r'\.ac\.uk')][[0,1]]

# Package names and short descriptions
packages = pd.read_html('https://cran.r-project.org/web/packages/available_packages_by_name.html')[0]
packages

# Join the .ac.uk maintainers to their packages on the package name column
maintainers_email_acuk = maintainers_email[maintainers_email[0].str.contains(r'\.ac\.uk')][[0,1]]
maintainers_email_acuk.merge(packages, left_on=1, right_on=0)

See also: What Do you Mean You Write Code EVERY DAY?, examples of which I’ve just turned into a new blog category: WDYMYWCED.

Trying to Get Hold of UK Air Quality Data Via a Python API

It’s that time of year again for prepping the end of course assessment material for our TM351 Data Management and Analysis course (not that I typically have much to do with preparing such things…!).

The end of course assessment is typically framed as a data project that requires students to link several datasets and find interesting things to say about them. This final project is set up via a continuous assessment activity that introduces one of the datasets and gets students started working with it – exploring what the dataset looks like, getting it into a database, generating some basic charts from it and starting to formulate some questions around it.

As with many of the data activities, my preference is for ones that make use of national datasets with local relevance. This can add variety — if students compare data from three local authorities selected from across the UK, there’s a good chance they might select different locations — and it also provides them with the opportunity to carry out a data investigation for their local area using data that they may not have been aware even existed…

This year’s topic is likely to be bootstrapped around air quality data. Sites such as the London Air Quality Network make data available for London boroughs, but it’d be nice to be able to offer data with a more national scope.

Looking at Defra’s UK Air website, data does seem to be available for sites across the UK, but the download form is horrible, hugely restrictive on the amount of data you can download, and it is not obvious how to construct URLs that can be machine generated and used to programmatically download data.

Which is not ideal…

However, it does seem that an API exists for R users in the form of David Carslaw’s openair package. So how does that work, then???

Poking around in the code, it seems that sampling site metadata, as well as air quality sample data, is available as .RData files.

Hmm… a bit more poking in the code turns up some URL patterns, and a quick search for Python packages that can read .RData files without the need to install R turns up pyreadr.

So here’s a quick first attempt at a Python downloader for UK air quality data.
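
In outline, it’s something like the following; the base URL and file naming pattern are inferred from my reading of the openair source, so treat them as assumptions rather than a documented API:

#!pip3 install pyreadr
import pyreadr
from urllib.request import urlretrieve

# Assumed base URL for the .RData files used by openair's AURN importer
BASE_URL = 'https://uk-air.defra.gov.uk/openair/R_data/'

def get_rdata(filename):
    # Download an .RData file and return a dict of pandas dataframes, keyed by R object name
    fn, _ = urlretrieve(BASE_URL + filename, filename)
    return pyreadr.read_r(fn)

# Sampling site metadata (assumed filename)
metadata = get_rdata('AURN_metadata.RData')

# Hourly data for a single site and year, eg Marylebone Road (MY1) in 2018 (assumed naming pattern)
my1_2018 = get_rdata('MY1_2018.RData')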

The location IDs look a bit ad hoc/made up, but there is lat/long data, so it should be easy enough to call something like postcodes.io to find some rather more standardised administrative codes.

With a couple of tiny functions, it should be easy enough to grab data from the metadata dataframe to generate a simple ipywidgets powered UI that lets you select a local authority by name, perhaps pre-filtered to LAs within a particular selected region, and download just the data for that authority.

But that, as they say, is an exercise left for the reader…

Quick Way in to Hacking Legacy OU Course Materials Using Markdown

By some arcane process, OU course materials authored typically in MS Word are converted to an XML format (OU-XML) and then rendered variously to HTML for the Moodle website, ebook formats, and perhaps PDF (we don’t want to make it too easy for students to print off the materials…).

An internal project that ran for a couple of years (maybe a bit more) looking at more direct authoring workflows was shelved earlier this year. (I was banned from blogging about it whilst it was under development, so I’m afraid I don’t have screen shots to show what it looked like from the time I was given preview access.) As far as I know, the authoring tool was completely distinct from the one developed by the OU’s bastard offspring that is FutureLearn. Nowt like sharing.

One of the things I’m slated to do over the next few months is update, or possibly rewrite, a unit in a first year equivalent module.

My preferred way of authoring for some time has been to keep it simple and just use markdown.

So that’s what I’m probably going to do.

If there’s any griping or sniping that it doesn’t fit the OU workflow, I’ll just run it through pandoc to generate an MS Word docx version and hand that over.

(I’ve been saying *for years* we should have pandoc read/write filters for OU-XML (the most recent notes are here). It would have been a damn sight cheaper than the aborted authoring tool project and would have allowed authors to use a whole range of tools for creating their warez, with pandoc handling the conversion to OU-XML. And yes, I f**king know that some hand cleaning of the OU-XML would almost certainly have been required, but we’d have got a far better feeling for what sorts of document structures folk produce if they were allowed to use the tools that suit them. And authors’ shonky mark-up (including my own) *always* needs some fettling anyway: we already know that…)

So… markdown…

If I’m going to revise the current materials, I need to get them out of the current format and into markdown. I’ve previously started looking at an XSLT to convert OU-XML to markdown, eg as described in Fragment – OpenLearn Jupyter Books Remix; a copy of the current-ish XSLT, and some code fragments to grab and convert an example OU-XML document, can be found here.

But today, I thought of an even scruffier and quicker way…

Within the VLE, a single OU-XML source document is rendered across multiple HTML pages, along  with a navigation index:

A single HTML page view (for easier printing) is also available… Hmmm…there are plenty of HTML2markdown converters out there, aren’t there?

#!pip3 install markdownify
from bs4 import BeautifulSoup
from markdownify import markdownify as md

with open('Robotics study week 1 – Introduction_ View as single page.html', 'r') as f:
    # Let's just grab the HTML body...
    tree = BeautifulSoup(f.read(), 'lxml')
    body = tree.body
    txt = md(str(body))
    
with open('week1-markdownify.md','w') as f:
    # There'll still be script tag cruft, videos won't be embedded / linked etc
    # but it's enough to get started with and the diffs should be easy to see...
    f.write(txt)

The output is a bit flakey in parts, but most of the stuff I need is there.  Certainly, there’s more than enough of it in useable form for me to start using as an outline. Indeed, much of the work will be ripping out and replacing the huge chunks of content that are now rather dated.

I can also edit the markdown in a notebook environment using Jupytext, using metadata cells to highlight certain blocks of content with additional structural or semantic metadata, saving the metadata into the markdown document from where it could be processed (I’m not sure how it would turn up if the enhanced markup were converted to docx using pandoc, for example?).
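
As a quick check of the Jupytext route, the command line tool should be able to round-trip the markdown file generated above into a notebook and back again; a sketch, not a tested workflow:

#!pip3 install jupytext
# Convert the markdown document to a notebook for editing...
!jupytext --to notebook week1-markdownify.md
# ...and convert the edited notebook back to markdown
!jupytext --to markdown week1-markdownify.ipynb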

From what I saw of the aborted OpenCreate editor, it used a block/cell style metaphor for creating separate content elements within a page, so it’d also be interesting to compare the jupytext/metadata enhanced markdown, or even the notebook ipynb output format, with the OpenCreate document format / representation to see whether there are similarities in the block level semantic / structural markup.

Intercepting JSON HTTP Responses to Web Browser Page Requests Using MITMProxy

Coming back from a week or so away, the car let us down with a ruptured water hose which sent my confidence / mental state tanking, albeit with the AA managing a quick fix with some new-to-me water activated tape along the lines of this. (It’s bad enough requiring a call out, but the stress is multiplied when you live on an island!)

My reboot strategy was to have a quick play with data from the weekend’s WRC rally, but when my datagrabber failed, it tanked my mood further and resulted in an 18 hour not-moving / not eating manic coding stretch that I’m still bleary eyed from.

The problem stemmed from a couple of things that interacted enough to confuse me. One was a pandas update that changed the behaviour of the json_normalize function I was using to unpack JSON values, and the other was the behaviour of the WRC server I pull data from (probably in breach of terms and conditions) which erratically kept giving a NULL/404 response to valid requests.

I’m not sure if the server behaviour was a defensive measure against scraping on the part of the publisher or if it’s an issue with the cache service I’m pulling from (certainly, hitting the same URL could give a valid data response, then nothing for the next few hits, then a response again). I tweaked my Python requests scraper code by adding some header info to spoof a browser user-agent, as well as tweaking the request period to make it play a bit nicer, but still the erratic 404s appeared at an ever greater rate.

Loading pages via a browser works okay, with the JSON requests being handled correctly, so I could just scrape the HTML tables that I think the JSON is used to populate (else: why load it?), although I have noted in the past that the JSON data structures have more data fields than are displayed in the WRC live timing HTML tables.

Then I started wondering about how to automate the collection of those requests using a browser automation route, with Selenium handling page selections and something grabbing the data, perhaps from the browser devtools har archive (right click on a recorded entry in the devtools network listing to save all of them to a har archive).

The har archive itself is a JSON file, so that’s quite easy to work with, but the Chrome export (I think) is everything, not just filtered requests as in the screenshot above. Firefox seems to let you filter network items and just export filtered ones to a har file, which you can then open as a json file, filtering on the url to identify the request(s) of interest.
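
Pulling the requests of interest back out of an exported har file is then just a matter of filtering the JSON; a minimal sketch (the filename and filter string are made up for the example):

import json

# A har archive is just JSON under the hood
with open('wrc_livetiming.har') as f:
    har = json.load(f)

# Filter the recorded entries on the request URL
entries = [e for e in har['log']['entries'] if 'sasCache' in e['request']['url']]

for e in entries:
    # The response payload, where captured, lives in content.text (possibly base64 encoded)
    print(e['request']['url'], e['response']['status'])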

In passing, I also note that the har-extractor tool makes previewing life easier in the way it unpacks requests from a har archive into discrete files in a (nested) directory structure.

I also notice just now that the Firefox developer tools also seem to have a websocket sniffer, which could be handy… [UPDATE: I think that may be an extension I installed…]  FWIW, I also started trying to get my head around web sockets in a generic Python context when I was trying to come up with a simple MyBinder client (see here): A Minimal Python Client for MyBinder.

Whilst the Firefox route looked promising, I wasn’t sure how automatable it would be: whilst selenium-py would let me script lots of link clicking in the WRC site, I’m not sure it provides an API to browser dev tools?

I did find one tool I thought looked interesting, selenium-wire, but it seems to only capture headers, not payloads, of requests made from the selenium scripted browser:

#!pip3 install selenium-wire
from seleniumwire import webdriver 

# Create a new instance of the Firefox driver
driver = webdriver.Firefox()

# Go to a WRC live timing page
driver.get('https://www.wrc.com/en/wrc/livetiming/page/4175----.html')

# Access requests via the `requests` attribute
for request in driver.requests:
    if request.response:
        if 'sasCache' in request.path:
            print(
                request.path,
                request.response.status_code,
                request.response.headers,
                request.body,
                '\n'
            )

I can see how that might be handy for capturing the addresses of resources loaded by a page, but I want the actual gzipped JSON data that forms the request content for the resources I’m interested in…

UPDATE: seems like selenium-wire is totally up to the job:
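
By the look of it, the response bodies can be grabbed directly from the captured requests; something along the following lines should work (the attribute names are from my reading of the selenium-wire docs, so treat it as a sketch rather than tested code):

import gzip
from seleniumwire import webdriver

driver = webdriver.Firefox()
driver.get('https://www.wrc.com/en/wrc/livetiming/page/4175----.html')

for request in driver.requests:
    if request.response and 'sasCache' in request.path:
        body = request.response.body
        # The JSON payloads are gzipped, so unzip them if the headers say so
        if request.response.headers.get('Content-Encoding') == 'gzip':
            body = gzip.decompress(body)
        print(request.path, len(body))

driver.close()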

Poking around further, it seems the best approach is to use a proxy that can grab traffic as required. There are lots of partial clues as to what to use out there, many of them referring to browsermob and the BrowserMob Proxy Python client, but no full recipes.

Another proxy that looked a bit easier to use, with a Python base and more powerful in the way you can script it, is mitmproxy (“man-in-the-middle proxy”).

Again, the docs and recipes seem to be a bit scattered, so here’s a complete recipe that worked for me…

Start off by installing the proxy and getting it running. You can do this on the command line with:

pip3 install mitmproxy

mitmdump -w test1
#This will dump intercepted requests into
#    the file: test1

#close with: ctrl-c

On my local install, but not MyBinder?, I can actually run this as a background job from a notebook code cell using cell block magic:

%%script bash --bg
mitmdump -w test1

To stop the background process, we could look up the process number from a code cell and then kill that process by process ID (kill PROCESSID):

#Process numbers 
! ps -e | grep 'mitmdump' | awk '{print $1 " " $4}'

or let the (Linux) machine do it…

#Or to kill eg
!kill $(ps -e | grep 'mitmdump' | awk '{print $1}' )

In a notebook, we can then launch a (optionally, headless) selenium controlled browser:

from selenium import webdriver

PROXY = "localhost:8080" # IP:PORT or HOST:PORT

chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--proxy-server=%s' % PROXY)
chrome_options.add_argument("--headless") 

chrome = webdriver.Chrome(chrome_options=chrome_options)
chrome.get("https://www.wrc.com/en/wrc/livetiming/page/4175----.html")

chrome.close()

We can view the result using the mitmweb browser app (from the command line: mitmweb). (Note that I think this also runs the proxy…? Again, ctrl-c to kill it.)

We can load the file we collected within the web app and then filter the requests to ones of interest:

Selecting a request shows us the contents of the request response, which is to say: the JSON data I’m after…

So this is starting to look promising…

…even more so when we realise we can filter a collected set of resources using the mitmdump command, for example with a construction of the form:

mitmdump -nr test1 -w test4 "~u .*sasCacheApi.*"

which will filter the archived collection in test1 using any desired filters (eg. "~u .*sasCacheApi.*") to create the filtered set in test4.

We can also add filters when running mitmdump to collect requests. For example:

mitmdump -w test5 "~u .*sasCacheApi.*"

will only capture and dump intercepted requests from locations with the desired address pattern into the file test5.

I’ve now got a pattern that could be used to scrape lots of JSON files:

  • set up the mitmproxy with appropriate filters to collect just the files I want,
  • script selenium to load the desired web page and click through various bits of it to make sure all the resources I want are loaded *(not addressed here; I need to do a post on scraping with Selenium-py; for now, here’s an example of using it to [do some repetitive work](https://blog.ouseful.info/2019/01/21/bulk-notebook-uploads-to-nbgallery-using-selenium/)…)*,
  • and then… then what? Parse the resource collection, that’s what…

Here’s an initial fragment for how to do that.

First, we can preview the headers for intercepted resources:

from mitmproxy import io
from mitmproxy.net.http.http1.assemble import assemble_request

def response(flow):
    print(assemble_request(flow.request).decode('utf-8'))

with open('test4', "rb") as logfile:
    freader = io.FlowReader(logfile)
    for f in freader.stream():
        response(f)

We can inspect what has been captured by getting the state of a flow object:

f.get_state()

This actually returns a python dict, the keys for which we can easily preview: f.get_state().keys()

Of particular interest is the response:

f.get_state()['response']

We note that the content is compressed / gzipped, so we can uncompress that…

import gzip
text = gzip.decompress(f.get_state()['response']['content'])
text

All that remains now is to tweak the response previewer (the response(flow) function defined above) so that the iteration over the flows unzips and saves each response to a file. For example, something like:

import gzip
def response2(flow):
    fn = flow.get_state()['request']['path'].decode()
    fn = fn.split('=')[1].replace('%2F','_').replace('%3F','_').replace('%3D','_')
    print('Saving file: {}'.format(fn))
    with open('{}.json'.format(fn),'wb') as outfile:
        outfile.write( gzip.decompress(flow.get_state()['response']['content']) )

with open('test4', "rb") as logfile:
    freader = io.FlowReader(logfile)
    for f in freader.stream():
        response2(f)

I’m still not feeling right happy / in control, though… F****g Boris…

PS as to why scrape the data? For generating things like these Stage Charts for WRC Rally Sweden.

Neo4J Graph Database Running in MyBinder

Earlier today, I spotted this rather handy Global Witness repo that includes data ingest and analysis around the UK Persons of Significant Control register using Neo4J.

In part it reminded me of my own early explorations around the PSC register, as well as previous attempts at setting up Jupyter notebook and RStudio environments linked to Neo4j, such as Getting Started With the Neo4j Graph Database – Linking Neo4j and Jupyter SciPy Docker Containers Using Docker Compose and this one on Accessing a Neo4j Graph Database Server from RStudio and Jupyter R Notebooks Using Docker Containers.

I’ve also previously explored how to run Postgres server in a Binderised / MyBinder environment — Running a PostgreSQL Server in a MyBinder Container — so it seemed only natural to see if I could launch neo4J in a MyBinder environment too.

Setting things up to work with a Python client is easy enough — see this templated, Binderised repo — binder-neo4j — although at the moment I can’t seem to get the neo4j web UI to work with jupyter-server-proxy.
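
By way of example, the sort of thing the demo script runs is presumably along these lines, using the official neo4j Python driver (the connection details here are assumptions, not necessarily what the repo uses):

from neo4j import GraphDatabase

# Assumed connection details for the Neo4j server running inside the Binder container
driver = GraphDatabase.driver('bolt://localhost:7687', auth=('neo4j', 'neo4j'))

with driver.session() as session:
    # Create a couple of nodes and a relationship, then count the nodes
    session.run("CREATE (a:Person {name:'Alice'})-[:KNOWS]->(b:Person {name:'Bob'})")
    result = session.run('MATCH (n:Person) RETURN count(n) AS n')
    print(result.single()['n'])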

Now I’m wondering whether I should try to put together a Binderised repo that uses Sam Leon’s Global Witness repo scripts to ingest the PSC data into a Binderised repo to make running graph queries over the PSC data possible in a Mybinder environment…

View demo in MyBinder: https://mybinder.org/v2/gh/psychemedia/binder-neo4j/master?filepath=py%2Fneo4j-demo.py