Category: Anything you want

Robot Journalism in Germany

By chance, I came across a short post by uber-ddj developer Lorenz Matzat (@lorz) on robot journalism over the weekend: Robot journalism: Revving the writing engines. Along with a mention of Narrative Science, it namechecked another company that was new to me: [b]ased in Berlin, Retresco offers a “text engine” that is now used by the German football portal “FussiFreunde”.

A quick scout around brought up this Retresco post on Publishing Automation: An opportunity for profitable online journalism [translated] and their robot journalism pitch, which includes “weekly automatic Game Previews to all amateur and professional football leagues and with the start of the new season for every Game and detailed follow-up reports with analyses and evaluations” [translated], as well as finance and weather reporting.

I asked Lorenz if he was dabbling with such things and he pointed me to AX Semantics (an Aexea GmbH project). It seems their robot football reporting product has been around for getting on for a year or so (Robot Journalism: Application areas and potential [translated]), which makes me wonder how siloed my reading has been in this area.

Anyway, it seems as if AX Semantics have big dreams. Like heralding Media 4.0: The Future of News Produced by Man and Machine:

The starting point for Media 4.0 is a whole host of data sources. They share structured information such as weather data, sports results, stock prices and trading figures. AX Semantics then sorts this data and filters it. The automated systems inside the software then spot patterns in the information using detection techniques that revolve around rule-based semantic conclusion. By pooling pertinent information, the system automatically pulls together an article. Editors tell the system which layout and text design to use so that the length and structure of the final output matches the required media format – with the right headers, subheaders, the right number and length of paragraphs, etc. Re-enter homo sapiens: journalists carefully craft the information into linguistically appropriate wording and liven things up with their own sugar and spice. Using these methods, the AX Semantics system is currently able to produce texts in 11 languages. The finishing touches are added by the final editor, if necessary livening up the text with extra content, images and diagrams. Finally, the text is proofread and prepared for publication.

A key technology bit is the analysis part: “the software then spot patterns in the information using detection techniques that revolve around rule-based semantic conclusion”. Spotting patterns and events in datasets is an area where automated journalism can help navigate the data beat and highlight things of interest to the journalist (see for example Notes on Robot Churnalism, Part I – Robot Writers for other takes on the robot journalism process). If notable features take the form of possible story points, narrative content can then be generated from them.

To support the process, it seems as if AX Semantics have been working on a markup language: ATML3 (I’m not sure what it stands for? I’d hazard a guess at something like “Automated Text ML” but could be very wrong…) A private beta seems to be in operation around it, but some hints at tooling are starting to appear in the form of ATML3 plugins for the Atom editor.

One to watch, I think…

WordPress Quickstart With Docker

I need a WordPress install to do some automated publishing tests, so had a little look around to see how easy it’d be using docker and Kitematic. Remarkably easy, it turns out, once the gotchas are sorted. So here’s the route in four steps:

1) Create a file called docker-compose.yml in a working directory of your choice, containing the following:

somemysql:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: example
    
somewordpress:
  image: wordpress
  links:
    - somemysql:mysql
  ports:
    - 8082:80

The port mapping makes the WordPress container’s port 80 visible on the host at port 8082.

2) Using Kitematic, launch the Kitematic command-line interface (CLI), cd to your working directory and enter:

docker-compose up -d

(The -d flag runs the containers in detached mode – that is, in the background, leaving your command line free;-)
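If you want to check that the two containers came up okay, docker-compose should list them for you – from the same working directory, run:

docker-compose ps

and, if anything looks awry, peek at their logs with:

docker-compose logs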

3) Find the IP address that Kitematic is running the VM on – on the command line, run:

docker-machine env dev

You’ll see something like export DOCKER_HOST="tcp://192.168.99.100:2376" – the address you want is the “dotted quad” in the middle; here, it’s 192.168.99.100
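(Alternatively, I think docker-machine will give you just the IP directly, assuming the machine is called dev:

docker-machine ip dev

which should print something like 192.168.99.100.)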

4) In your browser, go to eg 192.168.99.100:8082 (or whatever values your setup is using) – you should see the WordPress setup screen:

[Screenshot: WordPress › Installation setup screen]

Easy:-)

Here’s another way (via this docker tutorial: wordpress):

i) On the command line, get a copy of the MySQL image:

docker pull mysql:latest

ii) Start a MySQL container running:

docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=example -d mysql

iii) Get a WordPress image:

docker pull wordpress:latest

iv) And then get a WordPress container running, linked to the database container:

docker run --name wordpress-instance --link some-mysql:mysql -p 8083:80 -d wordpress

v) As before, look up the IP address of the docker VM, and then go to port 8083 on that address.
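If you want to tidy up afterwards, the containers started via this route can be stopped and removed with:

docker stop wordpress-instance some-mysql
docker rm wordpress-instance some-mysql

(Bear in mind that getting rid of the MySQL container effectively does away with your test site’s database too.)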

FOI and Communications Data

Last week, the UK Gov announced an Independent Commission on Freedom of Information (written statement) to consider:

  • whether there is an appropriate public interest balance between transparency, accountability and the need for sensitive information to have robust protection
  • whether the operation of the Act adequately recognises the need for a ‘safe space’ for policy development and implementation and frank advice
  • the balance between the need to maintain public access to information, the burden of the Act on public authorities and whether change is needed to moderate that while maintaining public access to information

To the, erm, cynical amongst us, this could be interpreted as the first step of a government trying to make it harder to access public information about decision making processes (that is, a step to reduce transparency), though it would be interesting if the commission reported that proactively making more information available as published public documents, open data and so on was an effective route to reducing the burden of FOIA on local authorities.

One thing I’ve been meaning to do for a long time is have a proper look at WhatDoTheyKnow, the MySociety site that mediates FOI requests in a public forum, as well as at published FOI disclosure logs, to see what the most popular requests are by sector, and whether FOI requests can be used to identify datasets and other sources of information that are commonly requested and, by extension, should perhaps be made available proactively (for early fumblings, see FOI Signals on Useful Open Data? or The FOI Route to Real (Fake) Open Data via WhatDoTheyKnow, for example).

(Related, I spotted this the other day on the Sunlight Foundation blog: Pilot program will publicize all FOIA responses at select federal agencies: Currently, federal agencies are only required to publicly share released records that are requested three or more times. The new policy, known as “release to one, release to all,” removes this threshold for some agencies and instead requires that any records released to even one requester also be posted publicly online. I’d go further – if the same requests are made repeatedly (eg information about business rates seems to be one such example) the information should be published proactively.)

In a commentary on the FOI Commission, David Higgerson writes (Freedom of Information Faces Its Biggest Threat Yet – Here’s Why):

The government argues it wants to be the most transparent in the world. Noble aim, but the commission proves it’s probably just words. If it really wished to be the most transparent in the world, it would tell civil servants and politicians that their advice, memos, reports, minutes or whatever will most likely be published if someone asks for them – but that names and any references to who they are will be redacted. Then the public could see what those working within Government were thinking, and how decisions were made.

That is, the content should be made available, but the metadata should be redacted. This immediately put me in mind of the Communications Data Bill – which seems likely to resurface any time soon – that wants to collect communications metadata (who spoke to whom) but doesn’t directly let folk peek at the content. (See also: From Communications Data to #midata – with a Mobile Phone Data Example. In passing, I also note that the cavalier attitude of previous governments to passing hasty, ill-thought-out legislation in the communications data area at least is hitting home. It seems that the Data Retention and Investigatory Powers Act (DRIPA) 2014 is “inconsistent with European Union law”. Oops…)

Higgerson also writes:

Politicians aren’t stupid. Neither are civil servants. They do, however, often tend to be out of touch. The argument that ‘open data’ makes a government transparent is utter bobbins. Open data helps people to see the data being used to inform decisions, and see the impact of previous decisions. It does not give context, reasons or motives. Open data, in this context, is being used as a way of watering down the public’s right to know.

+1 on that… Transparency comes more from the ability to keep tabs on the decision making process, not just the data. (Some related discussion on that point here: A Quick Look Around the Open Data Landscape.)

Tata F1 Connectivity Innovation Prize, 2015 – Telemetry Dash

I’ve run out of time trying to put together an entry for this year’s first round of the Tata F1 Connectivity Innovation Prize (brief [PDF]), which sets the challenge of designing a data dashboard to display the following information:

[Screenshot: the data channels to be displayed, from the Challenge 1 brief for the F1 Connectivity Innovation Prize]

Just taking the data as presented turns it into something of an infographic design exercise (I wonder if anyone will submit entries using infogr.am?!) but the reality is much more that it needs to be a real-time telemetry dashboard.

My original sketch is just a riff on the data as given:

[Image: initial dashboard sketch]

(From the steering angle range, I guess straight ahead must be 90 degrees?! Which would put 12 degrees as a stupidly sharp left turn? Whatever… you get the idea!)

Had I had time, I think I’d have extended this to include historical traces of some of the data, eg using something like the highcharts dynamic demo that could stream a recent history of particular values, perhaps taking inspiration too from Making Sense of Squiggly Lines. One thing I did think might be handy in this context was “sample and hold” colour alerts on digits or backgrounds, which would retain a transient change for a second or two – for example, recording that the steering wheel had been given a quick left-right – and could direct attention to the historical trace if the original incident was missed or needed clarification.

The positioning of RPM, then throttle, is based on the idea that the throttle is a request for revs. Putting throttle (racing green for go) and brake (red for stop) next to each other allows control commands to be compared, and putting brake and speed (Mercedes silver/grey – these machines are built for speed) next to each other is based on the idea that you brake to retard the vehicle (i.e. slow it down). (I also considered the idea of the speed indicator as a vertical bar close to the gear indicator, but felt that if the bars are rapidly changing, which they are likely to do, it could be quite jarring to have bars going up and down at right angles to each other? What I hope the current view would do is show more of a ratchet effect across all the bars.) The gear indicator helps group these four indicators together. (I think that white background should actually be yellow?) In the event of a gear being lost, the colour of that gear number could fade further in grey towards black. A dot to the right of the scale could also indicate the ideal gear to be in at any particular point.

The tyre display groups the tyres and indicates steering angle as well as tyre temperature, colour coded according to a spectrum colour scale. (The rev counter is also colour coded.) The temperature values are also displayed in a grid to allow for easy comparison, and again are colour coded to match. The steering angle is also displayed as a numerical indicator value, and further emphasised by the Mercedes logo (Mercedes are co-sponsoring the competition, I think? That said, I suspect their brand police, if they are anything like the OU’s, may have something to say about tilting the logo?!) The battery indicator (CC: “Battery icon” by Aldendino) is grouped with the tyres on the grounds that battery and tyres are both resources that need to be managed.

In additional material, I’d possibly also have tried to demo some alerts, such as an overcooked tyre (note the additional annotation, which should have been in the original, showing the degrees C unit):

[Image: overcooked tyre alert sketch]

and perhaps also included a note about possible additional channels – hinting at tyre pressure based on the size of each tyre, perhaps, or showing where another grid displaying individual tyre pressures might go, or (more likely), assuming a user-interactive display, a push-to-toggle view, or even an automatically toggling display, that shows tyre temperature or pressure in the same location at different times. There probably also needs to be some sort of indication of brake balance in there too – perhaps a dot that moves around the central grid cross, perhaps connected by a line to the origin of the cross?

The brief also asks for some info about refresh rates – Tata are in the comms business, after all… I guess the things to take into account are the bandwidth of the telemetry from the car (2 megabits per second looks reasonable?), the width of the data from each sensor along with their sampling rates (info from ECU specs), and perhaps a bit of psychology (what sorts of refresh rate can folk cope with when numerical digit displays update – when watching a time code on a movie, for example?). Maybe also check out some bits and pieces on realtime dashboard design, and example race dashboard designs, to see what sorts of metaphor or common design styles are likely to be familiar to team staff (and hence not need as much decoding). Looking back at last year’s challenge might also be useful. E.g. the timing screen whose data feed was a focus there used a black background and a palette of white, yellow, green, purple, cyan and red. There are conventions associated with those colours that could perhaps be drawn on here. (Also, using those colours perhaps makes sense in that race teams are likely to be familiar with distinguishing those colours and associating meaning with them.)

I’ve never really tried to put a dashboard together… There’s lots to consider, isn’t there?!

A DevOps Approach to Common Environment Educational Software Provisioning and Deployment

In Distributing Software to Students in a BYOD Environment, I briefly reviewed a paper that reported on the use of Debian metapackages to support the configuration of Linux VMs for particular courses (each course has its own Debian metapackage that can install all the packages required for that course).

This idea of automating the build of machines comes under the wider banner of DevOps (development and operations). In a university setting, we might view this in several ways:

  • the development of course related software environments during course production, the operational distribution and deployment of software to students, updating and support of the software in use, and maintenance and updating of software between presentations of a course;
  • the development of software environments for use in research, the operation of those environments during the lifetime of a research project, and the archiving of those environments;
  • the development and operation of institutional IT services.

In an Educause review from 2014 (Cloud Strategy for Higher Education: Building a Common Solution, EDUCAUSE Center for Analysis and Research (ECAR) Research Bulletin, November, 2014 [link]), a pitch for universities making greater use of cloud services, the authors make the observation that:

In order to make effective use of IaaS [Infrastructure as a Service], an organization has to adopt an automate-first mindset. Instead of approaching servers as individual works of art with artisan application configurations, we must think in terms of service-level automation. From operating system through application deployment, organizations need to realize the ability to automatically instantiate services entirely from source code control repositories.

This is the approach I took from the start when thinking about the TM351 virtual machine, focussing more on trying to identify production, delivery, support and maintenance models that might make sense in a factory production model – one that should work in a scalable way not only across presentations of the same course, but also across different courses and across different platforms (students’ own devices, OU managed cloud hosts, student launched commercial hosts) – rather than just building a bespoke, boutique VM for a single course. (I suspect the module team would have preferred my focussing on the latter – getting something that works reliably, has been rigorously tested, and can be delivered to students – rather than pfaffing around with increasingly exotic and still-not-really-invented-yet tooling that I don’t really understand to automate production of machines from scratch that still might be a bit flaky!;-)

Anyway, it seems that the folk at Berkeley have been putting together a “Common Scientific Compute Environment for Research and Education” [Clark, D., Culich, A., Hamlin, B., & Lovett, R. (2014). BCE: Berkeley’s Common Scientific Compute Environment for Research and Education, Proceedings of the 13th Python in Science Conference (SciPy 2014).]


The BCE – Berkeley Common Environment – is “a standard reference end-user environment” consisting of a simply skinned Linux desktop running in a virtual machine delivered as a Virtualbox appliance that “allows for targeted instructions that can assume all features of BCE are present. BCE also aims to be stable, reliable, and reduce complexity more than it increases it”. The development team adopted a DevOps style approach customised for the purposes of supporting end-user scientific computing, arising from the recognition that they “can’t control the base environment that users will have on their laptop or workstation, nor do we wish to! A useful environment should provide consistency and not depend on or interfere with users’ existing setup”, further “restrict[ing] ourselves to focusing on those tools that we’ve found useful to automate the steps that come before you start doing science”. Three main frames of reference were identified:

  • instructional: students could come from all backgrounds and are often unlikely to have sys admin skills over and above the ability to use a simple GUI approach to software installation: “The most accessible instructions will only require skills possessed by the broadest number of people. In particular, many potential students are not yet fluent with notions of package management, scripting, or even the basic idea of commandline interfaces. … [W]e wish to set up an isolated, uniform environment in its totality where instructions can provide essentially pixel-identical guides to what the student will see on their own screen.”
  • scientific collaboration: that is, the research context: “It is unreasonable to expect any researcher to develop code along with instructions on how to run that code on any potential environment.” In addition, “[i]t is common to encounter a researcher with three or more Python distributions installed on their machine, and this user will have no idea how to manage their command-line path, or which packages are installed where. … These nascent scientific coders will have at various points had a working system for a particular task, and often arrive at a state in which nothing seems to work.”
  • Central support: “The more broadly a standard environment is adopted across campus, the more familiar it will be to all students”, with obvious benefits when providing training or support based on the common environment.

Whilst it was recognised that personal laptop computers are perhaps the most widely used platform, the team argued that the “environment should not be restricted to personal computers”. Some scientific computing operations are likely to stretch the resources of a personal laptop, so the environment should also be capable of running on other platforms such as hefty workstations or on a scientific computing cloud.

The first consideration was to standardise on an O/S: Linux. Since the majority of users don’t run Linux machines, this required the use of a virtual machine (VM) to host the Linux system, whilst still recognising that “one should assume that any VM solution will not work for some individuals and provide a fallback solution (particularly for instructional environments) on a remote server”.

Another issue that can arise is dealing with mappings between host and guest OS, which vary from system to system – arguing for the utility of an abstraction layer for VM configuration like Vagrant or Packer … . This includes things like port mapping, shared files, enabling control of the display for a GUI vs. enabling network routing for remote operation. These settings may also interact with the way the guest OS is configured.

Reflecting on the “traditional” way of building a computing environment, the authors argued for a more automated approach:

Creating an image or environment is often called provisioning. The way this was done in traditional systems operation was interactively, perhaps using a hybrid of GUI, networked, and command-line tools. The DevOps philosophy encourages that we accomplish as much as possible with scripts (ideally checked into version control!).
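As a trivial illustration of what that might look like in practice, the sort of setup step that would otherwise be clicked through by hand can be captured in a short shell script and checked into a repo (a minimal sketch – the filename and package choices here are purely for illustration):

#!/bin/bash
# provision.sh – hypothetical script to install the packages a course VM needs
apt-get update
apt-get install -y python3 python3-pip
pip3 install jupyter pandas

The provisioning tools discussed below essentially formalise, parameterise and orchestrate this sort of scripted step.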

The tools explored included Ansible, packer, vagrant and docker:

  • Ansible: to declare what gets put into the machine (alternatives include shell scripts, puppet etc.; for the TM351 monolithic VM, I used puppet). End-users don’t need to know anything about Ansible, unless they want to develop a new, reproducible, custom environment.
  • packer: used to run the provisioners and construct and package up a base box. Again, end-users don’t need to know anything about this. (For the TM351 monolithic VM, I used vagrant to build a basebox in Virtualbox, and then package it; the power of Packer is that it lets you generate builds from a common source for a variety of platforms (AWS, Virtualbox, etc).)
  • vagrant: their description is quite a handy one: “a wrapper around virtualization software that automates the process of configuring and starting a VM from a special Vagrant box image … . It is an alternative to configuring the virtualization software using the GUI interface or the system-specific command line tools provided by systems like VirtualBox or Amazon. Instead, Vagrant looks for a Vagrantfile which defines the configuration, and also establishes the directory under which the vagrant command will connect to the relevant VM. This directory is, by default, synced to the guest VM, allowing the developer to edit the files with tools on their host OS. From the command-line (under this directory), the user can start, stop, or ssh into the Vagrant-managed VM. It should be noted that (again, like Packer) Vagrant does no work directly, but rather calls out to those other platform-specific command-line tools.” However, “while Vagrant is conceptually very elegant (and cool), we are not currently using it for BCE. In our evaluation, it introduced another piece of software, requiring command-line usage before students were comfortable with it”. This is one issue we are facing with the TM351 VM – currently the requirement to use vagrant to manage the VM from the command line (albeit this only really requires a couple of commands, as summarised in the sketch after this list – we can probably get away with just vagrant up && vagrant provision and vagrant suspend – but it also has a couple of benefits, like being able to trivially vagrant ssh in to the VM if absolutely necessary…).
  • docker: was perceived as adding complexity, both computationally and conceptually: “Docker requires a Linux environment to host the Docker server. As such, it clearly adds additional complexity on top of the requirement to support a virtual machine. … the default method of deploying Docker (at the time of evaluation) on personal computers was with Vagrant. This approach would then also add the complexity of using Vagrant. However, recent advances with boot2docker provide something akin to a VirtualBox-only, Docker-specific replacement for Vagrant that eliminates some of this complexity, though one still needs to grapple with the cognitive load of nested virtual environments and tooling.” The recent development of Kitematic addresses some of the use-case complexity, and also provides GUI based tools for managing some of the issues described above associated with port mapping, file sharing etc. Support for linked container compositions (using Docker Compose) is still currently lacking though…
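For what it’s worth, the handful of vagrant commands referred to in the vagrant item above boil down to something like this in day-to-day use (a sketch of the general workflow, rather than anything TM351-specific):

vagrant up          # create/start the VM, provisioning it on the first run
vagrant provision   # re-run the provisioners against a running VM
vagrant suspend     # pause the VM, saving its state
vagrant ssh         # open a shell inside the VM, if you really need to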

At the end of the day, Packer seems to rule them all – coping as it does with simple installation scripts and being able to then target the build for any required platform. The project homepage is here: Berkeley Common Environment and the github repo here: Berkeley Common Environment (Github).

The paper also reviewed another common environment – OSGeo. Once again built on top of a common Linux base, well documented shell scripts are used to define package installations: “[n]otably, the project uses none of the recent DevOps tools. OSGeo-Live is instead configured using simple and modular combinations of Python, Perl and shell scripts, along with clear install conventions and examples. Documentation is given high priority. … Scripts may call package managers, and generally have few constraints (apart from conventions like keeping recipes contained to a particular directory)”. In addition, “[s]ome concerns, like port number usage, have to be explicitly managed at a global level”. This approach contrasts with the approach reviewed in Distributing Software to Students in a BYOD Environment where Debian metapackages were used to create a common environment installation route.


The idea of a common environment is a good one, and one that would work particularly well in a curriculum such as Computing, I think? One main difference between the BCE approach and the TM351 approach is that BCE is self-contained and runs a desktop environment within the VM, whereas the TM351 environment uses a headless VM and follows more of a microservice approach that publishes HTML based service UIs via http ports that can be viewed in a browser. One disadvantage of the latter approach is that you need to keep a more careful eye on port assignments (in the event of collisions) when running the VM locally.

What Happens When “Computers” Are Replaced by Tablets and Phones?

With personal email services managed online since what feels like forever (and probably is “forever”, for many users); personally accessed productivity apps delivered via online services (perhaps with some minimal support for in-browser, offline use) – things like Microsoft Office Online or Google Docs; video and music services provided via online streaming rather than large file downloads; image galleries stored in the cloud; and social networking provided exclusively online – and in the absence of data about connecting devices (which is probably available from both OU and OU-owned FutureLearn server logs) – I wonder if the OU strategists and curriculum planners are considering a future where a significant percentage of OUr distance education students do not have access to a “personal (general purpose) computer” onto which arbitrary software applications can be installed, rather than simply accessed, but do have access to a network connection via a tablet device, and perhaps a wireless keyboard?

And if the learners do have access to a desktop or laptop computer, what happens if that is likely to be a works machine, or perhaps a public access desktop computer (though I’m not sure how much longer they will remain around), probably with administrative access limits on it (if the OU IT department’s obsession with minimising general purpose and end-user defined computing is anything to go by…)?

If we are to require students to make use of “installed software” rather than software that can be accessed via browser based clients/user interfaces, then we will need to ask the access question: is it fair to require students to buy a desktop computer onto which software can be installed purely for the purposes of their studies, given they presumably have access otherwise to all the (online) digital services they need?

I seem to recall that the OU’s student computing requirements are now supposed to be agnostic as to operating system (the same is not true internally, unfortunately, where legacy systems still require Windows and may even require obsolete versions of IE!;-) although the general guidance on the matter is somewhat vague and perhaps not a little out of date…?!

I wish I’d kept copies of OU computing (and network) requirements over the years. Today, network access is likely to come in the form of wired, fibre or wireless broadband (the latter particularly in rural areas (example)) or, for the cord-cutters, a mobile 3G/4G connection; personal computing devices that connect to the network are likely to be smartphones, tablets, laptop computers, Chromebooks and their ilk, and gaming desktop machines. Time was when a household was lucky to have a single personal desktop computer, a requirement that became expected of OU students. I suspect that is still largely true… (the yoof’s gaming machine; the 10 year old “office” machine).

If we require students to run “desktop” applications, should we then require the students to have access to computers capable of installing those applications on their own computer, or should we be making those applications available in a way that allows them to be installed and run anywhere – either on local machines (for offline use), or on remote machines (either third party managed or managed by the OU) where a network connection is more or less always guaranteed?

One of the reasons I’m so taken by the idea of containerised computing is that it provides us with a mechanism for deploying applications to students that can be run in a variety of ways. Individuals can run the applications on their own computers, in the cloud, via service providers accessed and paid for directly by the students on a metered basis, or by the OU.
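(Part of the attraction is that the same container can be launched in the same way wherever the docker daemon happens to be running – on the student’s own machine, or on a remote host pointed to with something like:

docker -H tcp://some.remote.host:2376 run -d -p 8082:80 wordpress   # hypothetical remote host

where some.remote.host stands in for whatever OU or third party server the student has access to, and the wordpress image is just the example from the earlier post.)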

Container contents can be very strictly version controlled and archived, and are easily restored if something should go wrong (there are various ‘switch-it-off-and-switch-it-on-again’ possibilities with several degrees of severity!) Container image files can be distributed using physical media (USB memory sticks, memory cards) for local use, and for OU cloud servers, at least, those images could be pre-installed on student accessed container servers (meaning the containers can start up relatively quickly…)
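For example, getting an image onto a USB stick is just a case of exporting it to a tar file and loading it back in at the other end (the wordpress image here is simply a stand-in for whatever course application image was being shipped):

docker save -o wordpress-image.tar wordpress
docker load -i wordpress-image.tar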

If updates are required, these are likely to be lightweight – only those bits (image layers) of the application that need updating will be pulled down and updated.

At the moment, I’m not sure how easy it is to arbitrarily share a data container containing a student’s work with application containers that are arbitrarily launched on various local and remote hosts? (Linking containers to Dropbox containers is one possibility, but they would perhaps be slow to synch? Flocker is perhaps another route, with its increased emphasis on linked data container management?)
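(On a single host, at least, the classic data-container pattern gives a flavour of what I have in mind – the container and image names here are purely illustrative:

docker create -v /home/student/work --name student-work busybox   # data-only container holding the student's files
docker run -d --volumes-from student-work -p 8888:8888 some/notebook-image   # hypothetical application container

The open question is how cleanly that student-work volume could then be moved, synched or re-attached across different local and remote hosts.)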

If any other educational institutions, particularly those involved in distance education, are looking at using containers, I’d be interested to hear what your take is…

And if any folk in the OU are looking at containers in any context (teaching, research, project work), please get in touch – I need folk to bounce ideas around with, sanity check with, and ask for technical help!;-)

Festival Segregation

Isle of Wight Festival time again, and some immediate reflections from the first day…

I seem to remember a time-was-when festivals were social levellers – unless you were crew or had a guest pass that got you backstage. Then the backstage areas started to wend their way up the main stage margins so the backstage guests could see the stage from front-of-stage. Then you started to get the front-of-stage VIP areas with their own bars, and a special access area in front of the stage to give you a better view and keep you away from the plebs.

There has also been a growth in other third party retailed add-ons – boutique camping, for example:

[Image: boutique camping offer, Isle of Wight Festival 2015, 11th–14th June]

and custom toilets:

[Image: custom toilets offer, Isle of Wight Festival 2015]

One of the things I noticed about the boutique camping areas (which are further distinguished from the VIP camping areas…) was that they are starting to include their own bars, better toilets, and so on. Gated communities, for those who can afford a hefty premium on top of the base ticket price. Or a corporate hospitality/hostility perk.

I guess festivals always were a “platform” creating two sided markets that could sell tickets to punters, location to third party providers (who were then free to sell goods and services to the audience), and sponsorship of every possible surface. But the festivals were, to an extent, open, level playing fields. Now they’re increasingly enclosed. So far, the music entertainment has remained free. But how long before you have to start paying to access “exclusive” events in some of the music tents?

PS I wonder: when it comes to eg toilet capacity planning, are the boutique poo-stations over-and-above capacity compared to the capacity provided by the festival promoter to meet sanitation needs, or are they factored in as part of that core capacity? Which is to say, if no-one paid the premium, would the minimum capacity requirements still be met?

PPS I also note that the IW Festival had a heliport this year (again…?)

PPPS On the toilet front, the public toilets all seemed pretty clean this year… and what really amused me was seeing a looooonnngggg queue for the purchased-access toilets…