Category: Anything you want

“Student for Life” – A Lifelong Learning Relationship With Your University… Or Linked In?

I’ve posted several times over the years wondering why universities don’t try to reimagine themselves as educational service providers, seeing undergraduate degrees as the starting point for a lifelong relationship with their first-degree alumni.

A paragraph in a recent Educause Review article, Data, Technology, and the Great Unbundling of Higher Education (via @Downes [commentary]), caught my eye this morning:

In an era of unbundling, when colleges and universities need to move from selling degrees to selling EaaS subscriptions, the winners will be those that can turn their students into “students for life” — providing the right educational programs and experiences at the right time.

On a quick read, there’s a lot in the article I don’t like, even though it sounds eminently reasonable, positing a competency-based “full-stack model to higher education” in which providers will (1) develop and deliver specific high-quality educational experiences that produce graduates with capabilities that specific employers desperately want; (2) work with students to solve financing problems; and (3) connect students with employers during and following the educational experience and make sure students get a job.

The disruption that HE faces, then, is not one about course delivery, but rather one about life-as-career management?

What if … the software that will disrupt higher education isn’t courseware at all? What if the software is, instead, an online marketplace? Uber (market cap $40 billion) owns no vehicles. Airbnb (market cap $10 billion) owns no hotel rooms. What they do have are marketplaces with consumer-friendly interfaces. By positioning their interfaces between millions of consumers and sophisticated supply systems, Uber and Airbnb have significantly changed consumer behavior and disrupted these supply systems.

Is there a similar marketplace in the higher education arena? There is, and it has 40 million college students and recent graduates on its platform. It is called LinkedIn.

Competency marketplaces will profile the competencies (or capabilities) of students and job seekers, allow them to identify the requirements of employers, evaluate the gap, and follow the educational path that gets them to their destination quickly and cost-effectively. Although this may sound like science fiction, the gap between the demands of labor markets and the outputs of our educational system is both a complex sociopolitical challenge and a data problem that software, like LinkedIn, is in the process of solving. …

(I’m not sure if I don’t like the article because I disagree with it, or because it imagines a future I’d rather not see play out: the idea that learners don’t develop a longstanding relationship with a particular university – and consequently miss out on the social and cultural memories and relationships that develop therein – but instead take occasional offerings from a wide variety of providers, with their long-term relationship being with someone like LinkedIn, feels to me like something will be lost. Martin Weller captures some of it, I think, in his reflection yesterday on Product and process in higher ed, in terms of how the “who knows?!” answer to the “what job are you going to do with that?” question about a particular degree becomes a nonsense answer, because the point of the degree has become just that: getting a particular job. Rather than taking a degree to widen your options, the degree becomes one of narrowing them down?! Maybe the first degree should be about setting yourself up to become a specialist over the course of occasional and extended education over a lifetime?)

Furthermore, I get twitchy about this being another example of a situation where it’s tradable personal data that’s the valuable bargaining chip:

To avoid marginalization, colleges and universities need to insist that individuals own their competencies. Ensuring that ownership lies with the individual could make the competency profile portable and could facilitate movement across marketplaces, as well as to higher education institutions.

(As for how competencies are recognised, and fraud avoided in terms of folk claiming a competency that hasn’t been formally certified, I’ve pondered this before, eg in the context of Time to build trust with an open achievements API?, or this idea for a Qualification Verification Service. It seems to me that universities don’t see it as their business to prove that folk have the qualifications or certificates they’ve been awarded – which presumably means that if it does become yet another part of the EaaS marketplace, it’ll be purely corporate commercial interests that manage it.)

We’ll see….

Getting Your Own Space on the Web…

To a certain extent, we can all be web publishers now: social media lets us share words, pictures and videos, online office suites allow us to publish documents and spreadsheets, code repositories allow us to share code, sites like Shinyapps.io allow us to publish specific sorts of applications, and so on.

So where do initiatives like a domain of one’s own come in, which (originally, at least) provide members of a university – staff and students alike – with a web domain and web hosting of their own?

One answer is that they provide a place on the web for you to call your own. With a domain name registered (and nothing else – no server requirements, no applications to install) you can set up an email address that you and you alone own, and use it to forward mail sent to that address to any other address. You can also use your domain as a forwarding address or alias for other locations on the web. My ouseful.info domain forwards traffic to a blog hosted on wordpress.com (I pay WordPress for the privilege of linking to my site there with my domain address); another domain I registered – f1datajunkie.com – acts as an alias to a site hosted on blogger.com.

The problem with using my domains like this is that I can only forward traffic to sites that other people operate – and what I can do on those sites is limited by those providers. WordPress is a powerful web publishing platform, but WordPress.com only offers a locked-down experience, with no allowance for customising the site using your own plugins. If I paid for my own hosting and ran my own WordPress server, the site could be a lot richer. But then in turn I would have to administer the site myself, running updates and ultimately being responsible for the security and resource provisioning of the site.

Taking the step towards hosting your own site is a big one for many people (I’m too lazy to host my own sites, for example…). But initiatives like Reclaim Hosting, and more recently OU Create (from the University of Oklahoma, not the UK OU) – originally inspired by a desire to provide a personal playspace in which students could explore their own digital creativity, and to give them a home on the web in which they could run their own applications – have eased the pain for many: the host can be trusted, is helpful, and is affordable.

The Oklahoma create space allows students to register a subdomain (e.g. myname.oucreate.com) or custom domain (e.g. ouseful.info) and associate it with what is presumably university-hosted serverspace, into which users can install their own applications.

So it seems to me we can tease apart two things:

  • firstly, the ability to own a bit of the web’s “namespace” by registering your own domain (ouseful.info, for example);
  • secondly, the ability to own a bit of the web’s “functionality space”: running your own applications that other people can connect to and make use of; this might be your own blogging platform, possibly customised using your own or third-party extensions, or it might be one or more custom applications you have developed yourself.

But what if you don’t want the responsibility of running, and maintaining, your own applications day in, day out? What if you only want to share an application on the web for a short period of time? What if you want to be able to “show and tell” an application for a particular class, and then put it back on the shelf, available to use again but not always running? Or what if you want to access an application that might be difficult to install, or isn’t available for your computer? Or you’re running a netbook or tablet, and the application you want isn’t available as an app, just as a piece of “traditionally installed software”?

I’ve started to think that docker style containers may offer a way of doing this. I’ve previously posted a couple of examples of how to run RStudio or OpenRefine via docker containers using a cloud host. How much nicer it would be if I could run such containers on a (sub)domain of my own running via a university host…
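For example, here’s a minimal sketch of the sort of thing I mean, assuming the rocker/rstudio image from the public docker hub and a machine (local VM or cloud host) already running docker:

# pull (if necessary) and run an RStudio server container, exposed on port 8787
docker run -d -p 8787:8787 --name rstudio rocker/rstudio

Then browse to port 8787 on the docker host’s IP address and log in with the default credentials documented for that image.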

Which is to say – I don’t necessarily want a full hosting solution on a domain of my own, at least not to start with, but I do want to be able to add my own bits of functionality to the web, if only for short periods of time. That is, what I’d quite like is a convenient place to “publish” (in the sense of “run”) my own containerised apps, and then rip them down again. And then, perhaps at a later date, take them away and run them on my own fully hosted domain.

Seven Graphical Interfaces to Docker

From playing with docker over the last few weeks, I think it’s worth pursuing as a technology for deploying educational software to online and distance education students, not least because it offers the possibility of using containers as app runners that can run an app on your own desktop, or via the cloud.

The command line is probably something of a blocker to users who expect GUI tools, such as a one-click graphical installer, or double click to start an app, so I had a quick scout round for graphical user interfaces in and around the docker ecosystem.

I chose the following apps because they are directed more at the end user – launching prebuilt apps, and putting together simple app compositions. There are some GUI tools aimed at devops folk to help with monitoring clusters and running containers, but that’s out of scope for me at the moment…

1. Kitematic

Kitematic is a desktop app (Mac and Windows) that makes it one-click easy to download images from docker hub and run associated containers within a local docker VM (currently running via boot2docker?).

I’ve blogged about Kitematic several times, but to briefly summarise: Kitematic allows you to launch and configure individual containers, as well as providing easy access to a boot2docker command line (which can be used to run docker-compose scripts, for example). Simply locate an image on the public docker hub, download it and fire up an associated container.

[Screenshot: Kitematic]

Where a mount point is defined to allow sharing between the container and the host, you can simply select the desktop folder you want to mount into the container.

At the moment, Kitematic doesn’t seem to support docker-compose in a graphical way, or allow users to deploy containers to a remote host.

2. Panamax

panamax.io is a browser-rendered graphical environment for pulling together image compositions, although it currently needs to be started from the command line. Once the application is up and running, you can search for images or templates:

[Screenshot: Panamax search]

Templates seem to correspond to fig/docker-compose-like assemblages, with panamax providing an environment for running pre-existing ones or putting together new ones. I think the panamax folk ran a competition some time ago to try to encourage folk to submit public templates, but that doesn’t seem to have gained much traction.

[Screenshot: Panamax search results]

Panamax supports deployment locally or to a remote web host.

[Screenshot: Panamax remote deployment targets]

When I first came across docker, I found panamax really exciting because of the way it provided support for linking containers. Now I just wish Kitematic would offer some graphical support for docker compose that would let me drag different images onto a canvas, create a container placeholder each time I do, and then wire the containers together. Underneath, it’d just build a docker compose file.

The public project files are useful – it’d be great to see more sharing of generally useful docker-compose scripts and associated quick-start tutorials (eg WordPress Quickstart With Docker).

3. Lorry.io

lorry.io is a graphical tool for building docker compose files, but doesn’t have the drag, drop and wire together features I’d like to see.

Lorry.io is published by CenturyLink, who also publish panamax (lorry.io is the newer development, I think?).

[Screenshots: Lorry.io Docker Compose YAML editor]

Lorry.io lets you specify your own images or build files, search for images on dockerhub, and configure well-formed docker compose YAML scripts from auto-populated drop-down menu selections that are sensitive to the current state of the configuration.

4. docker ui

docker ui is a simple container app that provides an interface, via the browser, into a currently running docker VM. As such, it allows you to browse the installed images and the state of any containers.

[Screenshot: docker ui]

Kitematic offers a similar sort of functionality in a slightly friendlier way. See additional screenshots here.

5. tutum Cloud

I’ve blogged about tutum.co a couple of times before – it was the first service that I could actually use to get containers running in the cloud: all I had to do was create a Digital Ocean account, pop some credit onto it, then link directly to it from tutum and launch containers on Digital Ocean directly from the tutum online UI.

[Screenshot: Tutum new service wizard]

I’d love to see some of the cloud deployment aspects of tutum make it into Kitematic…

See also things like clusteriup.io
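(For comparison, here’s a rough command-line sketch of a similar Digital Ocean deployment using docker-machine rather than the tutum UI – assuming docker-machine is installed locally and you have a Digital Ocean API token to hand; the machine name and image are just placeholders:

# provision a docker host on Digital Ocean
docker-machine create --driver digitalocean --digitalocean-access-token $DO_TOKEN dockerbox
# point the local docker client at the new remote host
eval "$(docker-machine env dockerbox)"
# run a container there, exposed on port 80
docker run -d -p 80:80 some/image

It works, but it’s exactly the sort of command-line dance that a tutum- or Kitematic-style UI hides.)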

6. docker Compose UI

The docker compose UI looks as if it provides a browser-based interface for managing deployed container compositions, akin to some of the dashboards provided by online hosts.

[Screenshot: docker-compose-ui]

I couldn’t get it to work… I get the feeling it’s like the docker ui but with better support for managing all the containers associated with a particular docker-compose file.

7. ImageLayers

Okay – I said I was going to avoid devops tools, but this is another example of the sort of thing that may be handy when trying to put a composition of several containers together, because it might help identify layers that can be shared across different images.

imagelayers.io looks like it pokes through the Dockerfile of one or more images and shows you the layers that get built.

[Screenshot: ImageLayers – A Docker Image Visualizer]

I’m not sure if you can point it at a docker-compose file and let it automatically pull out the layers from identified sources (images, or build sources)?
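(For a rough command-line equivalent on a single image, docker history lists the layers that make it up – for example:

docker history wordpress

Handy for a quick look, though it doesn’t give you the cross-image comparison that ImageLayers does.)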

Robot Journalism in Germany

By chance, I came across a short post by uber-ddj developer Lorenz Matzat (@lorz) on robot journalism over the weekend: Robot journalism: Revving the writing engines. Along with a mention of Narrative Science, it namechecked another company that was new to me: [b]ased in Berlin, Retresco offers a “text engine” that is now used by the German football portal “FussiFreunde”.

A quick scout around brought up this Retresco post on Publishing Automation: An opportunity for profitable online journalism [translated] and their robot journalism pitch, which includes “weekly automatic Game Previews to all amateur and professional football leagues and with the start of the new season for every Game and detailed follow-up reports with analyses and evaluations” [translated], as well as finance and weather reporting.

I asked Lorenz if he was dabbling with such things and he pointed me to AX Semantics (an Aexea GmbH project). It seems their robot football reporting product has been around for getting on for a year or so (Robot Journalism: Application areas and potential [translated]), which makes me wonder how siloed my reading has been in this area.

Anyway, it seems as if AX Semantics have big dreams. Like heralding Media 4.0: The Future of News Produced by Man and Machine:

The starting point for Media 4.0 is a whole host of data sources. They share structured information such as weather data, sports results, stock prices and trading figures. AX Semantics then sorts this data and filters it. The automated systems inside the software then spot patterns in the information using detection techniques that revolve around rule-based semantic conclusion. By pooling pertinent information, the system automatically pulls together an article. Editors tell the system which layout and text design to use so that the length and structure of the final output matches the required media format – with the right headers, subheaders, the right number and length of paragraphs, etc. Re-enter homo sapiens: journalists carefully craft the information into linguistically appropriate wording and liven things up with their own sugar and spice. Using these methods, the AX Semantics system is currently able to produce texts in 11 languages. The finishing touches are added by the final editor, if necessary livening up the text with extra content, images and diagrams. Finally, the text is proofread and prepared for publication.

A key technology bit is the analysis part: “the software then spot patterns in the information using detection techniques that revolve around rule-based semantic conclusion”. Spotting patterns and events in datasets is an area where automated journalism can help navigate the data beat and highlight things of interest to the journalist (see for example Notes on Robot Churnalism, Part I – Robot Writers for other takes on the robot journalism process). If notable features take the form of possible story points, narrative content can then be generated from them.

To support the process, it seems as if AX Semantics have been working on a markup language: ATML3 (I’m not sure what it stands for? I’d hazard a guess at something like “Automated Text ML” but could be very wrong…) A private beta seems to be in operation around it, but some hints at tooling are starting to appear in the form of ATML3 plugins for the Atom editor.

One to watch, I think…

WordPress Quickstart With Docker

I need a WordPress install to do some automated publishing tests, so had a little look around to see how easy it’d be using docker and Kitematic. Remarkably easy, it turns out, once the gotchas are sorted. So here’s the route in four steps:

1) Create a file called docker-compose.yml in a working directory of your choice, containing the following:

somemysql:
  image: mysql
  environment:
    # root password for the MySQL database
    MYSQL_ROOT_PASSWORD: example

somewordpress:
  image: wordpress
  links:
    # link the WordPress container to the MySQL one (alias "mysql")
    - somemysql:mysql
  ports:
    # expose the container's port 80 on host port 8082
    - 8082:80

The port mapping exposes the WordPress container’s port 80 on the host at port 8082.

2) Using Kitematic, launch the Kitematic command-line interface (CLI), cd to your working directory and enter:

docker-compose up -d

(The -d flag runs the containers in detached mode – whatever that means?!;-)
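(Detached just means the containers run in the background rather than tying up your terminal. To check what they’re up to, the standard docker-compose commands should work from the same working directory:

docker-compose ps
docker-compose logs

)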

3) Find the IP address that Kitematic is running the VM on – on the command line, run:

docker-machine env dev

You’ll see something like export DOCKER_HOST="tcp://192.168.99.100:2376" – the address you want is the “dotted quad” in the middle; here, it’s 192.168.99.100
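(Alternatively, depending on your docker-machine version, the following should return just the address directly:

docker-machine ip dev

)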

4) In your browser, go to eg 192.168.99.100:8082 (or whatever values your setup is using) – you should see the WordPress setup screen:

[Screenshot: WordPress installation screen]

Easy:-)

Here’s another way (via this docker tutorial: wordpress):

i) On the command line, get a copy of the MySQL image:

docker pull mysql:latest

ii) Start a MySQL container running:

docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=example -d mysql

iii) Get a WordPress image:

docker pull wordpress:latest

iv) And then get a WordPress container running, linked to the database container:

docker run --name wordpress-instance --link some-mysql:mysql -p 8083:80 -d wordpress

v) As before, lookup the IP address of the docker VM, and then go to port 8083 on that address.
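Either way, ripping things down again when you’re done is straightforward – a sketch using the names from above. For the docker-compose route (run from the working directory):

docker-compose stop
docker-compose rm

And for the docker run route:

docker stop wordpress-instance some-mysql
docker rm wordpress-instance some-mysql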

FOI and Communications Data

Last week, the UK Gov announced an Independent Commission on Freedom of Information (written statement) to consider:

  • whether there is an appropriate public interest balance between transparency, accountability and the need for sensitive information to have robust protection
  • whether the operation of the Act adequately recognises the need for a ‘safe space’ for policy development and implementation and frank advice
  • the balance between the need to maintain public access to information, the burden of the Act on public authorities and whether change is needed to moderate that while maintaining public access to information

To the, erm, cynical amongst us, this could be interpreted as the first step of a government trying to make it harder to access public information about decision-making processes (that is, a step to reduce transparency), though it would be interesting if the commission reported that proactively making more information available as published public documents, open data and so on was an effective route to reducing the burden of FOIA on local authorities.

One thing I’ve been meaning to do for a long time is have a proper look at WhatDoTheyKnow, the MySociety site that mediates FOI requests in a public forum, as well as at published FOI disclosure logs, to see what the most popular requests are by sector, and whether FOI requests can be used to identify datasets and other sources of information that are commonly requested and, by extension, should perhaps be made available proactively (for early fumblings, see FOI Signals on Useful Open Data? or The FOI Route to Real (Fake) Open Data via WhatDoTheyKnow, for example).

(Related, I spotted this the other day on the Sunlight Foundation blog: Pilot program will publicize all FOIA responses at select federal agencies: Currently, federal agencies are only required to publicly share released records that are requested three or more times. The new policy, known as “release to one, release to all,” removes this threshold for some agencies and instead requires that any records released to even one requester also be posted publicly online. I’d go further – if the same requests are made repeatedly (eg information about business rates seems to be one such example) the information should be published proactively.)

In a commentary on the FOI Commission, David Higgerson writes (Freedom of Information Faces Its Biggest Threat Yet – Here’s Why):

The government argues it wants to be the most transparent in the world. Noble aim, but the commission proves it’s probably just words. If it really wished to be the most transparent in the world, it would tell civil servants and politicians that their advice, memos, reports, minutes or whatever will most likely be published if someone asks for them – but that names and any references to who they are will be redacted. Then the public could see what those working within Government were thinking, and how decisions were made.

That is, the content should be made available, but the metadata should be redacted. This immediately put me in mind of the Communications Data Bill, which seems likely to resurface any time soon, and which wants to collect communications metadata (who spoke to whom) but doesn’t directly let folk peek at the content. (See also: From Communications Data to #midata – with a Mobile Phone Data Example. In passing, I also note that the cavalier attitude of previous governments to passing hasty, ill-thought-out legislation in the communications data area at least is hitting home. It seems that the Data Retention and Investigatory Powers Act (DRIPA) 2014 is “inconsistent with European Union law”. Oops…)

Higgerson also writes:

Politicians aren’t stupid. Neither are civil servants. They do, however, often tend to be out of touch. The argument that ‘open data’ makes a government transparent is utter bobbins. Open data helps people to see the data being used to inform decisions, and see the impact of previous decisions. It does not give context, reasons or motives. Open data, in this context, is being used as a way of watering down the public’s right to know.

+1 on that… Transparency comes more from the ability to keep tabs on the decision making process, not just the data. (Some related discussion on that point here: A Quick Look Around the Open Data Landscape.)

Tata F1 Connectivity Innovation Prize, 2015 – Telemetry Dash

I’ve run out of time trying to put together an entry for the first round of this year’s Tata F1 Connectivity Innovation Prize (brief [PDF]), which sets the challenge of designing a data dashboard that displays the following information:

[Image: extract from the Challenge 1 brief for the F1 Connectivity Innovation Prize (prize.tatacommunications.com)]

Just taking the data as presented turns it into something of an infographic design exercise (I wonder if anyone will submit entries using infogr.am?!), but the reality is much more that it needs to be a real-time telemetry dashboard.

My original sketch is just a riff on the data as given:

[Image: initial dashboard sketch]

(From the steering angle range, I guess straight ahead must be 90 degrees?! Which would put 12 degrees as a stupidly sharp left turn? Whatever… you get the idea!)

Had I had time, I think I’d have extended this to include historical traces of some of the data, eg using something like the highcharts dynamic demo to stream a recent history of particular values, perhaps taking inspiration too from Making Sense of Squiggly Lines. One thing I did think might be handy in this context was a “sample and hold” colour digit or background alert that would retain a transient change for a second or two – for example, recording that the steering wheel had been given a quick left-right – and could direct attention to the historical trace if the original incident was missed or needed clarification.

The positioning of RPM then throttle is based on the idea that the throttle is a request for revs. Putting throttle (racing green for go) and brake (red for stop) next to each other allows control commands to be compared, and putting brake and speed (Mercedes silver/grey – these machines are built for speed) next to each other is based on the idea that you brake to retard the vehicle (i.e. slow it down). (I also considered the idea of the speed indicator as a vertical bar close to the gear indicator, but felt that if the bars are rapidly changing, which they are likely to do, it could be quite jarring if vertical bars are going up and down at right angles to each other? What I hope the current view would do is show more of a ratchet effect across all the bars.) The gear indicator helps group these four indicators together. (I think that white background should actually be yellow?) In the event of a gear being lost, the colour of that gear number could fade further through grey towards black. A dot to the right of the scale could also indicate the ideal gear to be in at any particular point.

The tyre display groups the tyres and indicates steering angle as well as tyre temperature, colour coded according to a spectrum colour scale. (The rev counter is also colour coded.) The temperature values are also displayed in a grid to allow for easy comparison, and again are colour coded to match. The steering angle is also displayed as a numerical indicator value, and further emphasised by the Mercedes logo (Mercedes are co-sponsoring the competition, I think? That said, I suspect their brand police, if they are anything like the OU’s, may have something to say about tilting the logo?!) The battery indicator (CC: “Battery icon” by Aldendino) is grouped with the tyres on the grounds that battery and tyres are both resources that need to be managed.

In additional material, I’d possibly also have tried to demo some alerts, such as an overcooked tyre (note the additional annotation, which should have been in the original, showing the degrees C unit):

[Image: dashboard sketch with overcooked tyre alert]

and perhaps also included a note about possible additional channels – hinting at tyre pressure based on the size of each tyre, perhaps, or showing where another grid showing individual tyre pressures might go, or (more likely, assuming a user-interactive display) a push-to-toggle view, or even a toggling display, that shows tyre pressure in the same location at different times. There probably also needs to be some sort of indication of brake balance in there too – perhaps a dot that moves around the central grid cross, perhaps connected by a line to the origin of the cross?

The brief also asks for some info about refresh rates – Tata are in the comms business after all… I guess things to take into account are the bandwidth of the telemetry from the car (2 megabits per second looks reasonable?), the width of the data from each sensor along with the sampling rates (info from ECU specs), and perhaps a bit of psychology (what sorts of refresh rate can folk cope with when numerical digit displays update – when watching a time code on a movie, for example?). Maybe also check out some bits and pieces on realtime dashboard design, and example race dashboard designs, to see what sorts of metaphor or common design styles are likely to be familiar to team staff (and hence not need as much decoding). Looking back at last year’s challenge might also be useful. E.g. the timing screen whose data feed was a focus there used a black background and a palette of white, yellow, green, purple, cyan and red. There are conventions associated with those colours that could perhaps be drawn on here. (Also, using those colours perhaps makes sense in that race teams are likely to be familiar with distinguishing those colours and associating meaning with them.)

I’ve never really tried to put a dashboard together… There’s lots to consider, isn’t there?!