Category: Tinkering

A First Attempt at An Amazon Echo Alexa Skills App Using Python: Parlibot, A UK Parliament Agent

Over the last couple of years, I’ve been dabbling with producing simple textual reports from datasets that can be returned in response to simple natural language style queries using chat interfaces such as Slack (for example, Sketching a Slack Slash Parliamentary Auto-Responder Using AWS Lambda Functions). The Amazon Echo, which launches in the UK at the end of September, provides another context for publishing natural language style responses, in this case in the form of spoken responses to spoken requests.

In the same way that apps brought a large amount of feature level functionality to mobile phones, the Amazon Echo provides an opportunity for publishers to develop “skills” that can respond to particular voice commands issued within hearing of the Echo. Amazon is hopeful that one class of commands, Smart Home Skills, will be used to bootstrap a smart home ecosystem that allows you to interact with smart-home devices through voice commands, such as commands to turn your lights on and off, or questions about the status of your home (“is the garage door still open?”, for example). Another class of services relates to more general information based services, or even games, which can be developed using a second API environment, the Alexa Skills Kit. For a full range of available skills, see the Alexa Skills Store.

The Alexa Skills Kit has a similar sort of usability to other AWS services (i.e. it’s a bit rubbish…), but I thought I’d give it a go, repurposing some old functions built around the UK Parliament API (such as finding out which committees a particular MP sits on, or who the members of a particular committee are), as well as writing some new ones.

For example, I thought it might be amusing to try to implement a skill that could respond to questions like the following:

  • what written statements were published last week?
  • were there any written statements published last Tuesday?

using some of the “natural language” date-related Python functions I dabbled with yesterday.

One of the nice things about the Alexa Skills API is that it also supports conversational contexts. For example, an answer to one of the above questions (generated by my code) might take the form “There were 27 written statements published then”, but session state associated with that response can also be passed back as metadata to the Alexa service, and then returned from Alexa as session metadata attached to a follow-up question. The answer to the follow-up question can then draw on context generated earlier in the conversation. So for example, exchanges such as the following now become possible:

  • Q: were there any written statements published last Tuesday?
  • A: There were 27 written statements published then. Do you want to know them all?
  • Q: No, just the ones from DCLG.
  • A: Okay, there were three written statements issued by the Department for Communities and Local Government last week. One on …. by….; etc etc 

So how can we build an Alexa Skill? I opted for implementing one using Python, with the answer engine running on my Reclaim Hosting webserver rather than as an AWS Lambda Function, which I think Amazon would prefer. (The AWS Lambda functions are essentially free, but it means you have to go through the pain of using another AWS service.) For an example of getting a Python application up and running on your own web host using cPanel, see here.

To make life simpler, I installed the Flask-ASK library (docs), which extends the Flask web application framework so that it plays nicely with the Alexa Skills API. (There’s a standalone tutorial that runs without the need for any web hosting described here: Flask-Ask: A New Python Framework for Rapid Alexa Skills Kit Development.)

The Flask-Ask library allows you to create two sorts of response types in your application that can respond to “intents” defined as part of the Alexa skill itself:

  • a statement, which is a response from Alexa that essentially closes a session;
  • and a question, which keeps the session open and allows you to pop session state into the response so you can get it back as part of the next intent command issued from Alexa in that conversation.

The following bit of code shows how to decorate a function that will handle a particular Alexa Skill intent. The session variable can be used to pass session state back to Alexa that can be returned as part of the next intent. The question() wrapper packages up the response (txt) appropriately and keeps the conversational session alive.

@ask.intent('WrittenStatementIntent')
def writtenStatement(period, myperiod):
    # Stash the query context in the session so follow-up intents can reuse it
    session.attributes['period'] = period
    session.attributes['myperiod'] = myperiod
    session.attributes['typ'] = 'WrittenStatementIntent'
    # statementGrabber() is a dummy stub returning a couple of text strings (summary, full report)
    txt, tmp = statementGrabber(period, myperiod)
    if tmp != '':
        txt = '{} Do you want to hear them all?'.format(txt)
    else:
        txt = "I don't know of any."
    return question(txt)

We might then handle a response identified as affirmative (“yes, tell me them all”) using something like the following, which picks up the session state from the response, generates a paragraph describing all the written statements and returns it, suitably packaged, as a session-ending statement().

@ask.intent('AllOfThemIntent')
def sayThemAll():
    # Pull the context of the original question back out of the session
    period = session.attributes['period']
    myperiod = session.attributes['myperiod']
    # Regenerate the full report from the stashed context
    _, tmp = statementGrabber(period, myperiod)
    return statement(tmp)

So how do we define things on the Alexa side?  (An early draft of my config can be found here.) To start with, we need to create a new skill and give it a name. A unique ID is created for the application that is passed in all service requests and that we can use as a key in our application logic to decide whether or not to accept and respond to a request from the Alexa Skill server. (For convenience, I defined an open service that can accept all requests. I’m not sure if Flask-Ask has a setting that allows the application to be tied to one or more Alexa Skill IDs?)
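For what it’s worth, one way of approximating that check within the application itself might be to peek at the application ID carried in the incoming request JSON before the intent handlers run. This is only a sketch: the skill ID below is a placeholder, and Flask-Ask may well provide a cleaner hook for this.

from flask import request

ALLOWED_SKILL_IDS = {'amzn1.ask.skill.xxxx'}  # placeholder: your skill's application ID

@app.before_request
def check_skill_id():
    # The Alexa request JSON carries the skill's application ID in the session block
    body = request.get_json(silent=True) or {}
    app_id = body.get('session', {}).get('application', {}).get('applicationId')
    if app_id and app_id not in ALLOWED_SKILL_IDS:
        return 'Unrecognised application ID', 403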


The second thing we need to do is actually define the interactions that the skill will engage in. This is composed of three parts:

  • an Intent Schema, defined as a JSON object, that specifies a list of intents that the skill can handle. Each intent must be given a unique label (for example, “AllOfThemIntent”), and may be associated with one or more slots. Each slot has a name and a type: the name corresponds to the name of a variable that may be captured and passed (under that name) to the application handler; the type is either a predefined Amazon data type (for example, AMAZON.DATE, which captures date-like things, including some simple natural language date terms such as yesterday) or a custom data type;
  • one or more user-defined custom data types, defined as a list of keywords that Alexa will try to match exactly (I think? I don’t think fuzzy match, partial match or regular expression matching is supported? If it is, please let me know how via the comments…)
  • some sample utterances, keyed by intent and giving an example of a phrase that the skill should be able to handle; slots may be included in the example utterances, using the appropriate name as provided in the corresponding intent definition (a sketch of all three parts follows this list).
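By way of illustration, the Intent Schema and sample utterances for the statements conversation might look something like the following. The slot and custom type names here are my own guesses at a consistent setup rather than a copy of my actual config:

{
  "intents": [
    {"intent": "WrittenStatementIntent",
     "slots": [{"name": "period", "type": "AMAZON.DATE"},
               {"name": "myperiod", "type": "LIST_OF_PERIODS"}]},
    {"intent": "AllOfThemIntent"},
    {"intent": "LimitByDeptIntent",
     "slots": [{"name": "dept", "type": "LIST_OF_DEPTS"}]}
  ]
}

with sample utterances along the lines of:

WrittenStatementIntent what written statements were published {period}
WrittenStatementIntent were there any written statements published {myperiod}
AllOfThemIntent yes tell me them all
LimitByDeptIntent no just the ones from {dept}

and the custom types (LIST_OF_PERIODS, LIST_OF_DEPTS) each defined as a simple list of values, one per line, such as department names or abbreviations (DCLG, Department for Communities and Local Government, and so on).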


In the above case, I start to define a conversation where a WrittenStatementIntent is intended to identify written statements published on a particular day or over a particular period, and then a follow up AllOfThemIntent can be used to list the details of all of them, or a LimitByDeptIntent can be used to limit the reporting to just statements from a specific department.

When you update the interaction model, it needs rebuilding, which may take some time (wait for the spinny thing over the Interaction Model menu item to stop before you try to test anything).

The next part of the definition is used to specify where the application logic can be found. As mentioned, this may be defined as an AWS Lambda function, or you can host it yourself on an https server. In the latter case, for a Flask app, you need to provide a URL where the root of the application is served from.


If you are using your own host, you need to provide some information about the trust certificate. I published my application logic as an app on Reclaim Hosting, which appears to offer https out of the box (though I haven’t tried it for a live/published Alexa skill yet).


With the config stuff all in place, you now just need to make sure some application logic is in place to handle it.

For reference, along with the stub of application logic shown above (which just needs a dummy statementGrabber() function that optionally accepts a couple of arguments and returns a couple of text strings for testing purposes), I also headed my application file with the following set-up components (note that as part of the WSGI handling that cPanel uses to run the app, I am creating an application variable that points to the Flask app).

import logging
from random import randint
from flask import Flask, render_template
from flask_ask import Ask, statement, question, session

app = Flask(__name__)
# The cPanel/Passenger WSGI setup expects to find the app in a variable called application
application = app

ask = Ask(app, "/")

At the end of the application code, we can fire it up…

if __name__ == '__main__':
    app.run(debug=True)

Get the app running on the server, and now we can test it from the Alexa Skills environment. Unlike deployed skills accessed via the Echo, we don’t need to “summon” the app for testing purposes – we can just enter the utterance directly. The JSON code passed to the server is displayed as the Service Request and the Service Response from the application server is also displayed.



The test panel can also handle conversations established by using Flask-Ask question() wrappers, as shown below:


In this case, we filter down on the written statements for last Thursday to just report on the ones issued by the Department for Culture, Media and Sport.

It’s worth noting that Alexa seems to have a limit on the number of characters allowed when generating a voice output (8000 characters). This suggests that adding some sort of sensible paging handler to the application logic could make sense if you need to return a large response; for example, something that chunks up the response, tells it to you piece by piece, and prompts you between each chunk to check you want to hear the next part.
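I haven’t implemented this, but the sort of paging I have in mind might look something like the following sketch: the answer handler would call chunk_text() and stash the chunks in the session, and a follow-up intent (the NextChunkIntent name and character budget are placeholders) would read them back a piece at a time.

MAX_CHARS = 7500  # stay safely under the ~8000 character response limit

def chunk_text(txt, size=MAX_CHARS):
    # Naive chunker: pack whole sentences into chunks no longer than size characters
    chunks, current = [], ''
    for sentence in txt.split('. '):
        candidate = '{}. {}'.format(current, sentence) if current else sentence
        if len(candidate) > size and current:
            chunks.append(current)
            current = sentence
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

@ask.intent('NextChunkIntent')
def nextChunk():
    # Read back the next chunk, keeping the rest in the session for the next request
    remaining = session.attributes.get('remaining', [])
    if not remaining:
        return statement("That's everything.")
    head, rest = remaining[0], remaining[1:]
    session.attributes['remaining'] = rest
    if rest:
        return question('{} Shall I carry on?'.format(head))
    return statement(head)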

With testing done, and a working app up and running, all that remains is to go through the legal fluff required to submit the app for publishing (which I haven’t done; a note says you can’t edit the app whilst it’s undergoing approval, but I’m not sure if you can then go back to editing it once it is published?)

A couple of things I learned along the way: firstly, when defining slots, it can be useful to have a controlled vocabulary to hand. For Parliament, things like the Members’ API Reference Data Service can be handy, eg for generating a list of MP names or committee names (in another post I’ll give some more examples of the queries I can run). Secondly, when thinking about conversation design, you need to think about the various bits of state that can be associated with a conversation. For example, when making a query about an MP, it makes sense to retain the name of (or an identifier for) the MP as part of the session state so that you can refer to it later. If a conversation went “who is the MP for the Isle of Wight?”, “what committees are they on?”, “who else is on those committees?”, it would make sense to capture the list of committees as state somehow when responding to the second question.

One approach I took to managing state within the application was to cache the calls to URLs requested when forming the response to one question. If I preserve enough session state to let me pull that cached data back, I can reanalyse it without having to re-request it from the original URL when putting together a response to a follow-up question.
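In sketch form (assuming the requests and requests_cache packages; my actual code differs in detail), that just means routing all API calls through a cached getter and keeping the request URL in the session:

import requests
import requests_cache

# Transparently cache HTTP responses for an hour, keyed by URL
requests_cache.install_cache('parlibot_cache', expire_after=3600)

def get_json(url):
    # A follow-up intent that finds this URL in the session can call this again
    # and get the cached response back rather than hitting the API a second time
    return requests.get(url).json()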

Something it would be nice to have is a list of synonyms for terms in the slots definition, and maybe even a crude lookup that could be used as part of an OpenRefine style reconciliation service to try to partially match slot terms. (I’m not sure how well the model building does this anyway, eg if you put near misses in the slot definitions; or whether it just does exact matching?)

Another takeaway is that it probably makes sense to try to design the code for generating text from data or APIs so that it can be used in a variety of contexts – Slack, Alexa/Echo, email, press release generation, etc – without much, if any, retooling. Ideally, it would make sense to define a set of text generation functions or API calls that could in turn be called via use-case application wrappers (eg one for Slack, one for Alexa, etc). Issues arise here when it comes to conversation management; Alexa manages conversations via session state, for example. But maybe there’s a tool that could help here, by acting as application independent conversational middleware? That’ll be the next app I need to play with…
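In outline, that separation might look something like the following (the function names here are hypothetical; the Slack payload is the standard slash command response format):

# Core text generator: takes data, returns plain text; knows nothing about the channel
def written_statements_report(statements):
    return 'There were {} written statements published then.'.format(len(statements))

# Thin, channel specific wrappers around the same generator
def slack_response(statements):
    # A Slack slash command handler returns a JSON payload
    return {'response_type': 'in_channel',
            'text': written_statements_report(statements)}

def alexa_response(statements):
    # A Flask-Ask intent handler wraps the same text in a statement()
    return statement(written_statements_report(statements))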

PS If you would like to see further posts here exploring Amazon Echo/Alexa skills, why not help me explore the context and gift me an Echo from my Patronage Wishlist?

“Natural Language” Time Periods in Python

Mulling over a search feed that includes date range limits, I had a quick look for a python library that includes “natural language” functions for describing different date ranges. Not finding anything offhand, I popped some quick starter-for-ten functions up at this gist, which should also be embedded below.

It includes things like today(), tomorrow(), last_week(), later_this_month() and so on.
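For the record, here is a rough re-sketch of the idea, with each function returning a (start, end) date pair; the versions in the gist may differ in signature and detail:

from datetime import date, timedelta

def today():
    d = date.today()
    return d, d

def tomorrow():
    d = date.today() + timedelta(days=1)
    return d, d

def last_week():
    # The full week (Monday to Sunday) before the current one
    monday_this_week = date.today() - timedelta(days=date.today().weekday())
    return monday_this_week - timedelta(days=7), monday_this_week - timedelta(days=1)

def later_this_month():
    # From today to the end of the current month
    d = date.today()
    first_of_next_month = date(d.year + (d.month == 12), d.month % 12 + 1, 1)
    return d, first_of_next_month - timedelta(days=1)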

If you know of a “proper” library that does this, please let me know via the comments…

Creating a Simple Python Flask App via cPanel on Reclaim Hosting

I’ve had my Reclaim Hosting package for a bit over a year now, and not really done anything with it, so I had a quick dabble tonight looking for a way of installing and running a simple Python Flask app.

Searching around, it seems that cPanel offers a way in to creating a Python application:


Seems I then get to choose a python version that will be installed into a virtualenv for the application. I also need to specify the name of a folder in which the application code will live and select the domain and path I want the application to live at:


Setting up the app generates a folder into which to put the code, along with a public folder (into which resources should go) and a file that is used by a piece of installed sysadmin voodoo magic (Phusion Passenger) to actually handle the deployment of the app. (An empty folder is also created in the public_html folder corresponding to the app’s URL path.)


Based on the Minimal Cyborg How to Deploy a Flask Python App for Cheap tutorial, this Passenger file needs to link to my app code.

Passenger is a web application server that provides a scriptable API for managing the running of web apps (Passenger/py documentation).

For running Python apps, this WSGI file is what Passenger uses to launch the application; if you change the wsgi file, I think you need to restart the app (see below) for the changes to be picked up.

A Flask app is normally run from the command line with something like python myapp.py. On this hosting, the Passenger web application manager instead uses a WSGI file associated with the application to manage it. For our simple Flask application, this boils down to exposing an object called application that represents the app. If we create the app in a file (myapp.py, say) and assign it to a variable called application, the WSGI file can pick it up simply by importing it: from myapp import application.

WSGI works by defining a callable object called application inside the WSGI file. The WSGI server hands each incoming request to this callable and sends whatever it returns back to the client as the response.

Flask’s application object, created by a MyApp = Flask(__name__) call, is a valid WSGI callable object. So our WSGI file is as simple as importing the Flask application object (MyApp) from the app file and calling it application.

But first we need to create the application – for our demo, we can do this using a single file in the app directory. First create the file:


then open it in the online editor:


Borrowing the Minimal Cyborg “Hello World” code:

from flask import Flask

app = Flask(__name__)
application = app  # our hosting requires the app object to be called application in passenger_wsgi

@app.route("/")
def hello():
    return "This is Hello World!\n"

if __name__ == "__main__":
    app.run()

I popped it into the file and saved it.

(Alternatively, I could have written the code in an editor on my desktop and uploaded the files.)

We now need to edit the passenger_wsgi.py file so that it loads in the app code and gets from it an object that the Passenger runner can work with. The simplest approach seemed to be to load in the app file (from myapp) and pull in the variable pointing to the Flask application from it (import application). I think Passenger requires the object to be made available in a variable called application?


That is, comment out the original contents of the file (just in case we want to crib from them later!) and import the application from the app file: from myapp import application.
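So the edited file ends up as little more than the import (assuming the app code lives in myapp.py; the commented-out lines stand in for whatever boilerplate cPanel generated):

# passenger_wsgi.py
# Original cPanel-generated contents commented out, kept for reference
#import os
#import sys
#...

# Pull in the Flask app object, exposed under the name Passenger expects
from myapp import application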

So what happens if I now try to run the app?


Okay – it seemed to do something but threw an error – the flask package couldn’t be imported. Minimal Cyborg provides a hint again, specifically “make sure the packages you need are installed”. Back in the app config area, we can identify packages we want to add, and then update the virtualenv used for the app to install them.

And if we now try to run the app again:

So now it seems I have a place I can pop some simple Python apps – like some simple Slack/slash command handlers, perhaps…

PS if you want to restart the application, I’m guessing all you have to do is click the Restart button in the appropriate Python app control panel.

Simple Demo of Green Screen Principle in a Jupyter Notebook Using MyBinder

One of my favourite bits of edtech, in the form of open educational technology infrastructure, at the moment is mybinder (code), which allows you to fire up a semi-customised Docker container and run Jupyter notebooks based on the contents of a github repository. This makes it trivial to share interactive Jupyter notebook demos, as long as you’re happy to make your notebooks public and pop them into github.

As an example, here’s a simple notebook I knocked up yesterday to demonstrate how we could create a composited image from a foreground image captured against a green screen, and a background image we wanted to place behind our foregrounded character.

The recipe was based on one I found in a Bryn Mawr College demo (Bryn Mawr is one of the places I look to for interesting ways of using Jupyter notebooks in an educational context.)

The demo works by looking at each pixel in turn in the foreground (greenscreened) image and checking its RGB colour value. If it looks to be green, use the corresponding pixel from the background image in the composited image; if it’s not green, use the colour values of the pixel in the foreground image.

The trick comes in setting appropriate threshold values to detect the green coloured background. Using Jupyter notebooks and ipywidgets, it’s easy enough to create a demo that lets you try out different “green detection” settings using sliders to select RGB colour ranges. And using mybinder, it’s trivial to share a copy of the working notebook – fire up a container and look for the Green screen.ipynb notebook: demo notebooks on mybinder.
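A minimal sketch of the idea using numpy, PIL and ipywidgets (the notebook itself follows the Bryn Mawr recipe, so the details differ; the image file names and threshold defaults here are illustrative):

import numpy as np
from PIL import Image
from ipywidgets import interact

def composite(fg, bg, r_max=100, g_min=100, b_max=100):
    # fg is the greenscreened image, bg the replacement background (RGB numpy arrays)
    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
    # Treat a pixel as "green screen" if green is high and red/blue are both low
    mask = (g > g_min) & (r < r_max) & (b < b_max)
    out = fg.copy()
    out[mask] = bg[mask]
    return Image.fromarray(out)

fg = np.array(Image.open('foreground.png').convert('RGB'))
bg = np.array(Image.open('background.png').convert('RGB').resize((fg.shape[1], fg.shape[0])))

# Sliders for tuning the green detection thresholds interactively
interact(lambda r_max=100, g_min=100, b_max=100: composite(fg, bg, r_max, g_min, b_max),
         r_max=(0, 255), g_min=(0, 255), b_max=(0, 255))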


(You can find the actual notebook code on github here.)

I was going to say that one of the things I don’t think you can do at the moment is share a link to an actual notebook, but in that respect I’d be wrong… The reason I thought that was that to launch a mybinder instance, eg from the psychemedia/ou-tm11n github repo, you use a mybinder URL that names the repo; this then launches a container instance at a dynamically created location – eg http://SOME_IP_ADDRESS/user/SOME_CONTAINER_ID – with a URL and container ID that you don’t know in advance.

The notebook contents of the repo are copied into a notebooks folder in the container when the container image is built from the repo, and accessed down that path on the container URL, such as http://SOME_IP_ADDRESS/user/SOME_CONTAINER_ID/notebooks/Green%20screen%20-%20tm112.ipynb.

However, on checking, it seems that any path added to the mybinder call is passed along and appended to the URL of the dynamically created container.

Which means you can add the path to a notebook in the repo to the notebooks/ path when you call mybinder, and the path will be passed through to the launched container.

In other words, you can share a link to a live notebook running on a dynamically created container – such as this one – by calling mybinder with the local path to the notebook.

You can also go back up to the Jupyter notebook homepage from a notebook page by going up a level in the URL, to the notebooks folder.

I like mybinder a bit more each day:-)

Querying Panama Papers Neo4j Database Container From a Linked Jupyter Notebook Container

A few weeks ago I posted some quick doodles showing, on the one hand, how to get the Panama Papers data into a simple SQLite database and in another how to link a neo4j graph database to a Jupyter notebook server using Docker Compose.

As the original Panama Papers investigation used neo4j as its backend database, I thought putting the data into a neo4j container could give me the excuse I needed to start looking at neo4j.

Anyway, it seems as if someone has already pushed a neo4j Docker container image preseeded with the Panama Papers data, so here’s my quickstart.

To use it, you need to have Docker installed, download the docker-compose.yaml file and then run:

docker-compose up

If you do this from a command line launched from Kitematic, Kitematic should provide you with a link to the neo4j database, running on the Docker IP address and port 7474. Log in with the default credentials (neo4j/neo4j) and change the password to panamapapers (all lower case).

Download the quickstart notebook into the newly created notebooks directory, and you should be able to see it from the notebooks homepage on Docker IP address port 8890 (or again, just follow the link from Kitematic).

I’m still trying to find my way around both the py2neo Python wrapper and the neo4j Cypher query language, so the demo thus far is not that inspiring!
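For orientation, a first query from the notebook might look something like this (a sketch, assuming a py2neo v3 style connection; the host name is whatever the docker-compose file exposes the neo4j service as):

from py2neo import Graph

# Connect to the linked neo4j container using the password set above
graph = Graph(host="neo4j", password="panamapapers")

# Count nodes by label to get a feel for the data model
for record in graph.run("MATCH (n) RETURN labels(n) AS labels, count(*) AS num"):
    print(record["labels"], record["num"])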

And I’m not sure when I’ll get a chance to look at it again…:-(

OpenRobertaLab – Simple Robot Programming Simulator and UI for Lego EV3 Bricks

Rather regretting not having done a deep dive into programming environments for the Lego EV3 somewhat earlier, I came across the inspired OpenRobertaLab (code, docs) only a couple of days ago.


(Way back when, in the first incarnation of the OU Robotics Outreach Group, we were part of the original Roberta project which was developing a European educational robotics pack, so it’s nice to see it’s continued.)

OpenRobertaLab is a browser accessible environment that allows users to use blocks to program a simulated robot.


I’m not sure how easy it is to change the test track used in the simulator? That said, the default does have some nice features – a line to follow, colour bars to detect, a square to drive round.

The OU Robotlab simulator supported a pen down option that meant you could trace the path taken by the robot – I’m not sure if RobertaLab has a similar feature?


It also looks as if user accounts are available, presumably so you can save your programmes and return to them at a later date:


Account creation looks to be self-service:


OpenRobertaLab also allows you to program a connected EV3 robot running leJOS, the community developed Java programming environment for the EV3. It seems that it’s also possible to connect a brick running ev3dev to OpenRobertaLab using the robertalab-ev3dev connector. This package is preinstalled in ev3dev, although it needs enabling (and the brick rebooting) to run. SSH into the brick and then, from the brick command line, run:

sudo systemctl unmask openrobertalab.service
sudo systemctl start openrobertalab.service

Following a reboot, the Open Robertalab client should now automatically run and be available from the OpenRobertaLab menu on the brick. To stop the service / cancel it from running automatically, run:

sudo systemctl stop openrobertalab.service
sudo systemctl mask openrobertalab.service

If the brick has access to the internet, you should now be able to simply connect to the public OpenRobertaLab server.

Requesting a connection from the brick gives you an access code you need to enter on the OpenRobertaLab server. From the robots menu, select connect...:


and enter the provided connection code (use the connection code displayed on your EV3):


On connecting, you should hear a celebratory beep!

Note that this was as far as I got – Open Robertalab told me a more recent version of the brick firmware was available and suggested I install it. Whilst it claimed it may still be possible to run commands using the old firmware, that didn’t seem to be the case?

As well as accessing the public Open Robertalab environment on the web, you can also run your own server. There are a few dependencies required for this, so I put together a Docker container psychemedia/robertalab (Dockerfile) containing the server, which means you should be able to run it using Kitematic:


(For persisting things like user accounts and saved programmes, there should probably be a shared data container to persist that info?)

A random port will be assigned, though you can change this to the original default (1999):


The simulator should run fine using the IP address assigned to the docker machine, but in order to connect a robot on the same local WiFi network to the Open RobertaLab server, or connect to the programming environment from another computer on the local network, you will need to set up port forwarding from the Docker VM:


See Exposing Services Running in a Docker Container Running in Virtualbox to Other Computers on a Local Network for more information on exposing the containerised Open Robertalab server to a local network.

On the EV3, you will need to connect to a custom Open Robertalab server. The settings will be the IP address of the computer on which the server is running, which you can find on a Mac from the Mac Network settings, along with the port number the server is running on:

So for example, if Kitematic has assigned the port number 32567, and you didn’t otherwise change it, and your host computer’s IP address is 192.168.1.100 (say), you would enter 192.168.1.100:32567 as the custom server address in the Open Robertalab connection settings on the brick. On connecting, you will be presented with a pass code as above, which you should then enter on your local OpenRobertaLab webpage.

Note that when trying to run programmes on a connected brick, I suffered the firmware mismatch problem again.

Exposing Services Running in a Docker Container Running in Virtualbox to Other Computers on a Local Network

Most of my experiments with Docker on my desktop machine to date have been focused on reducing installation pain and side-effects by running applications and services that I can access from a browser on the same desktop.

The services are exposed against the IP address of the virtual machine running docker, rather than localhost of the host machine, which also means that the containerised services can’t be accessed by other machines connected to the same local network.

So how do we get the docker container ports exposed on the host’s localhost network IP address?

If docker is running the containers via Virtualbox in the virtual machine named default, it seems all we need to do is tweak a couple of port forwarding rules in Virtualbox. So if I’m trying to get port 32769 on the docker IP address relayed to the same port on the host localhost, I can issue the following terminal command if the Docker Virtualbox is currently running:

VBoxManage controlvm "default" natpf1 "tcp-port32769,tcp,,32769,,32769"

which has syntax:

natpf<1-N> [<rulename>],tcp|udp,[<hostip>],<hostport>,[<guestip>],<guestport>

Alternatively, the rule can be created from the Network – Port Forwarding Virtualbox settings for the default box:


To clear the rule, use:

VBoxManage controlvm "default" natpf1 delete "tcp-port32769"

or delete from the Virtualbox box settings Network – Port Forwarding rule dialogue.

If the box is not currently running, use:

VBoxManage modifyvm "default" --natpf1 "tcp-port32769,tcp,,32769,,32769"
VBoxManage modifyvm "default" --natpf1 delete "tcp-port32769"

The port should now be visible at localhost:32769 and, by extension, may be exposed to machines on the same network as the host machine by calling the IP address of the host machine with the forwarded port number.

On a Mac, you can find the local IP address of the machine from the Mac’s Network settings: