Robot Workers?

A lazy post that does nothing much more than rehash and link bullet points from the buried lede that is someone else’s post…

It seems like folk over at the Bank of England have been in the news again about robots taking over human jobs (Bank of England chief economist [Andy Haldane] warns on AI jobs threat); this follows on from a talk earlier this year by Mark Carney at the Public Policy Forum in Toronto [slides] and is similar in kind to other speeches coming out of the Bank of England over the last few years (Are Robots Threatening Jobs or Are We Taking Them Ourselves Through Self-Service Automation?).

The interview(?) was presumably in response to a YouGov survey on Workers and Technology: Our Hopes and Fears associated with the launch of a Fabian Society and Community Commission on Workers and Technology.

(See also a more recent YouGov survey on “friends with robots” which asked “In general, how comfortable or uncomfortable do you think you would be working with a colleague or manager that was a robot?” and “Please imagine you had received poor service in a restaurant or shop from a robot waiter/ shop assistant that is able to detect tone and emotion in a human’s voice… Do you think you would be more or less likely to be rude to the robot, than you would to a human waiter/ shop assistant, or would there be no difference? (By ‘rude’, we mean raising your voice, being unsympathetic and being generally impolite…)”.)

One of the job categories being created by automation is that of human trainers, who help generate the marked-up data that feeds the machines. A recent post on The Lever, “Google Developers Launchpad’s new resource for sharing applied-Machine Learning (ML) content to help startups innovate and thrive” [announcement], asks Where Does Data Come From?. The TLDR answer?

  • Public data
  • Data from an existing product
  • Human-in-the-loop (e.g. a human driver inside an “autonomous” vehicle)
  • Brute force (e.g. slurping all the data you can find; hello Google/Facebook etc etc)
  • Buying the data (which means someone is also selling the data, right?)

A key part of many machine learning approaches is the use of labelled datasets that the machine learns from. This means taking a picture of a face, for example, that a human has annotated with areas labelled “eyes”, “nose”, “mouth”, and then training the ’pootah to associate particular features in the photographs with those labels, so that it can hopefully pick out the corresponding elements in a previously unseen photo.
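The idea can be sketched in a few lines of plain Python using a one-nearest-neighbour rule; to be clear, the “facial feature” numbers and labels below are entirely invented for illustration, standing in for whatever features a real system would extract:

```python
import math

# Toy "labelled dataset": each human-annotated example is a pair of
# invented numeric features plus the label a human assigned to it.
labelled = [
    ((0.9, 0.8), "eyes"),
    ((0.85, 0.75), "eyes"),
    ((0.4, 0.2), "nose"),
    ((0.45, 0.25), "nose"),
    ((0.2, 0.6), "mouth"),
    ((0.25, 0.65), "mouth"),
]

def predict(x):
    """1-nearest-neighbour: return the label of the closest labelled example."""
    return min(labelled, key=lambda item: math.dist(item[0], x))[1]

# A "previously unseen" example, closest to the human-labelled "eyes" points.
print(predict((0.88, 0.78)))
```

The point of the sketch is simply that the machine never sees “eyes” as a concept; it only inherits whatever associations the human annotators baked into the training labels.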

Here’s a TLDR summary of part of the Lever post, concerning where these annotations come from:

  • External annotation service providers
  • Internal annotation team
  • Getting users to generate the labels (so the users do folk in external annotation service providers out of a job…)

The post also identifies several companies that provide external annotation services… Check them out if you want to get a glimpse of a future of work that involves feeding a machine…

  • Mechanical Turk: Amazon’s marketplace for getting people to do piecemeal bits of work for pennies that other people often sell as “automated” services, which you might have thought meant “computerised”. Which it is, in that a computer assigns the work to essentially anonymous, zero-hours contract workers. Where it gets really amusing is when folk create bots to do the “human work” that other folk are paying Mechanical Turk for;
  • Figure Eight: a “Human-in-the-Loop Machine Learning platform transforms unstructured text, image, audio, and video data into customized high quality training data”… Sounds fun, doesn’t it? (The correct answer is probably “no”);
  • Mighty AI: “a secure cloud-based annotation software suite that empowers you to create the annotations you need for developing and validating autonomous perception systems”, apparently… You get a sense of how it’s supposed to work from the blurb:
      • “Mighty Community”, a worldwide community that provides our customers with timely, high-quality annotations, offloading the burden to find, train, and manage a pool of annotators to generate ground truth data.
      • Expert global community allows for annotations 24 hours/day
      • Training on Mighty Tools eliminates annotator on-boarding time
      • Available at a moment’s notice to instantly scale customer annotation programs
      • Community members covered by confidentiality agreement
      • Automated annotation management process with Mighty Studio
      • Close integration with Mighty Quality eliminates the need to find and correct annotation errors
  • Playment: “With 300,000+ skilled workers ready to work round-the-clock on Playment’s mobile and web app, we can generate millions of labels in a matter of hours. As the workers do more work, they get better and Playment is able to accomplish much more work in lesser time.” (And then when they’ve done the work, the machine does the “same” work with reduced marginal cost… Hmm, thinks, how do the human worker costs (pennies per task) compare with the server costs for large ML services?)

Happy days to come, eh…?

PS see also Amazon SageMaker Ground Truth – Build Highly Accurate Datasets and Reduce Labeling Costs by up to 70%. Maybe also In the Coming Automated Economy, People Will Work for AI — A new role for humans: prepping data so AI can learn to do our jobs.

Author: Tony Hirst

I'm a Senior Lecturer at The Open University, with an interest in #opendata policy and practice, as well as general web tinkering...
