Safety First AI…?

A conjunction in my feeds…

First, via Stephen Abram’s Library Lighthouse, a link to an analysis of the “unknown/unknowns” model (Characterizing unknown/unknowns) that turns up this handy tabular description of it:

And another similar model from 1950s therapeutic psychology known as the Johari window (Framework of the Day: Known Unknowns):

And then also, via a couple of Google blog feeds, a post on Safety-first AI for autonomous data center cooling and industrial control. My gut reaction was “who defines safety?”, assuming that what Google really wants to do is minimise costs/maximise efficiency whilst not trashing the machines – i.e. machine safety first (with human and environmental externalities not really considered…). But my second thought was: hmmm, what about incomplete models…? Given available training datasets, and the evolution of models based on available training data (and the biases associated with it), how do AI models react to “unknown unknowns”, and might those unknowns actually be identifiable by considering the (state?) space coverage afforded by the training set(s)?
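To make the coverage idea concrete, here’s a naive sketch (my own toy illustration, not anything from the Google post): flag a query as potentially “outside” the training set’s coverage if its distance to the nearest training example exceeds the largest nearest-neighbour gap seen within the training set itself. The data and threshold rule are entirely made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: points clustered in a small region of state space.
train = rng.normal(loc=0.0, scale=1.0, size=(500, 2))

def min_train_distance(x, train_set):
    """Distance from a query point to its nearest training example."""
    return np.min(np.linalg.norm(train_set - x, axis=1))

# Crude coverage threshold: the largest nearest-neighbour gap *within* the
# training set. (np.sort(...)[1] skips the zero distance to the point itself.)
threshold = max(
    np.sort(np.linalg.norm(train - p, axis=1))[1]
    for p in train
)

# A query near the training cluster falls inside the observed coverage...
in_coverage = min_train_distance(np.array([0.1, -0.2]), train) <= threshold

# ...while a far-away query is flagged as a potential "unknown unknown".
out_of_coverage = min_train_distance(np.array([10.0, 10.0]), train) > threshold
```

Obviously this says nothing about unknowns the feature representation can’t express at all, which is rather the point of the harder question above – but it at least separates “never saw anything like this” from “confidently wrong”.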

A naive search turns up a recent (unrefereed) preprint from a week ago – Unknown Examples & Machine Learning Model Generalization – and there are probably far better references, such as Identifying Unknown Unknowns in the Open World: Representations and Policies for Guided Exploration. Anyway, finding a go-to paper on this is something I’ll add to my ever expanding ‘to do’ list. (I just need to start making time to read more; cutting out Netflix boxsets is helping with this, but the pile just keeps getting bigger… :-( )

PS By the by, the mistitled(?) Harper’s Magazine essay by James Bridle from his New Dark Age book – Known Unknowns – provides some nice background setting… Bridle recalls an oft-told story (eg Neural Network Follies) of how training neural networks to identify tanks from a set of photos turned out to classify the wrong thing. For an (unsuccessful) attempt to track down the origins of the story, see Gwern Branwen’s The Neural Net Tank Urban Legend; this piece also includes links to other ‘machine-got-it-wrong’ examples that are referenced in the literature. For more machine anecdotes with some basis in the literature, see this collection of “oh…oops…” stories around artificial evolution: The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities.
