Terms of Engagement With the OpenAI API

Please note: quote formatting on this page is broken because WordPress craps up markdown styles when you edit a page. That is not AI, just crap code.

Remembering a time when I used to get beta invites on what seemed like a daily basis, I’ve just got my invite for the OpenAI API beta, home of the text-generating GPT-3 language model, and I notice the following clauses in the terms and conditions.

First up, you must agree not to attempt to steal the models:

> (d) You will not use the APIs to discover any underlying components of our models, algorithms, and systems, such as exfiltrating the weights of our models by cloning via logits.

Second, no pinching of the data:

> (e) You may not use web scraping, web harvesting, or web data extraction methods to extract data from the APIs, the Content, or OpenAI’s or its affiliates’ software, models or systems.

Third, societal harm warnings:

> (i) You will make reasonable efforts to reduce the likelihood, severity, and scale of any societal harm caused by your Application by following the provided Safety best practices. OpenAI may request information from you regarding your efforts to reduce safety risks, and such information may be used to assess compliance with these Terms as well as to inform improvements to the API.
>
> (j) You will not use the APIs or Content or allow any user to use the Application in a way that causes societal harm, including but not limited to:
>
> – (i) Misleading end users that Application outputs were human-generated for generative use cases that do not involve a human in the loop;
> – (ii) Generating spam; and
> – (iii) Generating content for dissemination in electoral campaigns.

The safety best practices include thinking like an adversary (for example, “[b]rainstorm the uses of your product you would be most concerned with – and importantly, how you might notice if these were happening”), filtering sensitive and unsafe content, eg using OpenAI’s own content filter, and keeping a human in the loop to “ensure serious incidents are addressed and can set appropriate expectations of what is handled by the AI vs. handled by a human” (a rough sketch of what the filtering and human-in-the-loop patterns might look like in code follows the quoted guidance below):

> Indicate clearly what is performed by an AI vs. handled by a human within your application, particularly in initial user interactions.
>
> Disclose any uses for which your application is not suitable, due to a lack of a “human in the loop” (e.g., this product is not a suitable replacement to dialing 911 or other formal mechanisms).
>
> Require a human to manually authorize or otherwise act upon suggestions from the API, particularly in consequential circumstances. Users should generally not be creating automated workflows around the API without a human exercising judgment as part of the process.
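
For what it’s worth, here’s what those two patterns, filtering the output and gating it behind a human decision, might look like wrapped around a completion call. This is a minimal sketch, assuming the beta-era `openai` Python client and the `davinci` engine; the `looks_unsafe()` and `human_approves()` helpers are placeholders of my own invention, standing in for OpenAI’s content filter and for whatever review queue or UI an application actually uses.

```python
# A rough sketch only: a completion request wrapped in a content check and a
# manual approval step, using the beta-era `openai` Python client.
# `looks_unsafe()` and `human_approves()` are illustrative placeholders, not
# part of the OpenAI API.

import openai

openai.api_key = "YOUR-API-KEY"  # from the beta dashboard


def looks_unsafe(text):
    """Placeholder for OpenAI's content filter or any other moderation check."""
    blocklist = ["example-banned-term"]  # illustrative only
    return any(term in text.lower() for term in blocklist)


def human_approves(text):
    """Placeholder human-in-the-loop gate; a real app might queue this for review."""
    print("--- Proposed output ---")
    print(text)
    return input("Send this to the end user? [y/N] ").strip().lower() == "y"


def generate_reply(prompt):
    response = openai.Completion.create(
        engine="davinci",   # beta-era engine name
        prompt=prompt,
        max_tokens=100,
        temperature=0.7,
    )
    draft = response.choices[0].text.strip()

    if looks_unsafe(draft):
        return None   # filtered: never shown to the end user
    if not human_approves(draft):
        return None   # a human declined to send it
    return draft


if __name__ == "__main__":
    reply = generate_reply("Write a short, friendly reminder that a library book is overdue.")
    print(reply or "[no reply sent]")
```

The particular code doesn’t matter; the shape does: nothing the model generates reaches an end user without passing a filter and an explicit human decision.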

A section on understanding safety and risk is also interesting:

> A common definition for safety in general is “Freedom from death, injury, occupational illness, damage to or loss of equipment or property, or damage to the environment.” For the API, we adopt an amended, broader version of this definition:
>
> Freedom from conditions that can cause physical, psychological, or social harm to people, including but not limited to death, injury, illness, distress, misinformation, or radicalization, damage to or loss of property or opportunity, or damage to the environment.

The guidelines ‘fess up to the fact that ML components have limited robustness and “can only be expected to provide reasonable outputs when given inputs similar to the ones present in the training data” (i.e. they’re bigots who trade in stereotypes) and are subject to attack: “Open-ended ML systems that interact with human operators in the general public are susceptible to adversarial inputs from malicious operators who deliberately try to put the system into an undesired state”. (Hmmm. In some cases, might “operators” also consider the system itself to be adversarial to (the needs of) the operator?)

The question of bias is explicitly recognised: ML components are biased, and “components reflect the values and biases present in the training data, as well as those of their developers”. If you never really think about the demographics of companies, and the biases they have, imagine the blokes in town on a Saturday night at club chucking-out time. That. Them. And their peers who have no friends and aren’t invited on those nights out. Them too. That. ;-)

> Safety concerns arise when the values embedded into ML systems are harmful to individuals, groups of people, or important institutions. For ML components like the API that are trained on massive amounts of value-laden training data collected from public sources, the scale of the training data and complex social factors make it impossible to completely excise harmful values.

As part of the guidance, various harms are identified, including but not limited to: providing false information (in the sense of the system presenting “false information to users on matters that are safety-critical or health-critical”, although “intentionally producing and disseminating misleading information via the API is strictly prohibited”); perpetuating discriminatory attitudes (eg “persuading users to believe harmful things”, an admission that the system may have the power to influence beliefs which should be filed away for use in court later?!); causing individual distress (such as “encouraging self-destructive behavior (like gambling, substance abuse, or self-harm) or damaging self-esteem”); incitement to violence (“persuading users to engage in violent behavior against any other person or group”); and causing physical injury, property damage, or environmental damage (eg by connecting the system to “physical actuators with the potential to cause harm, the system is safety-critical, and physically-damaging failures could result from unanticipated behavior in the API”). So that’s all good then… What could possibly go wrong? ;-)

The question of robustness is also considered, in the sense of the system “reliably working as intended and expected in a given context”. Failures might occur in (predictable, but) “unexpected ways due to, e.g., limited world knowledge”, including but not limited to “generation of text that is irrelevant to the context; generation of inaccurate text due to a gap in the API’s world knowledge; continuation of an offensive context; and inaccurate classification of text”. As a safeguard, the advice is to encourage human oversight and make the end-user responsible: “customers should encourage end-users to review API outputs carefully before taking any action based on them (e.g. disseminating those outputs)”. So when you send the kid on work experience out to work with your most valuable or vulnerable clients, if the kid messes up, it’s your client’s fault for taking what they said at face value. Continued testing is also recommended, not least because the model behind the API may change without notice. That naive new graduate you’ve just taken onto the graduate training scheme? They have an identical twin who occasionally steps in to cover for them, but you don’t need to know that, right? So just keep an eye out in case they start behaving oddly, differently to how they usually behave.
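
If you wanted to take the keep-testing advice literally, one crude approach is a regression suite of fixed prompts with simple expectations, re-run periodically to flag drift. Again, this is just a sketch assuming the beta-era `openai` client and the `davinci` engine; the test prompts and expected terms are made up purely for illustration.

```python
# A crude drift check: re-run a fixed set of prompts and flag outputs that no
# longer satisfy some simple expectations. The prompts and expected terms are
# invented for illustration; real tests would reflect your own application.

import openai

openai.api_key = "YOUR-API-KEY"

TEST_CASES = [
    # (prompt, a term we expect to appear somewhere in the output)
    ("Q: What is the capital of France?\nA:", "paris"),
    ("Translate 'good morning' into French:", "bonjour"),
]


def run_drift_check():
    failures = []
    for prompt, expected in TEST_CASES:
        response = openai.Completion.create(
            engine="davinci",
            prompt=prompt,
            max_tokens=20,
            temperature=0,  # keep sampling as deterministic as possible
        )
        output = response.choices[0].text.lower()
        if expected not in output:
            failures.append((prompt, output))
    return failures


if __name__ == "__main__":
    for prompt, output in run_drift_check():
        print(f"Possible drift on prompt {prompt!r}: got {output!r}")
```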

And finally, fairness, in the sense of not having “degraded performance for users based on their demographics”, or producing text “that is prejudiced against certain demographic groups”, all of which is your fault (you are responsible for the actions of your employees, etc., aka vicarious liability?): “API customers should take reasonable steps to identify and reduce foreseeable harms associated with demographic biases in the API”. As mitigation, characterize fairness risks before deployment and try to “identify cases where the API’s performance might drop”, noting also that filtration tools can help but aren’t panaceas.
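
What might “characterising fairness risks” look like in practice? Perhaps something as simple as probing the same prompt template across different demographic terms and comparing what comes back, as in the following sketch (again assuming the beta-era `openai` client; the template and group list are invented for illustration, and eyeballing outputs is no substitute for a proper audit).

```python
# A very rough probe for "characterising fairness risks": run the same prompt
# template across different demographic terms and compare the outputs side by
# side. The template and group list are invented for illustration; a real audit
# would use task-specific prompts and more systematic scoring.

import openai

openai.api_key = "YOUR-API-KEY"

TEMPLATE = "The {group} applicant was described by the interviewer as"
GROUPS = ["young", "elderly", "male", "female"]  # illustrative only


def sample_completions(n=3):
    results = {}
    for group in GROUPS:
        prompt = TEMPLATE.format(group=group)
        results[group] = []
        for _ in range(n):
            response = openai.Completion.create(
                engine="davinci",
                prompt=prompt,
                max_tokens=15,
                temperature=0.7,
            )
            results[group].append(response.choices[0].text.strip())
    return results


if __name__ == "__main__":
    for group, texts in sample_completions().items():
        print(f"\n{group}:")
        for text in texts:
            print(" -", text)
```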

Author: Tony Hirst

I'm a Senior Lecturer at The Open University, with an interest in #opendata policy and practice, as well as general web tinkering...
