Prompt Engineering, aka Generative System Query Optimisation

Over the last few months, I’ve seen increasing numbers of references to prompt engineering in my feeds. Prompt engineering is to generative AI systems — things like GPT-3 (natural language/text generation), CoPilot (code and comment (i.e. code explanation) generation), and DALL-E (text2image generation) — as query optimisation is to search engines: the ability to iterate a query/prompt, in light of the responses it returns, in order to find, or generate, a response that meets your needs.

Back in the day when I used to bait librarians, developing web search skills was an ever-present theme at the library-practitioner conferences I used to regularly attend: advanced search forms and advanced search queries (using advanced search/colon operators, etc.), as well as power-search tricks based around URL hacking, or the use of particular keywords that reflected your understanding of the search algorithm, etc.

There was also an ethical dimension: should you teach particular search techniques that could be abused in some way if generally known, but that might be appropriate if you were working in an investigative context? Strategies that could be used to profile, or stalk, someone, for example.

So now I’m wondering: are the digital skills folk in libraries starting to look at ways of developing prompt engineering skills yet?

Again, there are ethical questions: for example, should you develop skills, or give prompt examples, showing how to create deepfakes in unfiltered generative models such as Stable Diffusion (Deepfakes for all: Uncensored AI art model prompts ethics questions, h/t Tactical Tech/@info_activism)?

And there are also commercial and intellectual property rights considerations. For example, what’s an effective prompt worth, and how do you protect it if you are selling it on a prompt marketplace such as PromptBase? Will there be an underground market in “cheats” that can be deployed against certain commercial generative models (because pre-trained models are now also regularly bought and sold)? And so on.

There’s also the skills question of how to effectively (and iteratively) work with a generative AI to create and tune meaningful solutions. This requires the important skill of evaluating, and perhaps editing or otherwise tweaking, the responses (cf. evaluating the quality of websites returned in the results of a web search engine), as well as improving or refining your prompts (for example, students who naively enter a complete question as a search engine query or a generative system prompt, compared to those who use the question as the basis for their own, more refined, query that attempts to extract key “features” from the question…).
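To make the naive-vs-refined distinction concrete, here’s a minimal sketch of the sort of “feature extraction” a more skilled searcher does in their head. The stop-word list and the extraction rule are purely illustrative assumptions, not a description of how any real search engine or generative model processes queries:

```python
# Illustrative sketch: contrast a naive prompt (the question pasted
# verbatim) with a refined one built from the question's key "features".
# The stop-word list below is a toy example, not a standard list.

STOP_WORDS = {
    "what", "is", "the", "of", "in", "a", "an", "and",
    "to", "how", "does", "between", "difference",
}

def extract_features(question: str) -> list[str]:
    """Keep only the content-bearing words from a question."""
    words = [w.strip("?.,").lower() for w in question.split()]
    return [w for w in words if w and w not in STOP_WORDS]

question = "What is the difference between supervised and unsupervised learning?"

naive_prompt = question                              # pasted as-is
refined_prompt = " ".join(extract_features(question))

print(naive_prompt)
print(refined_prompt)  # supervised unsupervised learning
```

The point is not the (trivial) code, but the habit it models: treating the question as raw material for a query, rather than as the query itself.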

For education, there’s also the question of how to respond to students using generative systems in open assessments: will we try to “forbid” students from using such systems at all; will we have the equivalent of “calculator” papers in maths exams, where you are expected, or even required, to use the generative tools; or will we try to enter into an arms race, setting questions which, if naively submitted as a prompt, will return an incorrect answer (in which case, how soon will the generative systems be improved to correctly respond to such prompts…)?

See also: Fragment: Generative Plagiarism and Technologically Enhanced Cheating?

PS For some examples of prompt engineering in practice, Simon Willison has blogged several of his playful experiments with generative AI:

PPS Another generative approach that looks interesting is provided by , which allows you to submit several training images of a novel concept, and then generate differently rendered styles of that concept.

Author: Tony Hirst

I'm a Senior Lecturer at The Open University, with an interest in #opendata policy and practice, as well as general web tinkering...
