Persona Games: Writing To and From, For and Against Language Models

Submitter: Jeremy Douglass, U of California, Santa Barbara

——————————————————

The experiment:

Persona Games use personas, roles, and audiences to situate language models within richly imagined writing ecosystems. Students may play an existing persona game, or invent their own! The general scheme has four steps: (1) describe the network, (2) select a role, (3) write personas, (4) evaluate competitively (or collaboratively) and critically. Students first describe a specific writing activity as an actor-network of roles (e.g. assignment designer, outliner, peer reviewer, project lead, writing tutor, resource specialist, grader, et cetera), then choose one role to delegate to a large language model (LLM). They collaboratively or competitively engage in LLM persona writing (i.e. role prompting) for that role. Next they interact with the LLM persona(s) to evaluate not just output quality, but also its capacities, gaps in how it enacts that role, and limits in the role’s original conception. Each activity is given a “persona game” name.

For example, in the adversarial “Hallucinating Expert,” student teams each train an LLM expert on a text (e.g. “Landscape with the Fall of Icarus”) before attempting to trick the other teams’ experts into AI hallucinations. In the collaborative “Hiring Mary Poppins,” students persona-engineer an ideal writing tutor to give draft feedback on a specific assignment. Iterative evaluation of this ‘tutor’ helps students explore the capacities and limits of LLM tutoring, the uses of feedback itself, and the relationship of feedback to the concept of a writing assignment.
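Step 3 of the scheme, writing personas via role prompting, can be sketched in code. The helper below is a hypothetical illustration (not from any particular LLM vendor's API, though it uses the common system/user chat-message shape): it assembles a message list that casts the model as one delegated role, here a writing tutor in the spirit of “Hiring Mary Poppins.”

```python
# Sketch of role prompting (step 3): build a chat-message list that casts an
# LLM as a persona for one delegated role. The helper name, fields, and
# example text are hypothetical, for illustration only.

def persona_messages(role, traits, task, student_text):
    """Assemble a system prompt defining the persona, plus the user's request."""
    system_prompt = (
        f"You are a {role}. "
        + " ".join(traits)
        + " Stay in this role for the whole conversation."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"{task}\n\n{student_text}"},
    ]

# Example: a writing-tutor persona for draft feedback.
messages = persona_messages(
    role="writing tutor for an upper-division literature course",
    traits=[
        "You give encouraging, specific feedback on drafts.",
        "You ask questions rather than rewriting the student's prose.",
    ],
    task="Give feedback on this draft paragraph:",
    student_text="In 'Landscape with the Fall of Icarus', the poem suggests...",
)
```

Students iterate on the `traits` list as they test the persona, which is where the critical evaluation of the role happens.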

Results:

Initial experiments in the context of upper-division English literature courses and workshops in Winter 2023 began in a freeform, exploratory mode before focusing on persona engineering as a critical mode for students to articulate, explore, and test their ideas about writing and writing roles. Live sessions with one shared LLM screen and a typing hot-seat worked well, as did small-group activities with 3-5 students per session (although sharing these via e.g. ChatGPT was tricky and required advance planning). Sessions ran fairly long: it takes time to define and iterate an LLM persona prompt, let alone develop tests for it. Prepping template text for students to edit and elaborate helped, as did using a shared doc for collaborative class writing. Students seemed to appreciate the conceptual aspect, with hands-on activity beginning in “How do we make this work?” and moving toward “How do we define good writing? What is feedback? What is this assignment about, and why?”

Contact: jeremydouglass[AT]gmail[DOT]com
