Exploring Early Text Generators through Remix and Modification

Submitter: Nick Montfort, MIT; Center for Digital Narrative, University of Bergen

——————————————————

The experiment:

In many different teaching contexts, I’ve had learners with no programming experience modify my modern-day reimplementations of early text generators.

These include Love Letters (Christopher Strachey, 1952), Stochastic Texts (Theo Lutz, 1959), Random Sentences (Victor H. Yngve, 1961), and The House of Dust (Alison Knowles and James Tenney, 1967).

If these seem remote from AI, that reflects a loss of historical awareness about AI and generative text. Strachey, who described his program in the article “The ‘thinking’ machine,” considered that his system bore on questions of intelligence. Other early text-generating systems were provocative because of their odd syntactic constructions and their connections to existing, identifiable texts. These early interventions do relate to recent LLM-based creative work.

What students modify is a newly written program, provided in both Python and JavaScript, that functions in the same way as these early ones. My reimplementations do not restage the material properties of the originals, such as all-uppercase printed output, nor do they follow the way the original programs were internally organized. Each fits on a page or two.
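To give a sense of the scale involved, here is a rough sketch in the spirit of Knowles and Tenney's The House of Dust, not the author's actual reimplementation; the word lists are abbreviated and the names are my own:

```python
import random

# Abbreviated word lists in the spirit of "The House of Dust" (1967);
# the original lists are longer. Students can edit these freely.
MATERIALS = ["sand", "dust", "leaves", "paper", "steel"]
LOCATIONS = ["among small hills", "in a desert", "by the sea", "in dense woods"]
LIGHTING = ["candles", "natural light", "electricity", "all available lighting"]
INHABITANTS = ["people who sleep very little", "vegetarians",
               "fishermen and families", "children and old people"]

def stanza():
    """Return one four-line quatrain built from random list choices."""
    return "\n".join([
        "A house of " + random.choice(MATERIALS),
        "     " + random.choice(LOCATIONS),
        "          using " + random.choice(LIGHTING),
        "               inhabited by " + random.choice(INHABITANTS),
    ])

if __name__ == "__main__":
    for _ in range(3):
        print(stanza())
        print()
```

The entire generative logic is a handful of random choices slotted into a fixed template, which is why even a one-word change to the data is immediately audible in the output.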

I have students select one generator and spend 15-20 minutes modifying it during class (or workshop) time. When we are done, we go around and read aloud the resulting output, generated live. I’ve done this with undergraduates, grad students, faculty, students at an alternative art school, and rappers, among others.

Results:

Particular student work can be better or worse, but within each group there is always a lively diversity of approaches for us all to hear. Students discover that a short bit of code has a wide expressive range, even when someone who doesn’t know how to program does no more than alter the strings that serve as data.
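A minimal, hypothetical illustration of that kind of edit (the variable names and word lists are my own, not from any of the reimplementations): the program's logic is untouched, and only the data strings change.

```python
import random

# The "program" stays the same; only the data is edited.
adjectives = ["sweet", "dear", "tender"]     # an original-style list
adjectives = ["rusted", "hollow", "neon"]    # a student's replacement list

nouns = ["signal", "archive", "machine"]

def phrase():
    """Combine one adjective and one noun into a short phrase."""
    return "my " + random.choice(adjectives) + " " + random.choice(nouns)

if __name__ == "__main__":
    for _ in range(5):
        print(phrase())
```

Reassigning the list is all it takes to move the generator into a different register, which is why string editing alone already yields such varied results across a group.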

One downside to this approach is that one can only go so far as a creative coder if learning is done exclusively by remix and modification. Eventually a programmer needs to understand what variables and functions are, how to use conditionals, how to loop over a sequence, and other fundamentals. But this modification work can be a starting point that awakens interest. It’s the way I start in my book Exploratory Programming for the Arts and Humanities, published by the MIT Press in print and in a digital open access edition.

Another difficulty is that although these generators do represent AI as historically understood, they seem remote (and are remote!) from today’s deep learning systems. Rather than forget about history, my remedy for this problem is to try to explicitly connect this early work with current AI, and to do so thoroughly and systematically. With Lillian-Yvonne Bertram, I am co-editor of the forthcoming Output: An Anthology of Computer-Generated Text, 1953–2023 (MIT Press, slated for Fall 2024), which I plan to have students read so they can trace the development of text generation systems.

Relevant resources: https://nickm.com/ep2e/

Contact: nickm[AT]nickm[DOT]com
