AI for Editing

Submitter: Nupoor Ranade, George Mason U

——————————————————

The experiment:

I prepared an assignment to help students experiment with AI tools, create meaningful discussions about the “human in the loop” when using AI technologies, and review these tools’ limitations in technical communication settings.

Now that leading organizations in the generative AI space are working to solve the problem of AI “hallucinations,” we can safely say that the debates about whether AI will replace writers’ jobs are behind us. As AI tools are increasingly integrated into products, domain experts must work alongside these systems to catch the false or biased information such systems are prone to produce. So, while tools like Grammarly seem efficient at editing, students preparing for such roles must develop the critical skills to question the decision-making of AI editing tools. Human editors’ roles in reviewing AI-generated content for inaccuracies, and in ensuring that AI models are trained on diverse, balanced, and well-structured data, have become more important than ever.

The assignment required students to: 1) generate a short essay using AI software, 2) edit it manually using editing principles (such as clarity, conciseness, accuracy of grammar and punctuation, and sentence formation), and 3) improve the readability of the essay’s content by reducing its Flesch-Kincaid Grade Level. The primary goal was to broaden and deepen students’ perspectives on working alongside AI and to have them reflect on their own intellectual contributions in editing roles.
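For readers unfamiliar with the metric, the Flesch-Kincaid Grade Level that students worked to reduce is a simple formula over sentence and word lengths. The sketch below is a minimal illustration; the syllable counter is a rough vowel-group heuristic of my own, not the exact algorithm commercial readability tools use, so scores will differ slightly from theirs.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count runs of vowels; every word gets at least one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # FKGL = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Editing toward shorter sentences and simpler words lowers the score, which is exactly the kind of concrete, checkable change students could make to AI-generated prose.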

Results:

The experiment was successful for many reasons:

Students were able to experiment with AI tools if they hadn’t already. The assignment was introduced in Fall 2022, before the release of ChatGPT, when generative AI adoption was still relatively low. It gave students an opportunity to explore various tools and algorithms and to discuss their process of exploration.

Students were able to point out the limitations of generative AI technologies in strategic ways. For example, they used their knowledge of the elements of the rhetorical situation, such as genre, audience, writer, purpose, and context, to analyze whether the response AI generated to their prompt was a fitting one. This holistic understanding helped them identify the missing or inaccurate elements of the content-development process that complicate the use of AI tools.

Students came to realize the difference between their own understanding of the audience (gauged through the readability score) and AI’s. While AI generates one-size-fits-all content, editors demonstrate an in-depth understanding of audience needs. Editors thus function as humans-in-the-loop who safeguard the best interests of writers and their audiences.

In the next iteration of this class, I would like to include a component that allows discussions on AI augmentation in terms of writing/editing expertise, authoritativeness, audience trustworthiness, and ethical concerns such as justice, equity, privacy, and access.

Relevant resources: Ranade, N. (2023). AI for editing. In A. Vee, T. Laquintano, & C. Schnitzler (Eds.), TextGenEd: Teaching with Text Generation Technologies. The WAC Clearinghouse. https://doi.org/10.37514/TWR-J.2023.1.1.02

Contact:
