Reading and Researching with LLMs

Submitter: Mark C. Marino, U of Southern California

——————————————————

The experiment:

To test AI-augmented research and reading, we tried several LLMs that are tied to search. New tools and features pop up each day, but we used Perplexity, Elicit, and Consensus for search. Elicit and Consensus draw on research databases, like Semantic Scholar, so they have access to peer-reviewed scholarship. In Part 1, students conducted searches on their research topics using these tools, which synthesize results and produce machine-processed summaries.

In Part 2, we used LLMs to analyze scholarly articles. First, we uploaded individual articles and collections of articles and asked the LLM to give us summaries, takeaways, strengths, and weaknesses. We also asked the LLM to compare the stances of the articles. Finally, we asked the LLM about the formal qualities of the articles: in which sections they cited sources, their typical sentence styles, their level of diction, et cetera. Then, students wrote a paper reflecting on whether these tools illuminated or obfuscated aspects of the research.
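For readers who want to script this workflow rather than paste into a chat window, the Part 2 questions can be expressed as reusable prompt templates. This is a minimal illustrative sketch, not the exact prompts used in class; the function names and wording are my own.

```python
# Sketch of the Part 2 article-analysis prompts as reusable templates.
# The prompt wording here is hypothetical, paraphrasing the tasks described
# above (summary/takeaways, strengths/weaknesses, stance comparison,
# formal qualities); it is not the class's actual prompt text.

def build_analysis_prompts(article_text: str) -> dict:
    """Return one prompt per analysis task for a single article."""
    return {
        "summary": (
            "Summarize this article and list its key takeaways:\n\n" + article_text
        ),
        "strengths_weaknesses": (
            "What are the strengths and weaknesses of this article's argument?\n\n"
            + article_text
        ),
        "formal_qualities": (
            "Describe the formal qualities of this article: in which sections it "
            "cites sources, its typical sentence styles, and its level of diction.\n\n"
            + article_text
        ),
    }

def build_comparison_prompt(articles: list) -> str:
    """Ask the model to compare the stances of a collection of articles."""
    joined = "\n\n---\n\n".join(articles)
    return "Compare the stances these articles take on their shared topic:\n\n" + joined
```

Each returned string could then be sent to whatever chat-capable LLM API you have access to; the templates themselves are model-agnostic.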

Results:

There was a sizable gap between students' performative responses and their off-the-record ones. In their essays, they championed human reading over machine reading, insisting they were better equipped to analyze the articles on their own than through the software as a lens. Outside the classroom, they reported finding the tools immensely helpful for processing large collections of texts.

While the software was good at summarizing social science reports of surveys, it was less effective on humanities-style writing. Social science writing is designed to be easily processed, whereas the humanities can lean into the subtleties and nuances of artful argumentation. When I modeled analyzing the formal conventions of articles with the LLMs, a simplified version of machine reading, students found the tools helpful. Also, some found the augmented search tools very helpful for parsing the many results a Google Scholar or ProQuest search returns.
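To give a concrete sense of what "a simplified version of machine reading" of formal conventions can look like, here is a hypothetical sketch that computes crude proxies for sentence style and citation density. The class used LLMs for this, not hand-rolled metrics; this is only an illustration of the kind of formal feature being surfaced.

```python
import re

# Hypothetical, simplified "machine reading" of an article's formal
# conventions: sentence count, average sentence length (a rough proxy for
# sentence style), and parenthetical citation count (e.g. "(Smith, 2020)").
# Illustrative only; not the method used in the class.

def formal_metrics(text: str) -> dict:
    # Naive sentence split on terminal punctuation followed by whitespace.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    words = text.split()
    # Match simple author-date citations like "(Smith, 2020)".
    citations = re.findall(r"\([A-Z][A-Za-z]+,?\s+\d{4}\)", text)
    return {
        "sentence_count": len(sentences),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "citation_count": len(citations),
    }
```

Metrics like these make the contrast above visible: survey-style social science prose tends toward shorter, more uniform sentences and dense parenthetical citation, while humanities prose varies more and often cites in footnotes that a regex like this would miss.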
