Perplexity: Brainstorming Research Questions
Submitter: Marc Watkins, U of Mississippi
——————————————————
The experiment:
Large language models tend to hallucinate and make up information because they lack a mechanism for distinguishing fact from fiction. They simply generate text based on their training data and the prompt. As a result, LLMs can seem both impressively capable and surprisingly ignorant. More specialized interfaces constrain LLMs’ tendencies toward fabrication by fine-tuning them on academic tasks and incorporating human feedback. Focusing LLMs on narrow applications like retrieval and summarization rather than open-ended reasoning also promotes more accurate outputs. Certain developers leverage these techniques to create LLMs that assist researchers with finding, synthesizing, and summarizing real sources.
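To make the retrieval-and-summarization idea concrete, here is a minimal sketch of how such a pipeline is typically shaped. The toy corpus, the keyword retriever, and the `call_llm` stub are all hypothetical illustrations, not any particular tool's implementation: the key move is that the model is asked only to summarize passages it has been handed, with citations, rather than to answer from memory.

```python
# A minimal, self-contained sketch of the retrieval-then-summarize pattern.
# The toy corpus and the call_llm stub are hypothetical stand-ins: a real
# tool would query a search index and call an actual LLM endpoint.

TOY_CORPUS = [
    {"title": "Smith 2021", "text": "First-year students report frustration with Boolean search syntax."},
    {"title": "Lee 2022", "text": "Natural-language interfaces increase engagement with library databases."},
]

def retrieve(question: str, k: int = 2) -> list[dict]:
    """Toy retriever: rank passages by word overlap with the query."""
    words = set(question.lower().split())
    scored = sorted(
        TOY_CORPUS,
        key=lambda p: len(words & set(p["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def call_llm(prompt: str) -> str:
    """Stub for a real chat-completion call; returns a placeholder."""
    return f"(model output for a prompt of {len(prompt)} characters)"

def grounded_answer(question: str) -> str:
    passages = retrieve(question)
    excerpts = "\n".join(
        f"[{i + 1}] {p['title']}: {p['text']}" for i, p in enumerate(passages)
    )
    # Restricting the model to the retrieved excerpts, and requiring
    # citations, is what narrows the room for fabrication.
    prompt = (
        "Answer using ONLY the numbered excerpts below, citing them like [1]. "
        "If they do not contain the answer, say so.\n\n"
        f"{excerpts}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(grounded_answer("Do students find database search frustrating?"))
```

Tools like the ones described below wrap this same pattern in a conversational interface, so the constraint on fabrication comes from the pipeline's design rather than from the model itself.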
In the fall of 2022, I piloted an app called Elicit with my students to see if generative AI could help them navigate dense research articles. For years, my first-year writing students had complained about our university's academic database: it demands specialized search knowledge, uses a clunky interface, and often frustrates students more than it helps them.
The tool I currently use for research with my students is Perplexity. Like Elicit, Perplexity lets students explore research questions in natural language. Unlike Elicit, however, it offers a more concise synthesis of sources and draws on a mix of academic and popular sources in its responses, often providing a more holistic overview of a topic.
Results:
Students responded well to the interface, and the process of using their own language to search for material appeared to increase their engagement in the research process. The speed with which the tool summarizes and synthesizes sources and coherently relates them to a topic is clearly a use case many students welcome. However, I encouraged my students to pause in their research and consider how much agency they were handing over to an algorithm, and how much trust they were placing in automated summaries that so neatly articulated a source's intent without requiring the reader to closely read the actual source.
I’ve considered changing future versions of the assignment to create more opportunities for students to work offline with research. Even the simple act of printing out an article to read and annotate offers a powerful counterbalance that could help students engage with research. Yet the deceptively simple process of close-reading a source and highlighting material may be cognitively challenging for underprepared students and those with certain disabilities. We try to accommodate a range of factors, but intentionally limiting students' engagement with new technology out of fear of overuse may deskill them in favor of time spent with established technologies, without ensuring they receive useful help or instruction from either.
Relevant resources: https://docs.google.com/document/d/1azVdGY7FWR0FEdjeXj-v-PTme7q-pp9D/edit?usp=sharing&ouid=106137383630710974014&rtpof=true&sd=true
Contact: mwatkins[AT]olemiss[DOT]edu