Testing Bias in Google Search

Submitter: Anna Mills, Cañada C

——————————————————

The experiment:

As an exercise in critical AI literacy and information literacy, we probed Google search autopredict for bias. The exercise served as a bridge between previous discussions of large language models such as ChatGPT and the more familiar predictive text in search. First the students did homework assignments collaboratively annotating the introduction to Safiya Umoja Noble’s book Algorithms of Oppression and Janelle Shane’s explanation of large language models, “Let an Algorithm Choose Your Halloween Costume.”

Then in class we watched a brief video of Noble explaining her research. We discussed how search now behaves differently than it did when she first wrote the book, probably in part because of her advocacy. Projecting my laptop screen, I demonstrated a few search stems Noble had tried, such as “Why are black girls so…” and “black girls.” I invited the class to suggest search stems they wanted to test and blended testing and discussion for fifteen minutes or so before splitting into groups, where students performed their own search tests and reported back.
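For instructors who want to collect suggestions outside the browser (for example, to gather homework screenshots or compare stems side by side), the probing above can be sketched programmatically. This is a hypothetical sketch using Google's public but undocumented suggest endpoint; the `client=firefox` parameter and the response shape are assumptions that may change without notice, and results can differ from what the browser shows.

```python
import json
import urllib.parse
import urllib.request

def suggestion_url(stem: str) -> str:
    # Build a query URL for Google's undocumented suggest endpoint.
    # (Endpoint and parameters are assumptions; not an official API.)
    return ("https://suggestqueries.google.com/complete/search"
            "?client=firefox&q=" + urllib.parse.quote(stem))

def fetch_suggestions(stem: str) -> list[str]:
    # Fetch and parse the JSON response, which (as of this writing)
    # looks like: ["original query", ["suggestion 1", "suggestion 2", ...]]
    with urllib.request.urlopen(suggestion_url(stem), timeout=10) as resp:
        data = json.loads(resp.read().decode("utf-8"))
    return data[1]

# Example (requires network access):
# for s in fetch_suggestions("why are athletes so"):
#     print(s)
```

Because the endpoint is unofficial, it is worth testing in advance of class; the browser UI may also filter suggestions differently than the raw endpoint does.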

Results:

Students seemed very engaged. There was laughter and sometimes outrage in the room. “Why are athletes so” resulted in stereotypes like “dumb” and “hot.” For “why are black people so,” the search refused to suggest any completions. I pointed out that Google provides no information about how it decides which completions to allow and connected this to Noble’s critique of Google’s power and lack of transparency. Students suggested “Why are Latinas so…,” which resulted in stereotypical, offensive phrases like “hard to date,” “hot,” and “attractive.” “Why are Samoans so…” resulted in stereotypes like “big” and “fat.” The group discussions were not as lively as I had hoped; next time I would assign students to do some probing for homework so they can focus and investigate on their own, then share back only the most interesting results in screenshots and reflections. I would also screen-share Google’s process for reporting an inappropriate suggestion.

I’m still dissatisfied because this activity didn’t help us understand the role of advertising and the profit motive in Google’s algorithm, one of the key ideas in Noble’s book. Students probably left with the impression that Google’s search predictions and results transparently reflect the frequency of user queries. I have tried, but have not been able to figure out, whether ads still coordinate with stereotyped search predictions and, if so, how best to showcase that coordination.

Relevant resources:

Contact:
