Coded Bias: Exploring Bias in Research and Its Unintended Consequences

by Dr. Denise M. Driscoll

At the CISTAR April 2021 Annual Meeting, the Diversity and Culture of Inclusion pillar collaborated with the Student Leadership Council and the Education pillar to facilitate a workshop based on the 2020 film Coded Bias, directed by Shalini Kantayya, which documents bias in algorithms. This session built on a previous workshop, Recognizing and Responding to Implicit Bias, from the CISTAR Biannual Meeting in October.

Specifically, the film tells the story of Joy Buolamwini, a researcher at M.I.T., who found that facial-recognition software she had purchased could not detect her darker-skinned face. The film goes on to describe how machine-learning algorithms lead to all kinds of biases that perpetuate societal inequalities and negatively impact lives.

Figure: still from the film Coded Bias

After some introductory points about the many types of bias documented by social cognition and cognitive research in psychology (e.g., confirmation bias, biased attributions), we watched a short, three-minute excerpt from the Coded Bias film focused on understanding bias in algorithms and, specifically, how facial recognition software is worse at detecting not only darker-skinned faces than lighter-skinned ones, but also female faces than male faces.

Following a few words from our director about the importance of challenging yourself to pay attention to bias in research, participants went into breakout rooms to talk about their reactions to the film, to share other types of bias in science or engineering they had heard about, and to brainstorm about ways bias has crept, or might creep, into the types of research that CISTAR does.

We then reconvened to watch a second, eight-minute segment about the often unintended consequences of bias. Having already made the point about bias in facial recognition software, the film demonstrated how bias creeps into all aspects of our lives, from how teachers are evaluated to how credit scores are calculated. We returned to breakout rooms to discuss other unintended consequences or harms of bias, and to talk about what the use of such algorithms means for our civil rights and liberties.

In responding to a follow-up survey, participants had varied reactions, but the emotional tone of their comments was consistent, describing the film with words such as shocking, surprising, eye-opening, and worrisome. Most participants also thought it had been a productive conversation about bias in their research, although a few reported having difficulty pinpointing bias in what they do (and, admittedly, there may be no bias to find in their case).

In closing, people often hold the false belief that computers are somehow less error-prone than humans, but that simply isn't the case. Often, relying on algorithms merely perpetuates existing bias.

The session ended with a challenge to everyone: grow in awareness of the impact of bias on everyday thoughts, decisions, and judgments; consider your research from the perspective of someone who might not be 'in the room'; and think about how aspects of your research would be viewed by someone with expertise not already represented on the research team. This kind of perspective-taking has been shown to help make 'bias creep prevention' part of everyday research practice.

If you are interested in the PowerPoint slides, please email me at driscoll@purdue.edu and I'd be happy to share them. If you would like to watch the film in its entirety, it can now be found on Netflix. Finally, several related documentaries can be watched for free at: https://tree.northwestern.edu/news/2020-06-17-documentary-screenings.

Photo 1: Screenshot from the documentary Coded Bias, showing how the facial recognition software recognized the researcher's face only once she covered it with a white mask.