For our first WiSci meeting of Fall 2016, we were lucky to have Didem Sarikaya lead us through examining our explicit and implicit biases.
We first discussed the distinction between explicit and implicit biases. We brainstormed axes of discrimination that people may face and ended up with the following: gender, sex, race, age, religion, orientation, disability, income/class, nationality/region, education, physical appearance, language/dialect/accent, family, and profession. Biases based on these axes can be either explicit or implicit. An easy way to distinguish between the two is to think about intent: explicit biases involve active thoughts, while implicit biases tend to involve a lack of thought. For example, if a landlord directly refused to rent to a dog owner, that would be an explicit bias. But if a landlord were open to renting to dog owners yet never actually did so because they felt that dog owners were dirty, that would be implicit. Other examples of implicit bias that we discussed included a lack of accessibility in academic buildings and labs, the way pregnancy might be dealt with during hiring decisions, and gender differences in undergraduate teaching evaluations.
Next, Didem led us through a test for implicit bias. Readers who want to get a sense of how these tests work can take them online at https://implicit.harvard.edu/implicit/takeatest.html. We did a classroom version of the test developed by Keith Maddox and Samuel Sommers at Tufts University to detect race-based bias, and we did not do well. It was pretty uncomfortable, but it forced us to confront the implicit biases that many of us carry. Afterwards, Didem showed us statistics indicating that most Americans who take these types of tests show bias along various axes of discrimination.
Distribution of Implicit Association Test scores from 2000 to 2006. Dark bars represent faster responses pairing African American names with unpleasant adjectives and European American names with pleasant adjectives; gray bars represent faster responses pairing European American names with unpleasant adjectives and African American names with pleasant adjectives (retrieved from https://implicit.harvard.edu/implicit/demo/background/raceinfo.html).
We then turned to thinking about how biases can shape the way we label ourselves and others. We each wrote down a few labels that had been applied to us that we did not like. We then took our labels and mingled with each other, discussing the labels we disliked for ourselves and attempting to trade for labels that we liked better. The labels I saw included things like “aggressive”, “sweet”, “unapproachable”, “pretty”, and “small”. A theme that emerged was the importance of context: while a label like “school-teacher voice” may sound benign, it can actually be fairly hurtful to someone who is trying to develop their “professor voice.” Similarly, words like “nice” and “sweet” can feel demeaning and undermine a young scientist’s confidence in their intellectual abilities.
We ended the workshop by briefly discussing how we could take what we’d learned and use it to make our own communities more inclusive. Suggestions from the group included being careful about the language we use when talking to and about others and when writing reference letters, and acknowledging our mistakes when we make them. In addition, we generally agreed that implicit biases often stem from not knowing people from underrepresented groups across the various axes of discrimination, so increasing the number of people from underrepresented groups in our communities is essential.
For more information, please refer to Ambika Kamath’s excellent blog post on how to lead workshops on making academia more friendly to underrepresented groups. It served as the main source of inspiration for this session and is a great resource for anyone interested in organizing similar activities.
— Emily Josephs