Monthly Archives: March 2017

Behavioral insights from Kate Glazebrook @ Applied

For the third and final session in our series Diversity and Inclusivity, Kate Glazebrook shared with us her work on unconscious biases that exist during hiring practices and efforts to remove them. Kate is a Principal Advisor (Head of Growth and Equality) at the Behavioural Insights Team, a UK government institution dedicated to the application of behavioral sciences. Kate is also the co-founder of Applied, a service that incorporates leading behavioral science research to remove unconscious bias from hiring practices.

Diverse groups are important for many reasons. They have been shown to process information more deeply and to prepare more thoroughly before considering issues. Diverse teams are also more creative, more accurate, and less prone to groupthink. In essence, it’s beneficial to have people who think about things differently. Still, there are twice as many FTSE 100 bosses named John as there are women.

Implicit biases exist within our hiring practices that contribute to this lack of diversity. Unfortunately, there is little evidence to suggest diversity training works. In response to this, Kate shared her work with Applied to remove some of these biases, applying behavioral science research and what we know about how people make decisions and analyze information. Particularly, they have developed technology to remove bias and improve the effectiveness of hiring. It is built around five key features:

  1. Anonymize: remove from candidates’ materials all identifiers that are irrelevant to the job but may affect decision-making.
  2. Chunk: group candidate applications by individual dimensions, so each dimension is reviewed horizontally across candidates rather than candidate by candidate. Horizontal review improves objectivity and decreases the cognitive load on the reviewer: it is difficult for people to compare things that vary on multiple dimensions, and by default we choose options that feel safer (i.e., more familiar or more similar to ourselves). Additionally, when applications are read vertically there is a halo effect, in which impressions formed at the beginning affect how we assess everything thereafter: a great or terrible first paragraph can disproportionately affect a candidate’s chances.
  3. Harness the crowd: aggregate input from multiple independent reviewers. Collective wisdom outperforms even a single expert, and this approach also allows reviewers to see batches of candidates’ responses in different orders. Kate says a team of three independent reviewers is the optimal size, balancing accuracy in choosing the best candidate against the resources used.
  4. Test what counts: shift assessment away from CV measures that do not predict job success and toward work sample tests and structured interviews.
  5. Intelligent feedback: measure what works and build on this feedback.
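To make the first three steps concrete, here is a minimal sketch of how anonymization, chunking, and score aggregation might look in code. This is purely illustrative: Applied's actual implementation is not public, and every function and name below is invented for this example.

```python
import re
import statistics

def anonymize(application_text, identifiers):
    """Step 1 (illustrative): redact identifiers (names, schools, etc.)
    that are irrelevant to the job but may bias decision-making."""
    text = application_text
    for ident in identifiers:
        text = re.sub(re.escape(ident), "[REDACTED]", text, flags=re.IGNORECASE)
    return text

def chunk_by_question(applications):
    """Step 2 (illustrative): regroup answers so a reviewer reads one
    question across all candidates (horizontal review) rather than one
    candidate's whole file (vertical review)."""
    questions = applications[0].keys()
    return {q: [app[q] for app in applications] for q in questions}

def aggregate(reviewer_scores):
    """Step 3 (illustrative): combine independent reviewers' scores;
    Kate suggests three reviewers as the practical optimum."""
    return statistics.mean(reviewer_scores)

# Example usage with made-up data:
apps = [
    {"q1": "I led a team at Acme Corp.", "q2": "I value collaboration."},
    {"q1": "I built tooling at Initech.", "q2": "I enjoy mentoring."},
]
print(anonymize("Jane Doe, Acme Corp.", ["Jane Doe", "Acme Corp."]))
print(chunk_by_question(apps)["q1"])
print(aggregate([4, 5, 3]))
```

The key design point is that each reviewer scores one chunk at a time, independently, and only the aggregated scores feed the final ranking.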

Kate also shared ways language matters during the recruitment process, before applicants even apply. How diversity is described matters: saying diversity is important for reasons of equity increases the number of ethnic-minority applicants, while saying it is important because “we value differences” does not, perhaps because of fears of being tokenized. Gendered language also affects who applies: words like “helping” and “collaborative” increase the number of female applicants compared to words like “individual”, “drive”, or “competitive.”
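A simple screen for gendered wording in a job ad could look like the sketch below. The word lists are tiny examples drawn from the words mentioned above, not a validated lexicon, and the function is invented for illustration.

```python
# Illustrative only: the coded-word lists are small examples, not research-grade.
MASCULINE_CODED = {"individual", "drive", "driven", "competitive"}
FEMININE_CODED = {"helping", "collaborative", "supportive"}

def flag_gendered_language(job_ad):
    """Return which example masculine- and feminine-coded words appear in an ad."""
    words = {w.strip(".,;:!?").lower() for w in job_ad.split()}
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine": sorted(words & FEMININE_CODED),
    }

ad = "We want a driven, competitive individual who enjoys helping customers."
print(flag_gendered_language(ad))
```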

Thanks, Kate, for sharing interesting insights from behavioral science research and for chatting with us. You can check out Kate’s TEDx talk here. She also mentioned this book, What Works: Gender Equality by Design by Iris Bohnet, for those interested in learning more.
-Kristin Lee