By Dr. Gabriela (Gabby) Burlacu & Manish Tripathi
Stop us if you’ve heard this one: a young boy and his father are in a car accident, and upon arriving at the hospital the boy is met by an attending doctor who exclaims, “I can’t operate on this boy; he’s my son!” This riddle (the doctor is the boy’s mother, by the way) confuses people because we hold implicit expectations about what kind of person should work in a given job, and those expectations are partly responsible for occupational segregation. Occupational segregation occurs when one demographic group comes to dominate an industry or job. In the United States, for instance, nursing, teaching, and human resources roles tend to be filled by women. STEM jobs, on the other hand, tend to be filled by men, so much so that getting more girls and women interested in STEM fields has become a nationwide initiative. And of course, when we hear “doctor,” we tend to think male. The problem is that the roles men and women tend to fill are not created equal: occupational segregation also brings disparities in pay, working conditions, and career trajectories.
Research suggests that it’s not just our implicit ideals that steer us toward the “right” kinds of careers, but also how those jobs are advertised. Words and phrases meant to convey an exciting line of work can carry gender bias, deterring men or women from applying because they perceive they might not be a good fit for the role. While terms like “crush the competition!” and “be a rockstar” may speak to the competitive, fast-paced, thrilling nature of a job, they also signal to women that their contributions may not be fully valued there. Studies have shown that presenting the exact same job in different terms can significantly change the relative rates at which men and women apply, even in a male-dominated field like engineering.
Terminology like “crush the competition” may not deter every woman from applying to every job that contains it, but the effect compounds over time and across industries. Given enough time and enough masculine- or feminine-coded job descriptions, industries see a significant shift in the gender composition of their workforces. A job description containing gender-biased language does not just shape that applicant pool in that moment; it also shapes the workforce of the future in a self-perpetuating cycle. If fewer women or men apply for a role, fewer end up working in that field. The few who do are more likely to experience exclusionary cultures and ultimately leave. The same factors that drive underrepresentation also mean fewer role models in that occupation, and fewer role models result in fewer men or women applying, starting the cycle over again. In short, creating inclusive, gender-neutral job descriptions is not just an issue for the individual job or the individual applicant, but for the societies and systems we live in.
But if our words and phrases are unintentionally causing this effect, how can we check ourselves and make sure we aren’t deterring an entire pool of key talent from applying to our companies? The technologies emerging in this area are promising. Because language is complex and the words that carry gender bias differ across cultures, industries, and contexts, a simple library of masculine and feminine words is not enough to capture the impact our language may be having. To truly create change, technology needs to understand, capture, and leverage the complexity of the issue. Machine learning and linguistic analysis can learn from historical data in which particular job descriptions attracted mostly male or mostly female applicant pools, and can also mine text from across the Web to pick up on the language styles people use every day, incorporating both into an algorithm that detects masculine and feminine tones. Such tools can identify the words and phrases that truly carry gender bias in a given context, helping recruiters overcome their own intuitions about whether a word is likeable or sounds exciting.
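To make the idea concrete, here is a minimal sketch of the simple lexicon-based baseline that the paragraph above argues is *not* sufficient on its own. The word lists below are illustrative assumptions invented for this example, not a validated lexicon; real tools derive and weight such terms from historical applicant data and large text corpora rather than a fixed list.

```python
import re

# Illustrative, hypothetical word lists (assumptions for this sketch only).
# Production systems learn these associations from data instead.
MASCULINE_TERMS = {"competitive", "dominant", "rockstar", "aggressive", "crush"}
FEMININE_TERMS = {"collaborative", "supportive", "nurture", "interpersonal", "community"}

def gender_tone(text: str) -> dict:
    """Return the masculine- and feminine-coded words found in a job description."""
    words = re.findall(r"[a-z]+", text.lower())
    return {
        "masculine": [w for w in words if w in MASCULINE_TERMS],
        "feminine": [w for w in words if w in FEMININE_TERMS],
    }

ad = "Join our competitive team and crush the competition as our next rockstar engineer."
print(gender_tone(ad))
# → {'masculine': ['competitive', 'crush', 'rockstar'], 'feminine': []}
```

A flat count like this misses exactly what the article highlights: context. “Drive” is masculine-coded in one industry and neutral in another, which is why machine-learned models trained on real applicant outcomes outperform any static dictionary.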
Coming back to our medical example: because machine learning identifies patterns in the data that feeds it, words like “doctor” and “leader” can end up coded as masculine. But that is only because we as a society associate those words that way, and the machines are learning from us. Instead of dismissing this kind of output as noise, we can let it educate us about the biases embedded in the language we use every day. Technology that captures these linguistic tendencies in job descriptions can not only help us craft better recruiting materials, but ultimately pave the way for greater understanding and action around gender bias in our speech, thoughts, and actions, in the workplace and beyond.
Identifying and eliminating gender-biased language in job descriptions is just one piece of the larger workforce-inclusion puzzle. We need to identify this kind of language not only in how we attract job applicants, but also in how we manage and develop existing employees, so that we retain top talent across demographic groups. We need to find the links between language and other kinds of bias, such as language that deters people of different ages, physical abilities, and socio-economic backgrounds from pursuing opportunities they are qualified for. And we need to do it now, before that talent goes to your competitors.
This article was first posted on the SAP Community.