Women believe AI reduces bias in recruitment process – but does it?

A recent study showed that women prefer AI assessors in recruitment, but perhaps they just prefer them over men.
AI can piece together information to discriminate against applicants based on gender, and women are likely to suffer.

The Monash Business School recently shared some surprising research that found women believe artificial intelligence (AI) assessments reduce bias in the job recruitment process, while men fear they remove an advantage.

The study was conducted through two field experiments. In the first, more than 700 applicants for a web designer position were informed whether their application would be assessed by AI or by a human. Women were more likely to complete their applications when told AI would be doing the assessing.

The second experiment took the recruiters’ perspective: 500 tech recruiters were asked to score applications under three conditions – when the applicant’s gender was revealed, when it was hidden, and when they had access to both the AI score and the applicant’s gender.

Women were consistently scored lower than men when recruiters knew the applicant’s gender, but this bias disappeared both when gender was hidden and when recruiters had access to the AI score.

In response to these findings, Professor Andreas Leibbrandt, from the Department of Economics, drew the conclusion that recruiters “use AI as an aid and anchor – it helps remove the gender bias in assessment”.

This research may suggest that AI is a more neutral counterpart to human assessors, but it’s important to note that Leibbrandt’s focus is on the human interaction with AI, rather than the algorithm behind it.

While humans may instinctively trust AI to be ‘objective’, algorithmic biases in AI can be blatantly discriminatory.

Can humans undo recruitment bias in AI?

In ‘Algorithmic Bias in Job Hiring’, published in July this year by the University of Minnesota’s Gender Policy Report, researchers found that “women were more likely to prefer the algorithm if the alternative was a male rather than a female rater” (writer’s emphasis).

This means that women do not inherently have more faith in AI, but that they believe the alternative – having men rate their application – is more detrimental. Women would rather have their fates determined by a machine than by a man.

The report adds, “As AI becomes further integrated into workplace hiring, this may exacerbate gender bias in employment. Victims of discrimination will have no way of pointing their finger at someone who has committed a misdeed, since algorithms cannot be held accountable or brought to justice for bias.

“For women, discrimination embedded in algorithms could therefore be more problematic than that of biased humans, especially if algorithms are seen as neutral and objective by women themselves. This could reduce their awareness of potentially discriminating decisions.”

Algorithmic literacy is highlighted as especially important, so that women can identify these biases and question or resist algorithmic evaluations that are discriminatory.

In research published by the University of Melbourne in 2023, titled ‘When it comes to jobs, AI does not like parents’, it was found that AI’s algorithmic bias can not only bypass practices like ‘resumé blinding’ and discriminate through gender cues, but also pick up a parental leave gap in your CV.

ChatGPT ranked CVs lower when researchers added a gap of around two years for parental leave between jobs, regardless of whether the applicants were male or female. This bias will ultimately be more detrimental to women, who take on the bulk of care work and are more likely to have parental leave gaps in their CVs than men.

Algorithms are not inherently biased, but bias can be introduced unintentionally – it all comes down to training data. AI systems utilised by recruiters are trained predominantly on male employees’ CVs. From that data, a system can learn to associate qualities with a binary perception of gender, even if the words ‘male’, ‘female’, ‘men’ or ‘women’ are never present.
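To make that mechanism concrete, here is a minimal, deliberately artificial sketch in Python (using scikit-learn; the CV snippets, hobby words and hiring labels are all invented for illustration, not drawn from any real recruitment system). A classifier trained on skewed historical outcomes learns to penalise words that merely correlate with gender, even though no explicit gender terms appear anywhere in the data.

```python
# Toy illustration of proxy bias: no CV mentions gender, yet the model
# learns to reward or penalise hobby words that correlate with it in
# the (entirely synthetic) historical hiring outcomes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

cvs = [
    "python developer hockey club captain",   # historically hired
    "java engineer golf team member",         # historically hired
    "python developer netball club captain",  # historically rejected
    "java engineer softball team member",     # historically rejected
] * 25  # repeated so the tiny model has enough examples to fit
labels = [1, 1, 0, 0] * 25

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(cvs)
model = LogisticRegression().fit(X, labels)

# The learned weights show the signal sits in the proxy words:
# 'hockey'/'golf' get positive weights, 'netball'/'softball' negative,
# while the actual skills ('python', 'java') carry no weight at all.
for word, weight in zip(vectoriser.get_feature_names_out(), model.coef_[0]):
    print(f"{word:10s} {weight:+.2f}")
```

In real systems the proxies are subtler – sports, schools, word choices, career gaps – but the mechanism is the same, and ‘resumé blinding’ cannot remove correlations like these.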

Decreasing algorithmic discrimination in the recruitment process requires changes starting at the dataset, as well as among the people working in the tech industry. A UNESCO study highlighted that women currently represent only 20% of employees in technical roles at major machine learning companies, 12% of AI researchers and 6% of professional software developers. In addition, over 80% of AI professors are men.


The study presented by Leibbrandt points to a hopeful future in which AI may help recruiters take a more objective stance – but not if the tools they rely on are already flawed. Bias is not so much removed as relocated and, alarmingly, algorithmic bias is much more opaque.

The issues at hand operate like a loop, especially given the male dominance of machine learning: we need gender equity in the recruitment process to decrease bias in the very AI systems being used for recruitment.

Recruitment practices that utilise AI assessment of course have their advantages, but a fair amount of work needs to be done before they can be anti-discriminatory. For the majority of us whose careers may be affected, algorithmic literacy is the first step towards identifying bias and holding organisations accountable for fair hiring practices.

Celina Lei is the Diversity and Inclusion Editor at ArtsHub. She acquired her M.A. in Art, Law and Business in New York, with a B.A. in Art History and Philosophy from the University of Melbourne. She has previously worked across global art hubs in Beijing, Hong Kong and New York in both the commercial art sector and art criticism. She took part in drafting NAVA’s revised Code of Practice - Art Fairs and was the project manager of ArtsHub’s diverse writers initiative, Amplify Collective. Most recently, Celina was one of three Australian participants in DFAT’s Future of Leadership program. Celina is based in Naarm/Melbourne. Instagram: @lleizy_