
How AI Works in Recruiting, and What to Do If You Suspect Bias

Artificial intelligence (AI) seems to be everywhere and doing everything lately. It’s writing emails, handing out relationship advice, generating music, creating art—and it’s also very likely deciding whether or not you get that job you applied for.

A large portion of Fortune 500 companies use AI for recruiting in some way, shape, or form, according to Ben Winters, senior counsel at the nonprofit Electronic Privacy Information Center (EPIC) and leader of EPIC’s AI and Human Rights Project, where he focuses on how AI can disproportionately affect marginalized communities in education, hiring, medicine, and other areas. “It’s really throughout the whole life cycle,” Winters says.

You’ve probably already encountered AI in your job search without even knowing it. If you’ve ever submitted your resume to an online job posting, it has most likely been run through an applicant tracking system (ATS): automated software that scans for keywords relevant to the role.
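
To make that concrete, here’s a minimal sketch of how keyword screening works in principle. It’s an illustration of the concept only, not any vendor’s actual system; the keywords and resume text below are invented:

```python
# A toy illustration of ATS-style keyword screening -- not any vendor's
# actual system. It checks what fraction of required keywords appear in
# a resume; the keywords and resume text are invented.

REQUIRED_KEYWORDS = {"python", "sql", "data analysis"}  # hypothetical role requirements

def keyword_score(resume_text: str, keywords: set) -> float:
    """Return the fraction of required keywords found in the resume."""
    text = resume_text.lower()
    return sum(kw in text for kw in keywords) / len(keywords)

resume = "Analyst with five years of SQL and data analysis experience."
print(f"match: {keyword_score(resume, REQUIRED_KEYWORDS):.0%}")  # 67% -- no "python"
```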

But AI can pop up in other parts of the hiring process, too—and it can serve as both a benefit and a detriment to job seekers, depending on how it’s used. Below, we’ll break down where you might encounter AI, the biases that may emerge from it, and what you can do as a hiring manager or job candidate to ensure that each application gets a fair shot.

Where AI exists in hiring and recruiting

Aaron Sines, the director of talent, AI and machine learning at the technical recruiting company Razoroo, says AI is mostly used by recruiting teams to streamline and enhance the hiring process.

This can manifest in a number of ways:

  • In job ads: Algorithms and machine learning can help companies determine where they should advertise certain job postings and to whom. On the job seeker side, it might serve you ads tailored to your online profile or search history. (Think: recommended jobs on LinkedIn or Indeed.)
  • In recruitment efforts: Companies, often with the help of third-party vendors, can use AI to source candidates before they even catch wind of a job opening—by indexing social media, for example. Sines says if you’ve ever posted your resume on a major job board, you may have opened up your information for other software to index and use to get in touch with you. Generative AI—made popular by ChatGPT—can also be leveraged by recruiters to write personalized outreach messages quickly.
  • In job descriptions: Generative AI can help companies craft job postings with minimal effort. LinkedIn recently made this available to organizations that use its platform.
  • After you apply: AI can assist in screening resumes or scanning cover letters for keywords to whittle down a hiring manager’s pile. “It’s going to rank candidates based on the qualifications you have in your description or whatever you set in your filter criteria when you’re sorting through candidates,” Sines says. (A toy version of this ranking step is sketched just after this list.) Some software, he adds, might go a step further and use predictive analytics to predict a candidate’s potential in a role or on a team against certain criteria.
  • During the interview: Winters notes that some companies might use AI to conduct voice or facial analysis, or even administer games, to gauge a candidate’s soft skills or engagement.
  • In the candidate experience: Generative AI is increasingly being used to improve communication between candidates and employers throughout the process. For example, Sines says, candidates might be able to ask questions about a role or company via chatbot, or receive automated email or SMS notifications based on the stage they’re in.
  • After you’re hired: Even when you get the job, AI could still be a factor: “There’s a lot of automated systems that are doing productivity scores, promotion recommendations, stuff like that,” Winters says.
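
As promised above, here’s a toy version of the post-application ranking step Sines describes: candidates are scored against filter criteria a recruiter might set, then sorted. The criteria, weights, and candidates are all invented for illustration:

```python
# A hedged sketch of resume ranking: each candidate is scored against
# filter criteria a recruiter sets, then the pool is sorted. Criteria,
# weights, and candidates here are all invented.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    skills: set
    years_experience: int

REQUIRED_SKILLS = {"kubernetes", "go"}  # hypothetical filter criteria
MIN_YEARS = 3

def rank_score(c: Candidate) -> float:
    skill_match = len(c.skills & REQUIRED_SKILLS) / len(REQUIRED_SKILLS)
    meets_experience = 1.0 if c.years_experience >= MIN_YEARS else 0.0
    return 0.7 * skill_match + 0.3 * meets_experience  # arbitrary weights

pool = [
    Candidate("A", {"go", "kubernetes", "terraform"}, 5),
    Candidate("B", {"go"}, 2),
    Candidate("C", {"kubernetes"}, 4),
]
for c in sorted(pool, key=rank_score, reverse=True):
    print(c.name, round(rank_score(c), 2))  # A 1.0, C 0.65, B 0.35
```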

The benefits of using AI in recruiting

With AI at their fingertips, companies have the potential to reduce the time and costs associated with attracting and vetting candidates. “Think of how much time in hiring and recruiting is—for lack of a better term—wasted on candidates who don’t fit the profile or candidates who don’t have some of the basic requirements for a certain role,” Sines says. “That’s where we’re seeing a huge upside in using AI in recruiting.”

AI can also cut down on the legwork for job seekers looking to get in front of certain companies. “It allows candidates to be in this position where opportunities can come to them more organically,” Sines says. Generative AI and chatbots, too, can offer more transparency in the job search by providing candidates with timely updates and tailored responses.

Finally, AI lends itself well to scalability. “For any company that’s looking to grow and has challenges filling critical roles, it’s almost something you have to be using right now,” Sines says.

The downsides of using AI in recruiting

The reality is that while AI may be faster than a human, it’s less trustworthy and more rigid. “Human recruiters can establish rapport with candidates,” Sines says. A person is also more likely than a robot to pick up on soft skills that can’t be measured but are just as crucial.

Sines adds that AI might struggle with the nuances of certain job titles or keywords. For example, when vetting for software developers, AI might skip over resumes that use “engineer” or “coder,” or abbreviations like HW/SW (hardware/software), if it’s not trained to scan for those terms.
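
Here’s that failure mode in miniature, using a made-up synonym table: a literal-match screen misses an equivalent job title unless someone explicitly teaches it the variants:

```python
# The nuance problem in miniature: a literal scan for "software developer"
# misses equivalent phrasings unless the screen is taught the variants.
# The synonym table is a made-up example.

SYNONYMS = {
    "software developer": ["software engineer", "sw engineer", "coder", "programmer"],
}

def mentions_role(resume_text: str, role: str) -> bool:
    text = resume_text.lower()
    return any(term in text for term in [role] + SYNONYMS.get(role, []))

resume = "Senior SW engineer with 8 years of embedded HW/SW experience."
print(mentions_role(resume, "software developer"))  # True, via "sw engineer"
print("software developer" in resume.lower())       # False -- a naive scan fails
```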

Beyond the human element, both Sines and Winters highlight three major concerns with AI being used in hiring and recruiting: data privacy, lack of regulation, and candidate bias.

Data privacy

Sines says the lack of transparency into how much AI software and many algorithms work can raise issues around data privacy. In other words, job seekers may not have control over, or even awareness of, the information being used about them.

“When we talk about being careful about what you share, it’s never been more real than it is now with everything you’re posting being indexed—and being indexed by where, we don’t always know,” Sines says.

“Regulation is definitely going to be important, and it should prioritize, above all, transparency. Right now there aren’t requirements for a lot of these tools to disclose to users how they’re using the data,” he adds.

Lack of regulation

AI—and AI in recruiting specifically—is fairly unregulated as of now, which leaves room for a plethora of problems for companies and workers alike, such as discrimination (more on that below). Without regulation, AI companies can also capitalize on keeping workers and companies in the dark.

“A lot of times that’s the proprietary ‘secret sauce’ that makes these platforms appealing—you don’t know how it works,” Sines says. “It feels like magic.”

But that’s all starting to change: A new law in New York City, for example, requires companies that use automated employment decision tools (AEDTs) in hiring decisions to conduct annual audits to ensure their systems are free of bias. They must also post the results of their audit for the public to see, notify applicants when they’ll be evaluated by an AEDT, and include instructions for requesting reasonable accommodations, should the applicant need it. A similar law passed in Illinois in 2019 requires companies to inform applicants of any AI involvement in video interviews and obtain the applicant’s consent.
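
At the heart of such a bias audit is a simple calculation: compare each demographic group’s selection rate to the rate of the most-selected group, yielding an “impact ratio.” Here’s a simplified sketch with invented numbers; the 0.8 threshold in the comments is the EEOC’s informal “four-fifths” benchmark, not a requirement of the NYC law:

```python
# A simplified version of the calculation these bias audits center on:
# compare each group's selection rate to the most-selected group's rate
# (the "impact ratio"). All numbers here are invented.

applicants = {"group_a": 200, "group_b": 150}  # hypothetical applicant counts
selected   = {"group_a": 50,  "group_b": 15}   # hypothetical pass-throughs

rates = {g: selected[g] / applicants[g] for g in applicants}
top_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / top_rate
    flag = " <- below the EEOC's informal 0.8 benchmark" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f}{flag}")
# group_a: 25%, ratio 1.00; group_b: 10%, ratio 0.40 -- a disparity an
# audit would surface for scrutiny.
```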

Bias and discrimination

The most prominent issue surrounding AI recruiting software is the potential to tilt toward certain demographics and away from others.

There are two approaches AI can use to source candidates for a particular role, Sines says: a skills-based approach, where applicants are evaluated based on how closely their skills match the job, and a matching approach, where applicants are evaluated based on how they compare to past or existing employees. The latter is where a lot of biases can emerge.

“It can perpetuate that historical hiring bias, and that can just further perpetuate any diversity issues,” he says.

For example, maybe the majority of your tech team is composed of white men—AI might look at that data and only serve you male candidates. (This happened to Amazon a few years ago—the company ended up scrapping its proprietary recruiting engine after an audit showed it was biased against women.)

Or maybe most of your past hires come from Ivy League schools. As a result, AI might exclude candidates who come from HBCUs or public colleges, even if their resumes match the role in every other way.
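
Here’s a toy demonstration of how that happens with the matching approach: if the score rewards similarity to past hires, any imbalance in those hires (alma mater, in this fabricated example) flows straight into new rankings:

```python
# A toy demonstration of how matching against past hires inherits their
# skew: when the score rewards similarity to previous employees, an
# imbalance in those employees (alma mater here) skews every new ranking.
# All data is fabricated.

from collections import Counter

past_hires = ["ivy", "ivy", "ivy", "state", "ivy"]  # historically skewed
school_prior = Counter(past_hires)

def match_score(candidate_school: str, skills_fit: float) -> float:
    # "Similarity to past hires" quietly becomes a school preference.
    similarity = school_prior[candidate_school] / len(past_hires)
    return 0.5 * skills_fit + 0.5 * similarity  # arbitrary weights

print(match_score("ivy", skills_fit=0.7))   # 0.75
print(match_score("hbcu", skills_fit=0.9))  # 0.45 -- stronger skills, lower score
```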

Winters adds that some facial recognition software can be biased against certain races or genders (his organization filed a federal complaint against AI-hiring company HireVue in 2019 for its use of face-scanning technology); speech recognition software may not pick up certain dialects or accents; and games or assessments might discriminate against older job seekers and those with learning disabilities, such as dyslexia, or visual impairments, like color-blindness.

“Both for employers and employees, there could be missed opportunities there due to the rigidity of it,” Winters says.

What job seekers can do about AI bias

Sines says that while he’s spoken to plenty of candidates who were convinced they were passed over due to biased screenings, there’s not much you can do beyond focusing on what you can control.

“You can certainly reach out to someone within the company and ask more questions,” he says, to get clarity on why you were rejected and feedback for future job opportunities. He also recommends “selective keyword placement”—using relevant keywords throughout your resume and cover letter without “keyword stuffing,” which may turn off an ATS.

Don’t get fancy with your resume either, he says; skip images and intricate design elements. “That’s very likely to get blocked by a lot of these algorithms…the formatting just breaks and it can’t index it properly,” Sines says.
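
For job seekers who want to sanity-check their own keyword coverage before hitting submit, here’s a rough sketch in the spirit of that advice. The term list and texts are placeholders; a real check would pull terms from the actual posting:

```python
# A rough self-check in the spirit of "selective keyword placement": see
# which job-posting terms your resume covers without stuffing any of them.
# The term list and texts are placeholders; use terms from the real posting.

import re

def words(text: str) -> list:
    return re.findall(r"[a-z+#]+", text.lower())

def coverage(resume: str, posting_terms: set) -> dict:
    resume_words = words(resume)
    found = {t for t in posting_terms if t in resume_words}
    counts = {t: resume_words.count(t) for t in found}  # crude stuffing check
    return {"covered": found, "missing": posting_terms - found, "counts": counts}

terms = {"python", "sql", "etl", "airflow"}
resume = "Built ETL pipelines in Python and SQL; scheduled jobs with cron."
print(coverage(resume, terms))
# covered: python, sql, etl (once each); missing: airflow
```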

If you happen to be given the name of the third-party vendor screening your application or interview, Winters suggests doing a Google search to find out whether the company has been accused of discriminatory behavior. “If you are rejected from a job surprisingly, or if you have some sort of hunch that the determination was through automated means and might also be discriminatory based on a protected class, it’s worth making a complaint to the Equal Employment Opportunity Commission or the DOJ Disability Rights,” he adds. “Applicants could also lobby their local reps at a state or city level to enact regulations.”

What companies can do about AI bias

Sines believes the best recruitment approach is collaboration between AI and humans. In other words, it’s the responsibility of recruiters, hiring managers, and decision-makers to monitor the AI tools they use and hold them accountable.

“Human oversight is so, so important. Human judgment and using human decision-making will help prevent those unfair outcomes,” he says.

Companies and leaders can start by developing ethical guidelines and frameworks for how AI is used in recruiting. “You should certainly ensure that you’re using diverse and unbiased data if and wherever possible,” he says. “And if you’re one of these larger enterprises where you’re building your own in-house tools, such as Amazon did, it’s really important to train your models on diverse and representative data.” He also recommends scheduling regular audits and having them conducted by a third party.

When evaluating whether to use an outside vendor, ask for an audit of their system, and ask questions about accuracy rates and checks and balances. If you’re noticing patterns where only certain types of candidates are making it through screenings, Winters says, “you could likely talk to the vendor to try to customize your system, or downgrade to a less involved system.”

Overall, he adds, “Hiring managers should be wary of things that are too good to be true.”

The future of AI in recruiting

The good news is that as AI becomes more common in hiring, it will hopefully push lawmakers to crack down on discriminatory behavior and provide job seekers with more transparency around these tools. Companies, too, will become more liable for how AI functions in their organizations.

“Since they will be held responsible for the vendor’s actions, that hopefully will inspire companies to do more bias audits, to be more careful about what systems they’re using,” Winters says.

Until then, Sines is confident the role of the recruiter is here to stay. “Ensuring that positive candidate experience is something only a human can do,” he says. “But AI will—and can—definitely augment that human experience.”