How to transparently use artificial intelligence during your hiring process

Staffing Stream

Kip Havel
| December 11, 2024

People have polarizing opinions about artificial intelligence. Look no further than the Capterra Job Seeker AI survey to see the split between the advocates and detractors:

  • 62% of job seekers believe their chances of earning a job would be better if AI were used in hiring and recruiting.
  • 38% of respondents were turned off by companies that used AI too frequently during the hiring process.

That’s a significant split staffing leaders can’t dismiss. Though AI has improved quickly, this technology is only as good as its data models and users. When AI is transparently transactional or appears discriminatory, a meaningful percentage of the talent pool will take their careers elsewhere.

With that said, we’re not suggesting you put the future of AI in hiring on pause. Instead, we recommend that staffing leaders apply careful consideration to how they implement this game-changing technology.

Here are some tips for your organization to prioritize as you deepen your use of artificial intelligence in hiring.

Why You Need to Be Transparent About Your AI Strategy

The staffing and recruiting industry is built on relationships, all of which require a sense of trust between candidates and recruiters. If top talent doubts that your recruiters will treat them like people and give them individual attention when it counts, they may be skeptical of your recommendations and unlikely to feel any loyalty.

Transparency applies to more than just generic messages. Increasingly, AI is being used throughout the entire hiring lifecycle, from sourcing and screening to interviewing, assessing and onboarding. Though these use cases can accelerate the hiring process, they can also (if not audited) entrench biases or greenlight flawed decision-making.

Consider AI assessment games. If there is a lack of face validity in AI assessments (e.g., the measures and methods are unclear, inappropriate or flawed), then not only are the findings useless, but they can also prevent you from finding the perfect candidates.

Author Hilke Schellmann outlined in her book The Algorithm how some AI tools failed to infer personality traits from the games they provided clients. When a former employment lawyer reviewed the vendor’s technical report, he stated the platform “does not measure a candidate’s ability to perform the functions of any specific job, and … the things it does measure are not all that predictive of a good job performance.” If your organization lacks transparency into the tools you use, then you risk missing top performers — or even being sued.

Transparency protects your ability to serve candidates and helps you avoid fines and flawed decisions.

How to Enhance Your Artificial Intelligence Uses

Creating a game plan for transparency requires intentionality throughout the entire process. Here are some straightforward steps to take:

Train your people on prompt engineering. Artificial intelligence can’t read anyone’s mind, and it’s definitely not perfect. However, teaching your recruiters to extract the desired output from artificial intelligence can elevate the quality of messages to candidates.

Encourage them to be specific, set parameters, give context, break down complex tasks and even refine requests until they get what they want. Whether they’re generating messages, scraping job boards or setting assessment requirements, they need to think of the potential outcome — and review the results with a careful eye.

Keep a human in the loop. Entrusting AI with the authority to make independent decisions can backfire. For mundane messaging, there might not be as much blowback, but if your screening or assessment processes accidentally start to disqualify candidates based on protected classes, then you might be at risk of lawsuits.

Keeping a human in the loop or regularly auditing your AI-powered processes can ensure that biased or flawed decision-making doesn’t persist for too long. Plus, involving the right people in the review process can ensure ethical treatment is achieved.

Review your AI vendors. If you choose to purchase an off-the-shelf AI solution, you need visibility into their processes. How do their AI programs make decisions? Does their organization review or conduct QA on the decision-making process? How do they prevent or eliminate bias in their tools?

Though AI is making big waves, following the above steps can ensure you’re using these tools in ways that will make candidates feel safe, secure and valued.