UK issues guidance for responsible AI in recruitment

26 March 2024

Adopting Artificial Intelligence (AI)-enabled tools in HR and recruitment processes could pose novel risks, including perpetuating existing biases, digital exclusion, and discriminatory job advertising and targeting, according to the ‘Responsible AI in Recruitment’ guidance published by the UK government’s Department for Science, Innovation and Technology (DSIT).

The guidance identifies potential ethical risks of using AI in recruitment and hiring processes. However, it notes that tools for trustworthy AI, including AI assurance mechanisms and global technical standards, can play a vital role in managing these risks and building trust.

The guide further outlines how AI assurance mechanisms can provide organisations with the tools, processes and metrics to evaluate the performance of AI systems, manage risks, and ensure compliance with statutory and regulatory requirements.

The guide is aimed at organisations seeking to procure and deploy AI systems in their recruitment processes. It is written for a non-technical audience, assumes only a minimal understanding of AI and data-driven technologies, and is appropriate for organisations with or without a comprehensive AI strategy.

According to the guide, all stages in the recruitment process, including sourcing, screening, interviewing and selection, carry a risk of unfair bias or discrimination against applicants.

“Additionally, inherent to these technologies is a risk of digital exclusion for applicants who may not be proficient in, or have access to, technology due to age, disability, socio-economic status or religion,” the guidance states.

The guidance outlines a range of considerations for all organisations seeking to procure and deploy AI in recruitment. Alongside these, the government sets out mechanisms that may be used to address the concerns, actions or risks identified through these considerations.

“As AI becomes increasingly prevalent in the HR and recruitment sector, it is essential that the procurement, deployment, and use of AI adheres to the UK government’s AI regulatory principles, outlined in ‘A pro-innovation approach to AI regulation’,” the guide states.

These principles are safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

Tania Bowers, Global Public Policy Director at APSCo, said, “Having worked closely with the DSIT to develop the guidance, we are pleased to see the information now publicised.”

Bowers said AI has significant potential in the staffing sector. However, she added, “as with any new tools, there are inherent risks that must be mitigated against, and clear guidance such as this has a crucial role to play.”

“This is particularly timely given the vote earlier this month in favour of the EU AI Act. While there will be an implementation period, this new regulation will impact any recruiters with operations in or who provide services into the EU.”

“There are a number of principles that staffing companies must guarantee they are following so that they aren’t exposing their business to potentially discriminatory systems or inadvertently implementing AI that doesn’t follow the required functions or intentions that these tools should be used for,” Bowers said.

“It’s important to add that these guidelines have been developed to inform staffing firms and aren’t written into law,” Bowers added. “In order to support our members as they navigate the complex AI landscape, we are producing a ten-step plan which provides recruitment firms with a roadmap to follow so that they are compliantly implementing new tools into their solutions.”

The government’s guidance was developed with feedback and contributions from the Information Commissioner’s Office (ICO) and the Equality and Human Rights Commission (EHRC), alongside the Recruitment and Employment Confederation (REC), APSCo, the Chartered Institute of Personnel and Development (CIPD), the charity Autistica, and the Ada Lovelace Institute.

Earlier this month, the European Parliament approved its landmark Artificial Intelligence Act. The regulation aims to “protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field.”