Responsible AI in Recruitment

The UK government has released comprehensive guidance on the responsible use of Artificial Intelligence (AI) in the recruitment process. The guidance aims to help organisations identify and mitigate potential risks associated with AI-enabled recruitment tools, while leveraging their benefits.

We set out the key highlights below:

Potential Benefits and Risks

AI can automate and streamline recruitment processes, promising greater efficiency, scalability, and consistency. However, these technologies also pose novel risks, including perpetuating existing biases, digital exclusion, and discriminatory job advertising and targeting.

AI Regulatory Principles

The guidance outlines the UK government's AI regulatory principles that organisations should adhere to when deploying AI systems. These are:

  • safety, security, and robustness,
  • appropriate transparency and explainability,
  • fairness,
  • accountability and governance, and
  • contestability and redress.

Assurance Mechanisms

The guidance recommends implementing AI assurance mechanisms throughout the procurement and deployment lifecycle to operationalise these principles.

These mechanisms include:

Before Procurement

  • Defining the purpose and desired functionality of the AI system
  • Assessing resources, governance, and employee training needs
  • Evaluating applicant accessibility requirements and data protection compliance

During Procurement

  • Conducting due diligence on suppliers' claims and evidence
  • Assessing model cards and documentation
  • Performing bias audits and risk assessments

Before Deployment

  • Conducting data protection impact assessments (DPIAs)
  • Implementing user feedback mechanisms
  • Ensuring transparency by clearly signposting AI use to applicants

Live Operation

  • Ongoing monitoring and evaluation
  • Addressing user feedback and issues promptly

Key Considerations

The guidance highlights several key considerations for organisations using AI in recruitment:

1. Purpose and Functionality: Clearly define the problem the AI system aims to solve and its desired outputs.
2. Resources and Governance: Assess how the system will integrate with existing processes, and provide necessary training and oversight.
3. Applicant Accessibility: Ensure the AI system does not create new barriers or amplify existing risks for applicants with protected characteristics, as per the Equality Act 2010.
4. Data Protection: Comply with the UK GDPR and Data Protection Act 2018, including conducting DPIAs for high-risk AI systems.
5. Transparency: Clearly signpost the use of AI to applicants, allowing for contestability and redress.

The guidance emphasises that AI assurance is an iterative process that should be embedded throughout an organisation's practices to ensure responsible and successful AI deployment in recruitment.

‘Somewhat aligned’ to the EU Approach

Whilst the UK guidance does not explicitly compare its approach to the EU AI Act, organisations now looking to implement AI compliance strategies and policies will be relieved that there are some notable similarities:

  • Risk-based approach: Both the UK guidance and the EU AI Act take a risk-based approach, recognising that different AI systems pose varying levels of risk and require appropriate governance measures.
  • Key principles: The UK guidance outlines five AI regulatory principles (safety, security, robustness; transparency; fairness; accountability; and contestability) that align closely with the key requirements in the EU AI Act, such as transparency, human oversight, robustness, and non-discrimination.
  • Assurance mechanisms: Both the UK guidance and the EU AI Act emphasise the importance of implementing assurance mechanisms throughout the AI system's lifecycle, such as risk assessments, bias audits, and ongoing monitoring, to mitigate potential harms.

However, the UK guidance departs from the EU approach in some ways. For instance, the guidance is a non-binding set of recommendations and best practices aimed at a specific sector. The EU AI Act, by contrast, once finally adopted, will be legally binding: it covers a wide range of actors across multiple sectors in the AI supply chain (e.g. producers, deployers, importers and distributors), regulates a much broader range of AI systems, and carries stringent enforcement mechanisms and potentially significant penalties for non-compliance. In addition, the EU AI Act automatically classifies AI systems used for recruitment and employee selection as "high-risk" AI systems, making their distribution and use within the EU subject to strict requirements and obligations, such as having to undergo conformity assessment to verify compliance with the Act's requirements before the system can be placed on the market.

Preparing for compliance

The UK government's guidance highlights both the potential benefits and risks of using AI in recruitment processes. On the one hand, AI-enabled tools can automate and streamline recruitment, promising greater efficiency, scalability, and consistency; on the other, these technologies pose novel risks, such as perpetuating existing biases, creating digital exclusion, and enabling discriminatory job advertising and targeting.

Responsible AI deployment in recruitment requires a comprehensive approach. Organisations should implement AI assurance mechanisms throughout the procurement and deployment lifecycle, including clearly defining the purpose and functionality of AI systems, assessing resources and governance needs, ensuring applicant accessibility, maintaining data protection compliance, and providing transparency to applicants. Where applicable, organisations will also need to ensure that their assurance mechanisms and compliance measures are sufficient to satisfy the requirements of other regulators in the territories in which they operate.

Ultimately, while automated AI tools can undoubtedly make the recruitment process more efficient, this comes with some risk. Care will need to be taken to ensure that AI tools selected are compatible with existing diversity commitments (such as to guarantee disabled applicants an interview if they meet the minimum criteria) and to ensure that potential bias or discrimination is identified and addressed early.
