New ICO guidance: explaining decisions about individuals made using AI systems

Our team: Tim Wright


In April 2019, the UK Government published the AI Sector Deal policy paper[1], which placed machine learning, AI and big data at the heart of the UK’s Industrial Strategy[2]. Other Sector Deals announced at the same time covered life sciences, construction and the automotive sector.

Recognising the need to make sure AI benefits everyone in the UK, the AI policy paper lays out the requirement for The Alan Turing Institute (the national institute for data science and AI) and the Information Commissioner’s Office (ICO) to work together to develop guidance to assist in explaining AI decisions. Alongside this, the government announced that it would establish a Centre for Data Ethics and Innovation[3] to advise on the safe and ethical use of data, including for AI.

The ICO and The Alan Turing Institute opened a consultation on their draft Explaining Decisions Made With AI guidance in January 2020 and recently published the finalised (non-statutory) guidance.[4]

What does the guidance say?

The guidance sets out what explanation organisations should give when using AI systems to make decisions about individuals. It applies wherever AI is used, whether or not there is human input or intervention in the decision-making process. The basic requirement is that organisations must be able to demonstrate that the AI system’s developers acted responsibly to ensure that the reasoning behind an AI-assisted decision is clear, and that the individuals concerned can obtain meaningful information about the system logic so that they can express their point of view and, where appropriate, challenge the decision.

The guidance is not binding but is intended to be a practical guide which clarifies how to apply data protection obligations in this area and highlights best practice. As always, being able to demonstrate that the guidance was considered and followed is likely to be extremely helpful should any issues arise later on.

The guidance is split into 3 parts:

  • Part 1: the basics of explaining AI
  • Part 2: explaining AI in practice
  • Part 3: what explaining AI means for your organisation

In addition, there are a number of annexes, which include a worked example of using AI in cancer diagnostics (annexe 1), as well as a useful table describing various algorithmic techniques (annexe 2).

The guidance also highlights other legal obligations relevant to good practice when explaining AI-assisted decisions such as the Equality Act 2010.

Who is the guidance for?

The guidance helpfully points the reader to particular areas depending on their role in the organisation:

  • part 1 is for data protection officers (DPOs) and compliance teams;
  • part 2 is aimed primarily at technical teams (although DPOs and compliance teams will also find it useful); and
  • part 3 – which describes the various roles, policies, procedures and documentation which can be adopted as good practice in this area – is aimed at senior management (although, again, DPOs and compliance teams will find it useful).

Key takeaways:

  • System development

Whether development of the AI system is done entirely in-house, with the assistance of a third-party consultancy or outsourcing partner, or by a third party developing products to supply to the UK market, the development team should pay particular attention to the six tasks described in the guidance. These are intended to ensure the design and deployment of appropriately explainable AI systems, as well as assisting in providing clarification of the results these systems produce to a range of stakeholders, i.e. operators, implementers, auditors and affected individuals.

The tasks are listed below (more detail is provided in the guidance itself) and are supplemented by the worked example in annexe 1:

  1. Select priority explanations by considering the domain, use case and impact on the individual
  2. Collect and pre-process the data in an explanation-aware manner
  3. Build the system to ensure that the relevant information for a range of explanation types can be extracted
  4. Translate the rationale of the system’s results into usable and easily understandable reasons
  5. Prepare implementers to deploy the system
  6. Consider how to build and present the explanation
  • System procurement

Organisations procuring AI systems (or significant components) from third parties need to be aware that, as a data controller, they have primary responsibility for ensuring the AI system is capable of producing an appropriate explanation for the recipient of the decision. The guidance points out that organisations procuring AI systems from third parties:

  • must be able to understand how the system works as well as being able to extract meaningful information from the system in order to provide an appropriate explanation; and
  • should consider getting the third party to provide training and support, for example so that implementers can understand the model being used.

Next steps

For anyone involved in developing or sourcing AI systems to be used to make decisions about individuals (e.g. loan applications, motor insurance, medical diagnostics, and recruitment screening products), the guidance is a useful and comprehensive resource, with worked examples and checklists intended to help ensure that AI systems are used in a safe and ethical way, with due regard to the General Data Protection Regulation and other applicable laws and regulations.

If you are involved in AI system development or implementation, and wish to discuss best practices and approaches to adopt for your project, please contact the author or your usual Fladgate contact.
