On 2 December 2019 the ICO published its first piece of draft regulatory guidance on the use of AI, entitled Explaining decisions made with AI.  It was produced by the ICO in conjunction with The Alan Turing Institute.  The draft guidance is open for consultation until 24 January 2020.

The ICO says that the aim of the guidance is to help organisations explain, to those affected, how AI-related decisions about them are made (the 'explainability' of AI systems has been the subject matter of Project ExplAIn, a collaboration between the ICO and The Alan Turing Institute).

The draft guidance is based on four key principles (which have their origins in the GDPR) that organisations should think about when developing AI systems:

  1. Be transparent
  2. Be accountable
  3. Consider context
  4. Reflect on impacts

The guidance is not short (c.160 pages) and is divided into three Parts:

  1. The basics of explaining AI
  2. Explaining AI in practice
  3. What explaining AI means for your organisation

Part 1 (The basics of explaining AI) covers some of the basic concepts (e.g. what is AI? what is an AI-assisted decision?) and the legal framework (e.g. the GDPR and the Data Protection Act 2018).  This part of the draft guidance proposes six 'main' types of explanation that the ICO and The Alan Turing Institute have identified for explaining AI decisions: rationale explanation, responsibility explanation, data explanation, fairness explanation, safety and performance explanation, and impact explanation.

Part 2 (Explaining AI in practice) - the lengthiest of the three parts - is practical and more technical in nature.  It provides guidance on how you might go about providing meaningful information about the logic of your AI system, and includes worked examples that apply the six main types of explanation introduced in Part 1.
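By way of illustration (the sketch below is ours rather than an example taken from the draft guidance, and the dataset and model are purely illustrative), a crude 'rationale explanation' for a simple model might begin with the model's own learned weights:

```python
# A minimal sketch, not drawn from the ICO guidance, of one simple way to
# surface meaningful information about the logic of a model: train a
# logistic regression and report the features that weigh most heavily in
# its decisions.  The dataset and model choice are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

# Rank features by the absolute size of their (standardised) coefficients,
# a very basic form of 'rationale explanation' for a linear model.
coefs = model.named_steps["logisticregression"].coef_[0]
ranked = sorted(zip(data.feature_names, coefs),
                key=lambda pair: abs(pair[1]), reverse=True)
for name, weight in ranked[:5]:
    print(f"{name}: {weight:+.2f}")
```

Real AI systems will rarely be this simple, and models whose logic cannot be read off directly will call for more sophisticated explanation techniques; the sketch is intended only to make the idea of a rationale explanation concrete.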

Part 3 (What explaining AI means for your organisation) focusses on the various roles, policies, procedures and documentation that organisations should consider implementing to ensure that they are in a position to provide meaningful explanations about their AI systems.  This part of the draft guidance covers the role of the 'AI development team' (which includes the people involved in inputting data into the AI system, in building, training and optimising the models to be deployed in the AI system, and in testing the AI system), as well as the role of the Data Protection Officer (if one is designated) and other key decision-makers within an organisation.

The ICO blog post announcing the opening of this consultation states that real-world applicability is at the centre of its guidance.  It will be interesting to see what sort of feedback the ICO receives, in particular from those who are already deploying AI systems.