The ICO (the UK’s data protection authority) has published a new blog post setting out what organisations should consider when carrying out a Data Protection Impact Assessment (DPIA) for the processing of personal data in AI systems. The post is part of a series on the development of the ICO’s regulatory framework for AI.

If you are considering deploying AI in a new or existing system, or have already deployed AI that processes personal data, the blog post provides useful guidance on what the ICO would expect your DPIA to address. If you have been weighing whether a DPIA is required for a new AI deployment that will process personal data, the blog post is clear: the ICO considers that using AI to process personal data will usually trigger the legal requirement under the General Data Protection Regulation (GDPR) to complete a DPIA.

If you have already undertaken a DPIA for your AI deployment, now may be a good time to review whether it needs updating, either in light of the ICO’s blog post or because the nature, scope, context or purpose of the processing has changed. As the blog post points out, a DPIA should be treated as a ‘live’ document, subject to regular review.

The following points from the blog post may be of particular interest to anyone deploying AI systems:

  1. It can be difficult to describe the processing activity of a complex AI system. It may be appropriate to maintain two versions of the DPIA: the first presenting a thorough technical description for specialist audiences, and the second containing a high-level description of the processing that explains how the personal data inputs relate to the outputs affecting individuals. (Following this guidance may also leave you better prepared to respond to requests from data subjects exercising their right under the GDPR to receive meaningful information about the logic used in automated decision-making, as well as the significance and consequences of the processing.)
  2. Where a data processor is used, some of the more technical elements of the processing activity can be illustrated in the DPIA by reproducing information from that processor (e.g. a flow diagram from a processor’s manual). However, the blog warns against data controllers simply copying large sections of a processor’s literature into the DPIA.
  3. Where AI systems are partly or wholly outsourced to external providers, both organisations should assess whether joint controllership has been established under the GDPR and, if so, collaborate in the DPIA process.
  4. If AI systems complement or replace human decision-making, the blog post states that the DPIA should document how the project might compare human and algorithmic accuracy side-by-side to better justify use of AI.
  5. The DPIA should consider risks to data subjects under legal frameworks beyond data protection. The ICO gives two examples: a machine learning system may reproduce discrimination from historic patterns in its data, which may fall foul of equalities law, or it may stop content being published based on analysis of a creator’s personal data, which impacts their freedom of expression.
  6. In the context of the AI lifecycle, a DPIA will best serve its purpose if undertaken at the earliest stages of project development. It is important that data protection officers (DPOs) and other information governance professionals are involved early – the DPO’s professional opinion should not come as a surprise at the eleventh hour.