The FCA, the ICO and the European Commission all announced new AI-related initiatives yesterday, 19 February 2020. The FCA announced a year-long collaboration with The Alan Turing Institute that will focus on AI transparency in the context of financial services. The ICO announced a new consultation on its draft Guidance on the AI auditing framework, and on the same day, the European Commission published its White Paper: On Artificial Intelligence - A European approach to excellence and trust, which lays out options for a European Union-wide regulatory framework on AI.

These initiatives follow a number of other regulatory developments relating to AI that have already taken place in 2020. This shows the level of scrutiny that regulators and lawmakers are currently giving to the deployment of AI.

FCA collaboration on AI transparency

The FCA's collaboration with The Alan Turing Institute will focus on AI transparency in financial services. The FCA's announcement acknowledges that, along with all of the potential positives that come from the use of AI in financial services, the deployment of AI raises some important ethical and regulatory questions. It considers that transparency is a key tool for reflecting on those questions and thinking about strategies to address them.

Along with announcing this initiative, the FCA has set out a high-level framework for thinking about AI transparency in financial markets. The framework operates around four guiding questions:

  1. Why is transparency important?
  2. What types of information are relevant?
  3. Who should have access to these types of information?
  4. When does it matter?

Because the opportunities and risks associated with the use of AI may vary, the FCA does not think that a 'one-size-fits-all' approach to AI transparency can be followed. Instead, the FCA suggests that decision-makers develop a 'transparency matrix' which can be used to map different types of information to different types of relevant stakeholders and help structure a systematic assessment of transparency interests.

The FCA's collaboration with The Alan Turing Institute follows a similar link-up between the ICO and The Alan Turing Institute called Project ExplAIn, which aimed to provide guidance on explaining AI decisions to the individuals affected by them. One output of that project was the publication in December 2019 of the ICO's draft guidance Explaining decisions made with AI, which was created in conjunction with The Alan Turing Institute. You can read more about the draft guidance here. The ICO's consultation on that draft guidance closed in January 2020 and the final guidance is expected later this year.

ICO consultation on AI auditing framework guidance

The ICO's new draft Guidance on the AI auditing framework provides advice on how to interpret data protection law as it applies to AI, and recommends technical and organisational measures that can be implemented to mitigate the risks that the use of AI may pose to individuals. It deals with:

  1. Accountability and governance;
  2. Lawfulness, fairness and transparency in AI systems;
  3. Security and data minimisation in AI; and
  4. Enabling individual rights in AI systems (e.g. rights of information, access, rectification, erasure, and rights in relation to solely automated decisions).

The ICO says that it is eager to hear views on the draft guidance from people who have a compliance role (e.g. DPOs, general counsel, risk managers) as well as technologists (e.g. ML experts, data scientists, software engineers, IT risk managers).

The consultation closes on Wednesday 1 April 2020.

EC White Paper proposes regulation of high risk AI applications

On the same day that the FCA and the ICO made their AI-related announcements, the European Commission published a White Paper that lays out options for a specific regulatory framework on AI.

The proposed regulatory framework would focus on high risk AI applications. The White Paper defines 'high risk' applications as those which:

  1. Are deployed in a sector where, given the nature of the activities typically undertaken, significant risks can be expected to occur; and
  2. Are used in that sector in such a manner that significant risks are likely to arise.

The EC proposes that the regulatory requirements might cover:

  • Training data;
  • Data and record-keeping;
  • Information to be provided;
  • Robustness and accuracy;
  • Human oversight; and
  • Specific requirements for certain particular AI applications, such as those used for purposes of remote biometric identification.

In the White Paper, the EC suggests that mandatory assessments, inspections and certifications could be used to verify that AI systems - including the algorithms and data sets used by them - conform to the new regulatory requirements.

The consultation on the proposals set out in the White Paper closes on 19 May 2020.

AI keeping regulators busy

It is a busy time for regulatory initiatives relating to AI and yesterday's announcements follow a number of other developments in the first two months of 2020. 

In January 2020 the European Banking Authority published a new Report on big data and advanced analytics, in which it identified some key risks associated with the deployment of AI and ML technologies - see further here.

Building on their joint survey on Machine learning in UK financial services, the FCA and the Bank of England announced in January 2020 that they are establishing a forum to further dialogue with the public and private sectors to better understand the use and impact of AI and machine learning within financial services – see further here.