By the looks of it – a lot of effort.
A joint paper recently published by UK Finance and Microsoft offers useful insights for financial institutions implementing AI. Here are just a few:
- AI is not merely a technology tool. Principles of fairness, privacy and security, transparency, and accountability are a starting point for considering the broader implications of AI and its appropriate use.
- Organisations must a) put processes in place to identify bias in datasets and ML algorithms, and b) be transparent about how AI models make decisions, so that others can judge and challenge definitions of fairness. Explainability of AI/ML is vital for customer reassurance and is increasingly required by regulators.
- Without appropriate testing, governance and control, a rapid growth in AI models could cause significant reputational damage and a consequent loss of consumer trust.
- AI skills and expertise should not be confined to technology teams. The drive for AI adoption needs to start at the top of the organisation and filter down to all levels.
- By focusing on small incremental wins with a clear return on investment, while building an AI-driven culture, organisations can maximise the opportunity that AI brings.
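The point about identifying bias can be made concrete. As an illustration only (the paper does not prescribe any particular method, and the data below is hypothetical), here is a minimal sketch of one common fairness check, the demographic parity difference: the gap in positive-outcome rates between customer segments.

```python
# Minimal sketch of one common dataset-bias check: demographic parity.
# Hypothetical data and function; the UK Finance/Microsoft paper does not
# prescribe this method.

def demographic_parity_difference(outcomes, groups, positive=1):
    """Gap between the highest and lowest positive-outcome rate across groups."""
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(1 for o in group_outcomes if o == positive) / len(group_outcomes)
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan-approval decisions (1 = approved) by customer segment.
approved = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
segment = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_difference(approved, segment)
print(rates)  # approval rate per segment
print(gap)    # a large gap may signal bias worth investigating
```

A check like this is only a starting point: a large gap does not prove unfair treatment, and a small one does not rule it out, which is why the paper pairs bias detection with transparency, so that others can challenge the chosen definition of fairness.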
AI has become a focus for consumers, institutions and regulators globally. In the UK, the Office for AI and the AI Council have been established. In Europe, the European Commission has developed its 'Ethics guidelines for trustworthy AI'. As new capabilities and models emerge, it is important for institutions to understand how they can take advantage of each new development. However, while the pressure to stay ahead has never been greater, it is critical that this is done responsibly. Consumers are increasingly mindful of the security of their data and how it is used, and institutions are aware that new capabilities can also create new liabilities.