Yesterday at CMS we had an exciting breakfast seminar to learn about a new category of technology, "ethical AI technology". One example is technology for translating neural networks into other forms of machine learning data structure that generate human-understandable explanations for the decisions they compute. Another is technology for storing the state of an autonomous agent in a tamper-proof and deception-proof way, so that in the event of harm the stored state can be accessed to help determine accountability. A third is technology for bias checking, which tests whether a prediction computed by a machine learning system exhibits counterfactual fairness.
Researchers from the Alan Turing Institute have published a research paper explaining how a causal model of a machine learning system can be created and then used to check counterfactual fairness. They explain that counterfactual fairness "captures the intuition that a decision is fair towards an individual if it is the same in (a) the actual world and (b) a counterfactual world where the individual belonged to a different demographic group".
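To make the idea concrete, here is a minimal sketch of how such a check might work. It uses a toy structural causal model in which an illustrative "talent" variable drives a score; the model, variable names, and thresholds are all assumptions for illustration, not taken from the researchers' paper. The check holds the exogenous variables fixed, switches only the protected demographic attribute, and tests whether the decision changes.

```python
import random

def fair_model(demographic, talent, noise):
    """Toy decision model (illustrative only): the score depends on an
    exogenous 'talent' variable plus noise, ignoring the demographic."""
    score = 2.0 * talent + noise
    return score >= 1.0  # e.g. loan approved?

def unfair_model(demographic, talent, noise):
    """A variant that lets the protected attribute leak into the score."""
    score = 2.0 * talent + 0.5 * demographic + noise
    return score >= 1.0

def is_counterfactually_fair(model, groups, trials=1000):
    """Sample exogenous variables, then check the decision is identical
    across counterfactual worlds that differ only in demographic group."""
    for _ in range(trials):
        talent = random.gauss(0.0, 1.0)   # exogenous background variable
        noise = random.gauss(0.0, 0.1)    # exogenous noise
        decisions = {model(g, talent, noise) for g in groups}
        if len(decisions) > 1:            # decision flipped between worlds
            return False
    return True
```

Run against the two toy models, `is_counterfactually_fair(fair_model, [0, 1])` returns `True`, while the unfair variant is overwhelmingly likely to be caught, because some sampled individuals sit close enough to the decision threshold that switching the demographic attribute flips the outcome.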
In the seminar we discussed futuristic scenarios, including an "ethical AI operating system" which uses formal logic to generate mathematical proofs that applications executing on the operating system are correctly applying the ethical values coded into it. We also discussed a future scenario of an AI service in the cloud with an integrated counterfactual fairness checker.
We were joined by speakers from the UK and European patent offices, who explained that these future types of technology are potentially protectable using patents, provided the requirements for patentability are met and none of the exclusions from patentability apply.