On 22 May 2019, 42 countries adopted the first-ever set of intergovernmental principles on the use of artificial intelligence (AI) at an OECD conference. The document sets out five value-based principles for the development of AI and five recommendations for public policy. In summary, it states that:

  • AI should benefit people and the planet and be designed in a way that respects the rule of law, human rights, democratic values and diversity;
  • AI systems should be transparent and safe, and their risks should be continually assessed and managed;
  • those developing, deploying or operating AI systems should be held accountable for their use of the technology.

The principles are not legally binding, but they should allow the OECD to monitor and compare the progress of the 42 signatories in their development and use of AI. The OECD has a history of producing policy documents that contribute to later national and international legislation. For example, its 1980 privacy guidelines, which highlighted the need to set limits on the collection and use of personal data, laid the groundwork that the European Union drew on for its General Data Protection Regulation (GDPR).

The release of the OECD principles on AI follows the publication in April this year by the European Commission’s High-Level Expert Group of its Ethics Guidelines for Trustworthy AI. In addition, just today (23 May), the International Technology Law Association (ITechLaw) published Responsible AI: A Global Policy Framework, a new book that offers an in-depth review and eight discussion principles as ethical guideposts to encourage the responsible development, deployment and use of artificial intelligence.

There is certainly a great deal of thought leadership under way on the ethical issues raised by AI.