AC Grayling CBE gave a very impressive lecture last week at Cogex about AI ethics. He spoke for almost an hour with no notes and without a single "um" or "er"!

Grayling argued that "what can be done will be done", and that AI killing machines will therefore be created. Drawing an analogy with the Antarctic Treaty of 1959, he argued that an international treaty on AI ethics is needed to deal with AI used as weapons of war.

However, his lecture was less clear on the topic of artificial general intelligence (AGI): technology that could successfully perform any intellectual task that a human being can. Such an artificial intelligence would be able to learn and, it is hypothesised, would quickly create other AGIs superior in intelligence to humans, at a point referred to as "the singularity". Grayling advocated an international treaty on AI ethics but was not clear about whether that treaty should include agreements on how AGI should be controlled once it is created.

Other philosophers, notably Nick Bostrom, have argued that it will not be possible for humans to control AGI machines that are superior to humans. Any international treaty on AI ethics should therefore treat AGI as something that will happen in the future. I think we cannot know at this point how AGI will be accepted in our societies, so any international treaty should make provision for future generations everywhere to receive detailed training in AI ethics. It should also provide for the treaty to be renewed and updated at regular intervals.