Researchers have recently identified seven principles of morality shared by all societies, which serve to ensure cooperation and collective problem solving. The seven are “help your family”, “help your group”, “return favours”, “be brave”, “defer to superiors”, “divide resources fairly” and “respect others’ property”.
If these principles are fundamental to the human moral code across all societies, would it be sensible to use this moral code for groups of autonomous AI agents? In the future we will need to design, and possibly regulate, AI agents according to a defined moral code. As humans, we are perhaps biased towards using our own moral code. But it seems to me that there is no easy, direct translation from human moral codes to machine moral codes. And many may argue that human moral codes are flawed, since human societies exhibit many negative behaviours.
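To make the idea of "a defined moral code for AI agents" concrete, here is a purely hypothetical sketch of how the seven principles might be encoded as a rule set that an agent's proposed actions are checked against. Everything here is an invented illustration, not an implementation from the report: the rule names mirror the list above, while the `Action` structure, the scoring scheme, and the `permitted` policy are assumptions made for the example.

```python
from dataclasses import dataclass, field

# The seven cooperative principles from the article, as named rules.
SEVEN_RULES = {
    "help_your_family",
    "help_your_group",
    "return_favours",
    "be_brave",
    "defer_to_superiors",
    "divide_resources_fairly",
    "respect_others_property",
}

@dataclass
class Action:
    """A proposed agent action, tagged with the rules it upholds or violates.
    (Hypothetical structure for illustration only.)"""
    description: str
    upholds: set = field(default_factory=set)
    violates: set = field(default_factory=set)

def moral_score(action: Action) -> int:
    """Toy scoring: rules upheld minus rules violated. Unknown tags are rejected."""
    unknown = (action.upholds | action.violates) - SEVEN_RULES
    if unknown:
        raise ValueError(f"unknown rules: {unknown}")
    return len(action.upholds) - len(action.violates)

def permitted(action: Action) -> bool:
    """A naive policy: allow any action that violates no rule outright."""
    return not action.violates

# Example: a cooperative action versus an appropriative one.
share = Action("split surplus compute with peer agents",
               upholds={"divide_resources_fairly", "help_your_group"})
hoard = Action("seize another agent's storage quota",
               violates={"respect_others_property"})

print(moral_score(share), permitted(share))  # 2 True
print(moral_score(hoard), permitted(hoard))  # -1 False
```

Even this toy version exposes the translation problem discussed above: deciding which rules an action "upholds" or "violates" is exactly the hard part that the human moral code leaves to judgement, and that a machine encoding would have to make explicit.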
More research is needed in this field.
Dr Oliver Scott Curry and his team at the Institute of Cognitive and Evolutionary Anthropology have published a new report, “Is It Good to Cooperate?”.