Humans are already trusting AI. For example, in May 2018 the US FDA approved the first medical device that diagnoses disease without a doctor. So we need to engineer AI for accountability, as Dr Joanna Bryson explains in her thought-provoking article referenced below. Some (such as Jacob Turner in his recent book "Robot Rules: Regulating Artificial Intelligence") argue that AI deployments should be treated as legal entities, with a registration system to keep track of them. Joanna presents the opposite view: human characteristics, such as the suffering we feel when we lose status, liberty or property, are a key component in ensuring that AI deployments are safe and made to the highest standards, and she argues that these characteristics would be lost if AI deployments were treated as legal entities. These were interesting points to ponder in the small moments between speaking with potential entrants to the legal profession last night at the Legal Cheek event.
As Joanna puts it, no human should need to trust an AI system, because it is both possible and desirable to engineer AI for accountability.