I recently listened to a debate between Nick Bostrom and three AI researchers about the risks that come with increasingly capable artificial intelligence and how to deal with them. There was disagreement about whether the journey to superintelligence will be a gradual transition or an exponential explosion.

Nick Bostrom argues that once artificial general intelligence arrives, it will have the ability to improve upon itself and will therefore advance rapidly, in an exponential explosion, to reach superintelligence. Others in the debate argued that there is no evidence of such an exponential explosion in other technologies, and that a gradual transition is therefore the most likely outcome.

It was also argued that because there is at present no road map for the journey to artificial general intelligence, it is unlikely to be reached quickly. The specialist artificial intelligence we have now, it was argued, will improve significantly before artificial general intelligence is invented.

The members of the panel also disagreed about whether superintelligence will have a "will to power". Nick Bostrom argues that superintelligence will have such a will and is therefore likely to take over the universe! The other panel members were not convinced this will happen. However, superintelligence is not likely to arrive until far into the future...