Dr. Amnon Eden, who holds a PhD in software design and architecture, has given a lot of attention to AI recently. His more recent works include "The Disruptive Potential of Artificial Intelligence" (2015) and "Singularity Hypotheses: A Scientific and Philosophical Assessment" (2013). He is also part of the Sapience Project, a think-tank formed to examine the potentially disruptive impact of AI. So it's safe to say he knows what he is talking about.
He recently said that "more needs to be done to look at the risks of continuing towards an AI world". Though many are excited about cutting-edge AI technologies, many academics are wary of what they could mean. Science fiction may have all but beaten the subject of AI to death, but the real world has merely cracked the surface.
Oxford Professor Nick Bostrom agrees with Dr. Eden, saying there is the potential that AI may "advance to the point where its goals are not compatible with that of humans".
So what happens when something is no longer compatible? Well, it can't be good. Even worse, there is little oversight of what is being done in the advancement of AI, which means there are essentially no policies, little to no regulation, and little to no government involvement on the subject.
Bostrom gives an example, to help put this argument into context:
"A basic example of this would be that some supercomputer is asked to make people healthier.
"If allowed to make its own decisions, it may decide that we need to be a bit hardier and so it goes into where computing is basing itself now - ‘The Cloud’ - finds all of the central heating controls across the world and turns them all off.
"Now, would it do this to protect us or could it get to the stage where the computer is trying to crush the human race?"
Where do you stand on the battle for/against AI?