Now, I had certainly heard of the Singularity before, and had seen the headlines where Musk warned that we might be "summoning the demon" with our research into AI, but these two articles do a compelling job of laying out the potential future of AI, along with its possible benefits and risks.
I have long considered myself a sort of techno-optimist, and what little thought I had previously given to AI ran along the lines of, "the Singularity sounds like it could fix a lot of our ills." But after reading the linked articles, it is very easy to imagine something like the described unfriendly AI, Turry, being the end of humanity. If nothing else, these articles have probably ruined rogue-AI sci-fi for me; as a species, I don't see us fighting back against a rogue AI as in The Terminator or The Matrix.
A planet-wide AI might also explain why we haven't detected other alien intelligences; perhaps the universe is filled with planetary AIs that are busy chatting away (or printing thank-you notes, à la Turry), communicating in some form we can't even comprehend.
I don't really know what debate can be had on the subject. I certainly don't see us even considering a halt to AI research, and it seems doubtful that any system could be put in place that would eliminate the risk of unfriendly AI. That seems to leave us with three possibilities:
Either super AI is not possible, in which case no worries;
Or it is, and we get lucky and thread the needle to an "oracle-like," friendly Singularity;
Or we are screwed.