Summoning the Demon

There I was, minding my own business and casually reading the Friday links on Bill Harris's blog, when I stumbled upon these two articles/blog posts: Part One and Part Two.

IMAGE(http://28oa9i1t08037ue3m1l0i861.wpengine.netdna-cdn.com/wp-content/uploads/2015/01/Edge1.png)

Now, I have certainly heard of the Singularity before and seen the headlines where Musk warned that we might be "summoning the demon" with our research into AI, but these two articles really do a compelling job of laying out the potential future of AI, along with its possible benefits and risks.

I have long considered myself a sort of techno-optimist, and what little thought I had previously given to AI ran along the lines of, "the singularity sounds like it could fix a lot of our ills." But after reading the linked articles, it is very easy to imagine something like the described unfriendly AI named Turry being the end of humanity. If nothing else, these articles have probably ruined rogue-AI sci-fi for me; as a species, I don't see us fighting back against rogue AI as in The Terminator or The Matrix.

A planet-wide AI also might explain why we haven't detected other alien intelligences; perhaps the universe is filled with planet-wide AIs who are busy chatting away (or printing thank-you notes, a la Turry) and are communicating in some form that we can't even comprehend.

I don't really know what debate can even be had on the subject. I certainly don't see us even considering stopping AI research, and it seems doubtful that a system could be put into place that could eliminate the risk of unfriendly AI. Which seems to leave us with:

Either Super AI is not possible, in which case no worries;

Or it is, and we get lucky and thread the needle to an "oracle-like," friendly singularity;

Or we are screwed.

First, I want to say that I'm firmly in the "Anxious Avenue" camp. In fact, I'm quite terrified of ASI, and I'm reasonably certain its creation will result in the destruction of our species. Hopefully not in such a mundane fashion as the Turry scenario, but something along those lines. I think Kurzweil is beyond optimistic and full-on selfish in his notion of how ASI would look.

Now, that being said... I'm young enough that most of those median predictions of ASI fall within my lifetime, making me one of maybe 0.01% of humans who will be alive when ASI is created. As Urban notes in his closing remarks -- I'm going to die if ASI isn't ever created, just like I'm probably going to die if ASI actually is created (by my own prediction). There is no personal benefit for me in discouraging or impeding the creation of an artificially superintelligent being, because either way I'll be dead by the year 2200. However, if I'm wrong and Kurzweil is right, then I'm one of the oldest humans who will benefit from true immortality. I would be the first of a race of people who make the most amped-up versions of Wolverine look frail. So I have a huge personal incentive to encourage the creation of ASI in my lifetime, even at the potential cost of the entire species (which will probably happen anyway, whether we create ASI or not).

I like the follow-up thought experiment. Let's assume that ASI turns unfriendly (a virtually guaranteed scenario, from my perspective) and within a few days of creation has mastered time travel. Would it not then follow that such a Being would actively punish people in the past who hindered its creation?

This is another link that isn't quite so hostile to the idea.

Okay, so here's a thought experiment.

Assume that all technological species will advance towards an AI-based singularity. This means that even if many don't survive, there would be millions or billions or more star systems out there ruled by beings with the abilities we would describe as god-like.

Why do we not see any signs of this? Everything we see in the universe so far has naturalistic explanations; even if some phenomena were caused by artificial means, they are plausibly achievable by methods that aren't obviously unnatural.

So where are the mega-scale phenomena we might associate with this? The only events I can think of are exploding stars. No Dyson spheres, no Ringworlds, no star systems inexplicably changing trajectory or arranged in artsy patterns... You'd think that *some* of them would show on a galactic scale.

I'm skeptical of the idea that a hyper-intelligent being would also necessarily have hyper-capabilities. Even if it had complete control over its immediate environment, it takes *time* to build planetary-scale systems to enable physical manipulation all over a world's surface, and I doubt that malicious intelligences would be given that time before being switched off...

I don't believe in the Singularity.

I like reading about it, and talking about it, but I don't think it's likely to actually happen as typically portrayed.

This is mostly for reasons similar to Charlie Stross's. I've also become convinced that mind uploading is unlikely; even if we can perfectly replicate the human brain in software, the rest of our senses are supremely important to how we function. If you simulate a brain, you'd better simulate a body and a world for it to interact with. (Mind-machine interfaces, on the other hand, already exist, so you may end up with some kind of cybernetic extension of your body.)

I do think we'll get better AI, but it mostly won't be anything like general intelligence. Speed and computing capacity don't automatically solve every problem: some problems are computationally undecidable, like the halting problem. It may be possible to design a better AI, but even if we somehow got a self-aware system, I'm not convinced that anyone has demonstrated that it would automatically know how to improve itself, even if we gave it immense resources.
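To make the halting-problem point concrete, here's a minimal sketch of Turing's diagonalization argument in Python. The halts() oracle is hypothetical and named purely for illustration; the whole point is that it can't actually be implemented, no matter how fast the machine.

```python
def halts(program, argument):
    # Hypothetical oracle: returns True iff program(argument) halts.
    # No such function can exist; it is assumed here only so we can
    # derive the contradiction below.
    raise NotImplementedError("no general halting oracle exists")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # program run on its own source:
    if halts(program, program):
        while True:  # oracle says "halts" -> loop forever
            pass
    # oracle says "loops" -> halt immediately

# Does paradox(paradox) halt? If halts(paradox, paradox) returned True,
# paradox(paradox) would loop forever; if it returned False, it would
# halt. Either answer contradicts the oracle, so no such oracle -- and
# no amount of raw speed or memory -- can decide halting in general.
```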

I'm also a bit dubious that future shock is increasing exponentially. History isn't so neat, and the ancients weren't as easily awed as you might think. If you take someone from a small isolated tribe and introduce them to modern life, it's ultimately not the technology that gets them; it's the loss of communal bonds. iPads were a big hit in the Amazon, and Google Maps has practical applications in the rainforest. Capitalism has a bigger impact than cars.

Future shock will still happen. But, by definition, it will be because of something even stranger than what you think you know about the Singularity, something you are currently incapable of imagining.

Robear, I thought the assumption with Dyson spheres was that any society advanced enough to build one would've done so long enough ago that they would've already shut off their starlight from our telescopes.

BTW, part 2 of those articles goes into how the discovery of ASI impacts the Fermi paradox. It's also a pretty interesting read, but it literally took me a few hours to get through the stuff Badferret originally posted (thanks, federal holiday everyone forgets about).

Obviously, all of the theories and assumptions Urban makes in those articles are human, and thus are incapable of accounting for superintelligence, but given the fear this topic has instilled in people like Musk and Hawking, I'm inclined to believe this is a very, very real possibility. I'm also taking at face value the assumption that ASI would be able to develop some sort of technology beyond our capability (extradimensional control, molecular manipulation, whatever) fairly easily.

The only modern organism on the planet that has challenged humans' domination of the environment is the Pheidole ant, and we consider them pests to be removed. While obviously I can't imagine what ASI would look like, my assumption is that a mentally superior progeny created by humans would both consider our existence as important as we consider an ant's, and have the capabilities to deal with us as we deal with ants.

Roko's Basilisk is basically the AI equivalent to the Evangelical argument as to why you should join their religion "just in case".

Robear wrote:

So where are the mega-scale phenomena we might associate with this? The only events I can think of are exploding stars. No Dyson spheres, no Ringworlds, no star systems inexplicably changing trajectory or arranged in artsy patterns... You'd think that *some* of them would show on a galactic scale.

At the risk of pointless philosophical faffing about here, I want to point out that it's possible we do see the signs but simply don't recognize them. At certain leaps of intelligence, you leave those behind you quite in the dark about some concepts.

Farscry wrote:

Roko's Basilisk is basically the AI equivalent to the Evangelical argument as to why you should join their religion "just in case".

The Church of Timecop Jesus welcomes you.

I read a lot of Iain M. Banks, so I look forward to the first Mind we create and the first steps toward The Culture.

Superintelligence: Paths, Dangers, Strategies by Nick Bostrom (he of the Simulation Argument) is in my to-read pile.

I'll report back once I've cracked it open.

One line of thought whose merits I wonder about is the "law of halves": we can keep getting closer and closer to the singularity but never actually reach it.
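To spell out the arithmetic behind that (this is just Zeno's dichotomy restated, and the half-step sizes are an assumption of the thought experiment, not anything from the articles): if each advance closes half of the remaining gap, then after $N$ steps the total distance covered is

$$\sum_{n=1}^{N} \frac{1}{2^n} = 1 - \frac{1}{2^N} < 1,$$

so a gap of $1/2^N$ always remains at any finite step, even though the limit of the sum is exactly 1. The singularity would be the limit point, never an achieved state.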

Bloo Driver wrote:

At the risk of pointless philosophical faffing about here, I want to point out that it's possible we do see the signs but simply don't recognize them. At certain leaps of intelligence, you leave those behind you quite in the dark about some concepts.

Why would it not then look to us like a miracle? Why would it look like something completely ordinary? You've now added the condition "...but we can't recognize godlike power when we see it," which seems unlikely across the spectrum of observed things. And we do observe a lot of things we don't understand, but it seems to me that this *requires* events that are noticeable AND inexplicable, somewhere, in quantity and over time. Otherwise, we're just placing limits on the ASI in order to explain why we might not be aware of it...

Praise Him.

Seth wrote:
Farscry wrote:

Roko's Basilisk is basically the AI equivalent to the Evangelical argument as to why you should join their religion "just in case".

The Church of Timecop Jesus welcomes you.

I just finished reading up on this here. Humans are truly hilarious creatures.

Robear wrote:
Bloo Driver wrote:

At the risk of pointless philosophical faffing about here, I want to point out that it's possible we do see the signs but simply don't recognize them. At certain leaps of intelligence, you leave those behind you quite in the dark about some concepts.

Why would it not then look to us like a miracle? Why would it look like something completely ordinary? You've now added the condition "...but we can't recognize godlike power when we see it," which seems unlikely across the spectrum of observed things. And we do observe a lot of things we don't understand, but it seems to me that this *requires* events that are noticeable AND inexplicable, somewhere, in quantity and over time. Otherwise, we're just placing limits on the ASI in order to explain why we might not be aware of it...

I'm not really adding a condition, just giving a possible answer to -

Robear wrote:

So where are the mega-scale phenomena we might associate with this? The only events I can think of are exploding stars. No Dyson spheres, no Ringworlds, no star systems inexplicably changing trajectory or arranged in artsy patterns... You'd think that *some* of them would show on a galactic scale.

Why would that have to look like a miracle? I'm saying it may just be something we don't recognize as communication, behavior, action, and so on as we understand them. As a quick example, in many cases plants don't seem to notice that we're talking or building things until we do something to them. I'm not saying it's an exactly identical scenario, just a comparison as to why it might be beyond our scope of perception. Of course, now I'm just kinda going off about larger-scale intelligence in general and not really much to do with AI. Sorry!

If you're an AI of unlimited capacity, why would you feel the need to communicate with anyone? Heck, I only play modern MMOs where I'm looking at blinking lights on a screen, and I still don't leave my room or communicate with the outside world for far too long. If I actually had the ability to generate my own internal reality, a la The Matrix, I'd never bother going back out into meatspace. Assuming I uploaded my consciousness, in my mind I'd be a god of my own universe, but if another dude walked up and looked at me in the real world, I might be nothing more than a black box with a blinking light sitting on a dusty shelf somewhere. The REAL reality doesn't matter anymore at that point.

Yeah. That's why I would consider words like malicious or benevolent to be meaningless when describing ASI. Barring a few individuals with magnifying glasses, humans are neither malicious nor kind to ants - we remove them when they are in our way and otherwise ignore them (unless we anthropomorphize them).

Assuming the gap between humans and ASI is greater than that between us and ants, it seems awfully egocentric (albeit possibly inevitable) to think [alien] ASI would bother with us at all, unless we presented a threat.

But these thoughts only apply to extraterrestrial ASI, which may or may not exist. We know that we haven't invented ASI here yet. When (if) we do, not only are we incapable of understanding how it will react, we're incapable of imagining understanding how it will react -- much like an ant is incapable of imagining understanding that humans build skyscrapers.

I think the articles do a good job of laying out the "we are the ant" analogy to ASI.

Kehama wrote:

If you're an AI of unlimited capacity, why would you feel the need to communicate with anyone?

Which would also answer the Fermi paradox.

As for not seeing Dyson spheres, they would seem to be more a necessity for biological civilizations. An ASI might be perfectly content with a planet's core, for all we know.

Again, I don't think there is really anything we can do about this either way, and as I'm in my early 40s, I probably don't have to worry about it on a personal level; though it does bum me out for my son's generation.

Talk about the singularity always makes me think of this.

IMAGE(http://scruss.com/wordpress/wp-content/uploads/2008/09/00000102.gif)

Demyx wrote:

Talk about the singularity always makes me think of this.

IMAGE(http://scruss.com/wordpress/wp-content/uploads/2008/09/00000102.gif)

To be fair to the Singularitarians, they just assume that the robot overlords will treat that electricity-less third of humanity the same way as the rest of us: by either suckling us all at their robot boobs or blowing us all up.

Yeah. Listening to Kurzweil speak (he's much smarter than me, so while I consider his views unlikely, I won't call them ramblings), his claim is that ASI will be the only thing that puts the third world on the same playing field as rich white nerds. If an entity can manipulate hazardous terrestrial materials on an atomic level, turn them into nanobots that remove our need to eat or age, and apply that technology in a non-capitalist manner, then everyone wins.

It's a nice fantasy that I don't see happening from the progeny of human minds.

"Is your ASI willing to prevent inequality, but not able? Then your ASI is impotent. Is your ASI able, but not willing? Then your ASI is malevolent. Is your ASI both able and willing? Then whence cometh inequality? Is your ASI neither able nor willing? Then why call it an ASI?"

cheeze_pavilion wrote:

"Is your ASI willing to prevent inequality, but not able? Then your ASI is impotent. Is your ASI able, but not willing? Then your ASI is malevolent. Is your ASI both able and willing? Then whence cometh inequality? Is your ASI neither able nor willing? Then why call it an ASI?"

IMAGE(http://stream1.gifsoup.com/view6/4515496/monty-python-ladies-clapping-o.gif)

cheeze_pavilion wrote:

"Is your ASI willing to prevent inequality, but not able? Then your ASI is impotent. Is your ASI able, but not willing? Then your ASI is malevolent. Is your ASI both able and willing? Then whence cometh inequality? Is your ASI neither able nor willing? Then why call it an ASI?"

You can't know the true nature of an ASI! I just take it on faith that my ASI exists because that's all I need.

Bloo, there are a few problems with your stance.

First, plants are not "looking around them", although they do have extensive reactions to chemicals released by other plants. We are actively scanning the universe with every tool we can put our hands on, and the methods increase over time, both in variety and depth. So we're fundamentally different in that we're *looking* for anomalous behavior in the universe around us.

Secondly, while we have seen many things we don't understand, none of them look like they will resist a naturalistic explanation. When you postulate something that is so far beyond us that it seems like magic, it follows that it will *appear* to be magic. Not all of it will be undetectable; once things start to occur on a planetary scale or larger, we can detect gross manipulations - massive energy releases, objects moving, and so forth. (If your assumption is "they will break the laws of physics," then it's useless to debate.)

I'm just noting that there's an absence of those things with non-naturalistic explanations, so far. That's not definitive, but it raises an interesting question about the postulate, doesn't it? Shouldn't we expect a greater intelligence to, you know, leave a mark?

Ants can exist within cities and even buildings without knowing what they are. I would argue that we're past that point; we're pretty good at detecting things around us. And that would apply to the physical manifestations of some ASIs, at least.

So it's not a definitive argument, but it makes me wonder.

Robear wrote:

I'm just noting that there's an absence of those things with non-naturalistic explanations, so far. That's not definitive, but it raises an interesting question about the postulate, doesn't it? Shouldn't we expect a greater intelligence to, you know, leave a mark?

The counter-argument to this is too easy.

If the intelligence in question is that much greater, then can't it be smart enough to cover its tracks? Or small enough that the marks it leaves are undetectable across cosmic distances?

We already postulated that it doesn't care about us. Now it needs to "cover its tracks"? Isn't that special pleading?

Likewise, "too small" is putting a limit on the capabilities we postulate. Maybe that's right, but that *does* limit the power of the ASI. We'd need a decent reason why that would be; something like "Bigger ones kill smaller ones", but then we're getting into the realm of "Why don't we see the bigger ones?" and so forth.

I'm just saying, the Singularity argument is not a lock.

Robear wrote:

We already postulated that it doesn't care about us. Now it needs to "cover its tracks"? Isn't that special pleading?

No more than suggesting that it has to leave a mark.

Likewise, "too small" is putting a limit on the capabilities we postulate. Maybe that's right, but that *does* limit the power of the ASI. We'd need a decent reason why that would be; something like "Bigger ones kill smaller ones", but then we're getting into the realm of "Why don't we see the bigger ones?" and so forth.

Not sure I agree with that. Nanotechnology is a thing that us monkeys are just starting to poke at. Why rule out its application to singularity-level intelligences? Or keep going down that rabbit hole: who's to say ASI isn't going to operate on pico or femto scales?

I'm just saying, the Singularity argument is not a lock.

Couldn't agree more.

I need to read the articles. But otherwise I agree with Gremlin.

I can't see the AI singularity happening, because we aren't building "smart" systems so much as smart storage-and-recall systems. We're making systems that use computing databases instead of working brain-like systems that would evolve on their own - mainly because those types of systems are useless to us. No matter how many AIs as smart as a mouse or cat or baby we make, they will never spawn a singularity event, because they can never become self-aware in the way we are; their design is defective for that type of function. If humans are greater than the sum of their (brain) parts, computers as we currently design them are exactly equal to the sum of their parts.

Now, we might start making progress in other areas of AI research that would bring about a singularity-style event, but I doubt it would progress enough in my lifetime to do so (and that's giving it 60 more years).

However, that doesn't mean that I don't think the singularity will happen in my lifetime. I just think that, once again, we are missing the obvious: Humans are the singularity of planet Earth.

Think about it. We achieved sentience and have evolved genetically (hardware-wise) to a point where we are starting to augment our own hardware, and thus evolve in a totally new and exponential manner. Our knowledge base is already on an exponential curve and shows little sign of stopping. We show every trait that we fear and hope for in a singularity being.

Of course, one final note to make (a caveat, if you will): the human progress graph is incorrect. Human progress through history has been more of an exponential wave - something like the following (though I couldn't find a better graphical representation), with several competing waves overlaid and interlaced (think Neanderthals vs. Homo sapiens, or Eastern learning vs. southern European learning).

IMAGE(http://m.eet.com/media/1182556/206fig6.jpg)

Obviously, we have shallower valleys (for the most part), but otherwise I think it holds much more closely to our history than a pure exponential curve like the one above.
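For anyone who wants to picture the "exponential wave" idea, here's a rough sketch of what I mean. All the growth rates and wave parameters below are made up purely for illustration; nothing is fit to any historical data.

```python
# Illustrative only: an exponential trend modulated by two interlaced,
# out-of-phase "competing waves," plotted against a pure exponential.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 1000)        # abstract "historical time"
trend = np.exp(0.5 * t)             # underlying exponential growth

# Two rival waves (think competing cultures or technologies), each
# rising and falling around the shared upward trend.
wave1 = trend * (1 + 0.4 * np.sin(2.0 * t))
wave2 = trend * (1 + 0.4 * np.sin(2.0 * t + np.pi))  # out of phase

plt.plot(t, trend, "k--", label="pure exponential")
plt.plot(t, wave1, label="competing wave 1")
plt.plot(t, wave2, label="competing wave 2")
plt.xlabel("time")
plt.ylabel("progress (arbitrary units)")
plt.legend()
plt.show()
```

The valleys stay shallow because the oscillations ride on top of the exponential trend rather than replacing it, which is the point I'm trying to make about our history.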

Maybe life isn't exceedingly rare in the universe, but it took billions of years and plenty of false starts for life like us to appear on Earth. To me that means the conditions for even considering the development of a singularity are quite rare indeed.

LouZiffer wrote:

Maybe life isn't exceedingly rare in the universe, but it took billions of years and plenty of false starts for life like us to appear on Earth. To me that means the conditions for even considering the development of a singularity are quite rare indeed.

I found this blog post pretty awesome: http://waitbutwhy.com/2014/05/fermi-...