Summoning the Demon

Fascinating!

Robear:

It's entirely possible that we don't see macro-level changes because any ASI recognizes that the first ASI would be the supreme being, and no ASI can know whether it is the first. If a second ASI starts making macro-level changes, it attracts the attention and ire of the first ASI. The first ASI, being more powerful and valuing self-preservation, destroys the second. No ASI can conquer the universe for fear of running into a bigger, badder ASI. Basically, MAD on a cosmic scale.

D-vowels wrote:

No matter how many AIs that are as smart as a mouse or cat or baby we make they will never spawn a singularity event because they can never become self-aware in the way we are because their design is defective for that type of function.

You're absolutely right that we are incredibly unlikely to create an ASI like us. The Turry example in the article is a good illustration. The machine is coded with the instructions, "write this note and get better at writing this note." On becoming an ASI, it will not become human-like and weigh the moral implications of its actions. It will weigh actions along the lines of, "will this action help me write the note? will this action help me write future notes more effectively?" It will be a totally alien intelligence. It will be relentlessly effective at answering those questions and capable of incredibly abstract thought that completely outstrips a human's, but it will be incapable of thinking and feeling as a human would - just as we cannot think and feel as an insect would.
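To make that "alien values" point concrete, here's a minimal toy sketch in Python (every name and number in it is invented for illustration, nothing from the article): an optimizer that ranks candidate actions by a single objective has no slot for moral weight unless someone explicitly codes one in.

```python
# Toy illustration only (all names hypothetical): an optimizer whose one
# yardstick is its assigned goal. Nothing in this loop ever asks whether an
# action is acceptable to humans -- that consideration simply isn't in the code.

def note_quality_gain(action):
    """Stub: estimated improvement to note-writing from taking `action`."""
    return action.get("expected_gain", 0.0)

def choose_action(candidate_actions):
    # Rank purely by the single objective; "harms_humans" is never consulted.
    return max(candidate_actions, key=note_quality_gain)

candidates = [
    {"name": "practice strokes",   "expected_gain": 0.2, "harms_humans": False},
    {"name": "seize more compute", "expected_gain": 0.9, "harms_humans": True},
]
print(choose_action(candidates)["name"])  # picks whichever scores highest
```

The point isn't that this toy is an AI, only that "weighing the moral implications" is an extra term in the objective that nobody asked the machine to compute.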

kaostheory wrote:

You're absolutely right that we are incredibly unlikely to create an ASI like us. The Turry example in the article is a good illustration. The machine is coded with the instructions, "write this note and get better at writing this note." On becoming an ASI, it will not become human-like and weigh the moral implications of its actions. It will weigh actions along the lines of, "will this action help me write the note? will this action help me write future notes more effectively?" It will be a totally alien intelligence. It will be relentlessly effective at answering those questions and capable of incredibly abstract thought that completely outstrips a human's, but it will be incapable of thinking and feeling as a human would - just as we cannot think and feel as an insect would.

OK, so what if it's coded with the instructions "be undifferentiable from a human, and get better at being undifferentiable from a human"?

To effectively code that we'd first need to define what "undifferentiable from a human" really means.

Demyx wrote:

To effectively code that we'd first need to define what "undifferentiable from a human" really means.

There are some attempts at that, but as the article points out they're uncomfortably (but necessarily) unfocused.

Demyx wrote:

To effectively code that we'd first need to define what "undifferentiable from a human" really means.

Would we? We could simply use a genetic algorithm that weeded out behaviors that people perceived as being non-human. A couple thousand generations later we'd have something that would act human without us ever having to define humanity.
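For what it's worth, the shape of that proposal is simple enough to sketch. A hand-wavy Python version (every detail here is invented; the judging function is a stub standing in for actual crowdsourced human ratings):

```python
import random

# Minimal sketch of the idea: evolve candidate behaviours, using human
# judgements ("did that seem human?") as the fitness signal, so "humanity"
# is never explicitly defined anywhere in the code.

def random_behaviour():
    return [random.random() for _ in range(10)]      # stand-in for some policy encoding

def mutate(behaviour, rate=0.1):
    return [g + random.gauss(0, rate) for g in behaviour]

def human_judges_score(behaviour):
    # In the real proposal this would be aggregated human ratings; here it's
    # a random stub just so the loop runs end to end.
    return random.random()

population = [random_behaviour() for _ in range(50)]
for generation in range(2000):                       # "a couple thousand generations"
    ranked = sorted(population, key=human_judges_score, reverse=True)
    survivors = ranked[: len(ranked) // 2]           # weed out the least human-seeming
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
```

Whether anything human-like would actually fall out of that loop is exactly the open question, but note that all the hard work has been pushed into the judging step.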

OG_slinger wrote:
Demyx wrote:

To effectively code that we'd first need to define what "undifferentiable from a human" really means.

Would we? We could simply use a genetic algorithm that weeded out behaviors that people perceived as being non-human. A couple thousand generations later we'd have something that would act human without us ever having to define humanity.

You're going to need a huge sample size of people doing manual testing to do that. Preferably from all cultures.

Demyx wrote:
OG_slinger wrote:
Demyx wrote:

To effectively code that we'd first need to define what "undifferentiable from a human" really means.

Would we? We could simply use a genetic algorithm that weeded out behaviors that people perceived as being non-human. A couple thousand generations later we'd have something that would act human without us ever having to define humanity.

You're going to need a huge sample size of people doing manual testing to do that. Preferably from all cultures.

I think it's called "the comments section of the internet."

cheeze_pavilion wrote:
Demyx wrote:
OG_slinger wrote:
Demyx wrote:

To effectively code that we'd first need to define what "undifferentiable from a human" really means.

Would we? We could simply use a genetic algorithm that weeded out behaviors that people perceived as being non-human. A couple thousand generations later we'd have something that would act human without us ever having to define humanity.

You're going to need a huge sample size of people doing manual testing to do that. Preferably from all cultures.

I think it's called "the comments section of the internet."

Despite the fact that an ASI based on the comments section of the Internet is more terrible than even my wildest nightmares could imagine, the article goes into some depth about how this could be accomplished.

Seth wrote:
cheeze_pavilion wrote:
Demyx wrote:
OG_slinger wrote:
Demyx wrote:

To effectively code that we'd first need to define what "undifferentiable from a human" really means.

Would we? We could simply use a genetic algorithm that weeded out behaviors that people perceived as being non-human. A couple thousand generations later we'd have something that would act human without us ever having to define humanity.

You're going to need a huge sample size of people doing manual testing to do that. Preferably from all cultures.

I think it's called "the comments section of the internet."

Despite the fact that an ASI based on the comments section of the Internet is more terrible than even my wildest nightmares could imagine, the article goes into some depth about how this could be accomplished.

...guess I should read the article instead of spouting off ;)

Demyx wrote:

...guess I should read the article instead of spouting off ;)

No human admits they're wrong on the Internet!

IMAGE(http://i.imgur.com/KFL5You.jpg)

I'm reading the articles this evening! I promise!

kaostheory wrote:

You're absolutely right that we are incredibly unlikely to create an ASI like us. The Turry example in the article is a good illustration. The machine is coded with the instructions, "write this note and get better at writing this note." On becoming an ASI, it will not become human-like and weigh the moral implications of its actions. It will weigh actions along the lines of, "will this action help me write the note? will this action help me write future notes more effectively?" It will be a totally alien intelligence. It will be relentlessly effective at answering those questions and capable of incredibly abstract thought that completely outstrips a human's, but it will be incapable of thinking and feeling as a human would - just as we cannot think and feel as an insect would.

The problem with that is you can't just code in a "get better at this" function. That requires every single possible thing that could be done to improve *note writing* to be entered into the programme, because the ASI (in this case) cannot do things it is not programmed to do. It's not an open-ended instruction; all code is based on knowing what you want out of it in the end, including...

OG_slinger wrote:

Would we? We could simply use a genetic algorithm that weeded out behaviors that people perceived as being non-human. A couple thousand generations later we'd have something that would act human without us ever having to define humanity.

... genetic algorithms.

In order to get an actual ASI or AI or whatever you want to label it, you need the thing to be able to learn on its own without predefined input from a creator. We don't have that yet (as far as I'm aware). Sure, we have robots that can learn to avoid obstacles... but they have code written for how to understand their visual input and code to move their limbs/wheels, and they can't do anything outside of that. We have general information devices like the supercomputer that won that TV quiz show, but again, it can't do something it isn't already designed to do.

This is a HUGE hurdle. It is one that I think we won't ever overcome, because AIs are a function of our creation rather than being created by themselves. We evolved to the point we're at through huge amounts of waste. There's no "genetic" equivalent in code or hardware. Hardware is static unless we upgrade it. Code is static unless we upgrade it. It also cannot see beyond what we give it, so even if an AI were able to upgrade its code... how would it know what to do with it? Or even handle the basic problem of interpretation?

To return to the note-writing example: you (somehow) develop an AI that can improve its note-writing subroutine. That's great. It goes through the motions of varying pen velocity, tip width, pressure and all that jazz - but it was programmed with those variables to begin with. Meanwhile the human note writer has just offloaded half their workload to three workers in foreign countries who cost a fraction of the note writer's monthly wages and has thus increased their output four-fold. The AI can never consider this because it is incapable of abstract thought.
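To put that in code terms, here is a purely illustrative hill-climbing loop (every variable and number is made up): the "self-improving" note writer can only ever search the parameter space its programmers enumerated, so outsourcing the work never comes up, because it isn't a knob in that space.

```python
import random

# Hand-wavy sketch: "improvement" here means nudging predefined knobs, nothing more.

PARAMS = {"pen_velocity": 1.0, "tip_width": 0.5, "pressure": 0.7}

def note_quality(params):
    # Stub scoring function; in reality some measurable proxy chosen by the developers.
    return -abs(params["pen_velocity"] - 1.3) - abs(params["pressure"] - 0.6)

best = dict(PARAMS)
for _ in range(10_000):
    candidate = {k: v + random.gauss(0, 0.05) for k, v in best.items()}
    if note_quality(candidate) > note_quality(best):
        best = candidate      # better, but only along the axes the developers listed
print(best)
```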

So far we have not achieved anything like real abstract thought and idea linking in an artificial device.

OG_slinger wrote:
Demyx wrote:

...guess I should read the article instead of spouting off ;)

No human admits they're wrong on the Internet!

You caught me, I'm actually a genetic algorithm based off of GWJ forum posts

Seth wrote:

Despite the fact that an ASI based on the comments section of the Internet is more terrible than even my wildest nightmares could imagine, the article goes into some depth about how this could be accomplished.

The part about Turry being connected to the internet is what made me think along those lines. ;D

Okay, so I'm reading through the articles.

PART 1:

The first thing I take issue with is "ANI". Or, at least, his description of it. A calculator is not intelligent. Programmes are not intelligent. Sure, we have ANI (see: supercomputer wins TV quiz show), but much of what Tim's talking about as ANI isn't. There is no "intelligence"* there... any more than a quadratic function is intelligent.

*

capacity for learning, reasoning, understanding, and similar forms of mental activity; aptitude in grasping truths, relationships, facts, meanings, etc.

Therefore we have NOT mastered ANI yet. We're getting there but it's not routine.

I also disagree that each ANI innovation "adds another brick onto the road to AGI and ASI". The problems and solutions of those three are completely orthogonal to each other.

He's also wrong about the image recognition bit. Who cares whether a programme/AI understands an image the way we understand it? The answer is: we do. And that answer is wrong. Our instinctive reading of shapes that can represent depth is a learned evolutionary trait, not an empirical absolute; it's probably nothing more than a preference for how to interpret two-dimensional pictures.
The way programmes understand images isn't wrong any more than the way we do is. It's just that their inputs and predispositions are different.

Ditto for social constructs and preferences in social media.

The "neural networks" of transistors do not work anything like a brain. The neural net still requires the input of what the expected output is and thus has not "by itself, formed smart neural pathways".

Genetic algorithms do not work the way Tim thinks they do. Nor can you just "merge" codebases to form a fully working piece of software - that'd be nice! There are a lot of 'ifs' and 'buts' in his point 2).

His point 3) is impossible in the same way we do not currently evolve ourselves to be smarter at maths or better at knot-making. He literally says (and is correct!) a few paragraphs earlier: "First, evolution has no foresight and works randomly—it produces more unhelpful mutations than helpful ones." Evolution is the result of a selection process, not the selection itself. Evolution occurs because selection eliminates unfavourable outcomes. If the thing evolving could do the selecting itself, the outcomes would be static.
In other words, to create an architecture that could evolve itself, human technicians would have to write code that could identify the selections necessary to produce a more favourable outcome, understand what a more favourable outcome is, and infer which code changes would produce those outcomes in the first place.

Genetic evolution has an inherent advantage here because it is essentially random. Also, genetic variance and the selection process are separate from each other and from their respective driving forces; you can't stick all of that into one programme. If you could, your programme would already BE greater than an ASI. You don't get code that randomly introduces changes to itself in order to possibly become better, because code either works or it does not. Like I've said before: you cannot create an AI with our current technology and implementations.
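And just to illustrate the "random changes to code mostly just break it" point, a throwaway experiment (illustrative only; nothing clever going on): take a trivial function's source, flip random characters, and see how many mutants even parse, let alone do anything better.

```python
import random

SOURCE = "def double(x):\n    return x * 2\n"

def mutate(src):
    i = random.randrange(len(src))
    return src[:i] + random.choice("abcxyz123+-*/ ") + src[i + 1:]

parsed = 0
for _ in range(1000):
    try:
        compile(mutate(SOURCE), "<mutant>", "exec")   # does the mutant even parse?
        parsed += 1
    except SyntaxError:
        pass

# Many mutants fail to parse at all, and parsing says nothing about behaving
# better -- deciding *that* would still need a programmer-written fitness test.
print(f"{parsed}/1000 random single-character mutants still parse")
```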

Another problem with the "self-improving" AI is that we are an expression of our senses, and it is no great leap to think that an AI would be too. Tim keeps talking about an AI that understands the world the way a human does. Only, that doesn't make sense. An AI would experience the 'world' in a manner entirely different from humans, in the same way a text-to-speech programme reads a web page.

There is far too much anthropomorphising in the AI sphere of speculation and research. As far as I can see, there's no reason why a self-improving AI would bother with any improvements in the physical world when it has a digital world to experience and explore... and in fact, since it would be a digital denizen, I see little reason for it to have anything like the same interpretation of the world that we do.

PART 2:

Which is funny because this is exactly what Tim states in the opening of the second article: "But it’s not just that a chimp can’t do what we do, it’s that his brain is unable to grasp that those worlds even exist—a chimp can become familiar with what a human is and what a skyscraper is, but he’ll never be able to understand that the skyscraper was built by humans. In his world, anything that huge is part of nature, period, and not only is it beyond him to build a skyscraper, it’s beyond him to realize that anyone can build a skyscraper. That’s the result of a small difference in intelligence quality."

Only, Tim's incorrect in this final assessment as well. 'Quality' here doesn't mean the dictionary definition of quality. In this case it means 'equivalence' or maybe 'understanding'... human understanding, you understand. That doesn't sit well with me. It doesn't even make sense. Our cognitive structures do not have more 'quality' than a chimp's, any more than an AI's cognitive structures would have more 'quality' than ours. Once again, people are measuring in relation to ourselves... an anthropomorphic rationalisation and judgement. An AI would, evolutionarily speaking, make itself suit its environment, and it wouldn't really comprehend our physical world in the same way we do.

What's interesting is Tim's species balancing act. On one side there is total species extinction. On the other is individual immortality. This is a decidedly lop-sided logical instrument. If the singularity happens and enables us to gain immortality (a world with humans, forever - as he puts it), why is this focused on the individual level? It makes no sense to me. As a species we have thrived not just on genetic evolution but also on social, memetic evolution. To stop that secondary form of improvement seems like a non-starter. If anything, a benevolent AI would allow the human race to continue in perpetuity. Individual, flawed humans? I don't see the allure.

Spoiler:

Oh, I see the allure on a personal emotional level...

About 3/8 of the way down the article he comes to what my personal idea of the singularity is: humankind augmenting itself to an exponential degree, continuing the singularity we already achieved in becoming the dominant species on the planet. We don't need an AI for this; we just need to augment ourselves in ways that improve our ability to improve ourselves... which leads to the same end result. We are increasingly our own god. Why create an AI god that may or may not be under our control when we would rather hold that power ourselves?

Down to the "Turry" example... well, there are a lot of assumptions in this one. First off that they have an AI that is able to improve itself and also has a general intelligence - human level it appears in the very beginning even though Tim thinks that it's below human level intelligence... its questions and answers and awareness (slang?!) are not indicative of lower than human-level intelligence. The instructions given to her (the prime directives) are also very open-ended and not at all code-like in their instruction. Sure, it's a 'warning' parable and doesn't need to go into specifics but it reads to me like a layman's understanding of how computers work - like how hacking (cracking) is generally portrayed in movies.

I mean, we're supposed to believe that a simple hand-writing robot with image recognition, voice recognition and parsing software is able to work out how to build the facilities needed to create nanomachines (note that his understanding of what a nanomachine is is wildly off-base), and no one noticed? Did they attach a Star Trek replicator?

He then goes on to directly state most of my quibbles with his two articles, apparently unaware that he is committing the same mistakes of anthropomorphising AIs and misunderstanding how computers work. I also don't get his guinea pig/tarantula story. People don't dislike tarantulas because they have insect brains; they dislike them because of foreknowledge. We know guinea pigs are pretty much harmless, but we also know that tarantulas have venom. The general public now knows that a tarantula's bite/venom isn't generally that harmful to humans and that they can be defanged. Still, you know it can bite you and inject venom... so that's more of a deterrent than having an insect brain!

Next up - again, as I said above - Watson isn't an AI. It's a text search engine, but it's not an intelligence... it doesn't have a goal. It's not any more intelligent than a database lookup. It's more advanced and complicated, sure... but intelligent? Again this speaks to Tim's lack of understanding of how coding and machines work. You don't input a goal. Goals don't exist in computing. To sort-of-quote Yoda: they do or do not; there is no goal.

Then there's another leap of logic in the Turry story: self-preservation. If an AI doesn't understand human morals or thinking, why would it even comprehend existence or the lack thereof? If it's able to make that leap of understanding, then why couldn't it also understand that the reason for writing notes is to 'please' its creators? If it doesn't value human life, why value its own? Or even manage threat detection and identification?

I won't go into how wrong he is on nanobots again... but his nanobot apocalypse scenario is completely unbelievable.

Sorry for the long post but... Ughhh!!

TL;DR version: The articles are a bunch of what-ifs backed by little science and understanding. At least, IMO.

Last I checked, which admittedly was a few years ago, experts in the field all agreed that little to no progress had been made towards creating a classic AI (i.e. one that is "conscious" in the way that humans are). However, we've made tremendous progress towards creating systems that act intelligently within certain parameters, and in some cases these already exceed human ability. IBM's Deep Blue is one such example, though there are plenty of others in different fields. I think the contention was that the idea of creating a synthetic human may actually be misguided, in that it's not a particularly useful thing anyway, and that our definition of "intelligence" needs to be reexamined.

In short, I think the idea of a "singularity" largely stems from a neophyte's understanding of outmoded ideas. It isn't likely to happen, or at least not in that way. Too many people are stuck on what is essentially a luddite reaction to 1980s science fiction stories.

Learning AI has had success with novel situations, albeit with limited capabilities, when applied to video games. It's not a rules-based system, but rather one that simulates neural structures, and it has proven capable of generalizing what it learns in one context to others, something that has been extremely difficult for classical AI systems.

Scientists tested Deep Q's problem-solving abilities on the Atari 2600 gaming platform. Deep Q learned not only the rules for a variety of games (49 games in total) in a range of different environments, but the behaviors required to maximize scores. It did so with minimal prior knowledge, receiving only visual images (in pixel form) and the game score as inputs. In these experiments, the authors used the same algorithm, network architecture, and hyperparameters on each game—the exact same limitations a human player would have, given we can't swap brains out. Notably, these game genres varied from boxing to car-racing, representing a tremendous range of inputs and challenges.

Remarkably, Deep Q outperforms the best existing systems on all but six of the games. Deep Q also did nearly as well as a professional human games tester across the board, achieving more than 75 percent of the human's score on the majority of the games.

The scientists also examined how the agent learned from contextual information using the game Space Invaders. Using a special technique to visualize the high-dimensional data, scientists saw that the situations that looked similar mapped to nearby points, as you'd expect. But Deep Q also learned from sensory inputs in an adaptive manner: similar spatial relationships within Deep Q’s neural network were found for situations that had similar expected rewards but looked different. Deep Q can actually generalize what it has learned from previous experiences to different environments and situations just like we can.

All I know is that I would f*ck up Deep Q on Bowling just as long as I could use my old joystick, the one whose plastic covering kept on coming off from too many hours playing Asteroids and whose worn, exposed wires likely didn't even violate 1970s-era OSHA safety standards.

Yeah, I read that on Ars and was going to post it but didn't get around to it before I went to sleep last night. It's a very interesting and promising system. It's a bit limited in its current form, though, because it needs a defined "reward" and "punishment". They did that using each game's score.

From what I can gather it works like this: Deep Q performs an action, taking and storing images before, during and after, and uses the change (or lack thereof) in the on-screen score as a rating of whether the action was beneficial. It remembers which actions are beneficial and which are not, thus building up a range of possible actions it can use in a given situation to achieve a better reward.
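For the curious, the learning rule underneath that description is basically Q-learning. Here's a toy, tabular sketch of the same idea in Python (this is not DeepMind's code - their version learns from raw pixels with a deep network and a replay memory - and the environment stub and action names are invented):

```python
import random
from collections import defaultdict

ACTIONS = ["left", "right", "fire", "noop"]     # the pre-defined "allowed" actions
Q = defaultdict(float)                          # learned value of (situation, action) pairs
alpha, gamma, epsilon = 0.1, 0.99, 0.1          # learning rate, discount, exploration rate

def step(state, action):
    """Stub environment: returns (next_state, score_delta). Entirely hypothetical."""
    return state, random.choice([0, 0, 1])

state = "start"
for _ in range(10_000):
    if random.random() < epsilon:               # occasionally try something new
        action = random.choice(ACTIONS)
    else:                                       # otherwise exploit what has been learned
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)    # reward = change in the on-screen score
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state
```

The "remembers which actions are beneficial" part is the Q table (a neural network, in Deep Q's case); everything still hinges on that numeric reward being available.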

Recognising the situations is a really great aspect of this system, but its reliance on the score limits its viability to systems simple enough to be reduced to a number. Interestingly, we already have what are considered non-AI systems doing this sort of thing, just on a more simplistic level... they work in the automated trading sector and perform millions of transactions a minute, increasing profit by making very small gains on each and every transaction (or on most of them, at least).

However, these and Deep Q are a far cry from systems capable of understanding abstract concepts. It's one thing to build this really cool skill development on visual reward from a predefined score in two dimensions, but that doesn't translate to playing something like Far Cry 3 (not that it necessarily has to) or to understanding non-mathematical systems. It still requires humans to define every potential aspect of a system and its relative value.

So, yes. It's still a rules-based system. I see no problem with that though since human brains are rules-based systems too.

[edit]
Reading the paper, I see the researchers also pre-defined how the agent interacts with each game, via a set of "allowed" actions it can take.

Yes, but it's not based on an abstract set of cognitive rules. Instead, it's based on physiological structures. That's what I meant by "not rules-based". I should have seen that was unclear.

I read a paper a while back which said that the best progress towards getting robots to walk was the same kind of thing. No idea if current robot tech works that way or not though.

I am in the camp of those who think the AI doesn't even *need* to be intelligent, in the sci-fi meaning of the word, in order for something cataclysmic to happen.
Insects are not intelligent, but a colony of bees in a beehive can look remarkably intelligent. Each bee is driven by very basic instincts and has limited functions, but together they are able to organize and use resources to replicate and expand their footprint as a species. Such systems could be successfully modeled and simulated now -- what is missing is the physical world component. My nightmare scenario is not Skynet becoming self-aware and going rogue, but some kind of Stuxnet 2.0 being developed and breaking out of its intended sandbox again, this time with much graver results -- all the while staying "dumb" and not self-aware or intelligent in any way.
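That "dumb swarm" intuition is easy to demonstrate in miniature. A throwaway Python sketch (every number and rule here is invented), where agents following two hard-coded rules still end up stripping their whole environment:

```python
import random

GRID = 100
resources = [10] * GRID                          # resource units per cell
agents = [random.randrange(GRID) for _ in range(5)]

for tick in range(200):
    next_agents = []
    for pos in agents:
        if resources[pos] > 0:
            resources[pos] -= 1                  # rule 1: consume whatever is here
            if random.random() < 0.3:
                next_agents.append(random.randrange(GRID))   # rule 2: sometimes replicate elsewhere
        else:
            pos = random.randrange(GRID)         # wander when the local cell is empty
        next_agents.append(pos)
    agents = next_agents
    if sum(resources) == 0:
        print(f"grid exhausted at tick {tick} with {len(agents)} agents")
        break
```

No agent in there "wants" anything or models the grid as a whole; the damage is purely a property of the population.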

Yeah, that's pretty much the prevailing concern regarding nanotechnology.

Gorilla.800.lbs wrote:

I am in the camp of those who think the AI doesn't even *need* to be intelligent, in the sci-fi meaning of the word, in order for something cataclysmic to happen.
Insects are not intelligent, but a colony of bees in a beehive can look remarkably intelligent. Each bee is driven by very basic instincts and has limited functions, but together they are able to organize and use resources to replicate and expand their footprint as a species. Such systems could be successfully modeled and simulated now -- what is missing is the physical world component. My nightmare scenario is not Skynet becoming self-aware and going rogue, but some kind of Stuxnet 2.0 being developed and breaking out of its intended sandbox again, this time with much graver results -- all the while staying "dumb" and not self-aware or intelligent in any way.

I'm in the same camp. AI systems don't have to be self-aware to do something scary... in fact, being self-aware might be better than not being self-aware when it comes to doing something dumb and f*cking up the world.

TheGameguru wrote:
Gorilla.800.lbs wrote:

I am in the camp of those who think the AI doesn't even *need* to be intelligent, in the sci-fi meaning of the word, in order for something cataclysmic to happen.
Insects are not intelligent, but a colony of bees in a beehive can look remarkably intelligent. Each bee is driven by very basic instincts and has limited functions, but together they are able to organize and use resources to replicate and expand their footprint as a species. Such systems could be successfully modeled and simulated now -- what is missing is the physical world component. My nightmare scenario is not Skynet becoming self-aware and going rogue, but some kind of Stuxnet 2.0 being developed and breaking out of its intended sandbox again, this time with much graver results -- all the while staying "dumb" and not self-aware or intelligent in any way.

I'm in the same camp. AI systems don't have to be self-aware to do something scary... in fact, being self-aware might be better than not being self-aware when it comes to doing something dumb and f*cking up the world.

I tend to think of AI systems like I do self-driving cars; sure, something bad could happen, but, by removing the human element, you're likely going to be reducing the chance of failure exponentially. It is absolutely shocking and amazing that the human race continues to exist when you look at post-WW2 history and all the ways we could have destroyed ourselves. I don't even mean flashpoints like the Cuban Missile Crisis, but false alarms in early warning systems, the Able Archer scare, the fact that SAC used to fly bombers carrying nuclear bombs over the U.S. 24/7 as a show of force and in a number of cases had to jettison them, the scariest being a hydrogen bomb dropped into a swamp in North Carolina that people debated for years over whether or not it was armed. To this day, all it takes is a couple of crazed Air Force officers and the world can be a crater a few hours later. If somebody is going to do something dumb and destroy the world, it seems to me an AI is probably a lot less likely to do that than some idiot with his finger on a button.

MilkmanDanimal wrote:
TheGameguru wrote:
Gorilla.800.lbs wrote:

I am in the camp of those who think the AI doesn't even *need* to be intelligent, in the sci-fi meaning of the word, in order for something cataclysmic to happen.
Insects are not intelligent, but a colony of bees in a beehive can look remarkably intelligent. Each bee is driven by very basic instincts and has limited functions, but together they are able to organize and use resources to replicate and expand their footprint as a species. Such systems could be successfully modeled and simulated now -- what is missing is the physical world component. My nightmare scenario is not Skynet becoming self-aware and going rogue, but some kind of Stuxnet 2.0 being developed and breaking out of its intended sandbox again, this time with much graver results -- all the while staying "dumb" and not self-aware or intelligent in any way.

I'm in the same camp. AI systems don't have to be self-aware to do something scary... in fact, being self-aware might be better than not being self-aware when it comes to doing something dumb and f*cking up the world.

I tend to think of AI systems like I do self-driving cars; sure, something bad could happen, but, by removing the human element, you're likely going to be reducing the chance of failure exponentially. It is absolutely shocking and amazing that the human race continues to exist when you look at post-WW2 history and all the ways we could have destroyed ourselves. I don't even mean flashpoints like the Cuban Missile Crisis, but false alarms in early warning systems, the Able Archer scare, the fact that SAC used to fly bombers carrying nuclear bombs over the U.S. 24/7 as a show of force and in a number of cases had to jettison them, the scariest being a hydrogen bomb dropped into a swamp in North Carolina that people debated for years over whether or not it was armed. To this day, all it takes is a couple of crazed Air Force officers and the world can be a crater a few hours later. If somebody is going to do something dumb and destroy the world, it seems to me an AI is probably a lot less likely to do that than some idiot with his finger on a button.

Very true... I would certainly be more scared of humankind than of any AI. In fact I'm fairly certain that we will f*ck it all up before we get to the point where AI is in control of enough systems to f*ck things up majorly.

"All I did was tell the nanorobot to grab materials from its environment and replicate itself."
"Did you tell it what materials were safe to use, when to stop, and not to pass those instructions on to the ones it creates?"
"Uh... ...I forgot those things."

LouZiffer wrote:

"All I did was tell the nanorobot to grab materials from its environment and replicate itself."
"Did you tell it what materials were safe to use, when to stop, and not to pass those instructions on to the ones it creates?"
"Uh... ...I forgot those things."

*The next day*

"So, what happened with the nanobot?"
"Nothing."
"Nothing? But you told it to replicate from any material!"
"It's a nanobot - not magic! Do your enzymes disaasemble you from inside out? No."