Sci-Fi: are we ready for the next step?

Created from the anime thread where we started debating whether people are more accepting of a separated mind/body link thanks to anime like GITS or, in more popular culture, "The Matrix". I'll quote a few things to get us going.

Cayne wrote:
Hmmm, a good question. I watched the movie many years before I watched the anime... but the anime was fairly approachable now that we live in a post-Matrix world and androids are a sensible construct of our imagination.

hbi2k wrote:
Funkenpants wrote:
LarryC wrote:
Cayne:

Good point. Is there a measure of how far the concept of humanity as a consciousness distinct from the body has penetrated since The Matrix? It's strongly suggested in the Legion segment of ME3.

How is Western pop culture on this? It's a fairly common conceit in scifi anime.

It's an old enough concept that we're ready for scifi writers to go back the other way and link personality to the body again.

It's hardly a concept that's specific to sci-fi. It's been a formal part of Western philosophical thought since at least Descartes, and of Christian theology since at least Thomas Aquinas. And given how much of Aquinas' philosophy is based on Aristotle, well, you do the math.

And then a follow-up. Early sci-fi writers put us in outer space... we've done that... next was virtual space... we've done that (yes, I'm way oversimplifying, but stay with me), so what's the next realm sci-fi needs to go to? Fantasy has it easy by comparison, it seems.

My inquiry was more along the lines of how mainstream virtual space has become. Scifi has been toying with the concept since Asimov's robot series and went full out in The Matrix, but how mainstream is the thinking that humanity could be seen as electronic signals in a drive?

If they're looking for colonists...

In my admittedly limited and anecdotal experience, not very. I've mostly run into two lines of thought in my discussions with people. One is that because the human brain is natural, there's something inherently special about it that can never be reproduced in an artificial manner. That rules out both virtualization and any form of actual AI. The other is the idea of mind-body dualism, which also seems to preclude virtualization. After all, a virtual you wouldn't be you, for lack of a soul.

The problem is that neuroscience is showing that there is no consciousness, no self, whatever you want to call it, without the brain. Not that the human brain must be the only platform for self, but that it's a function of biology, not something standalone. We are not imbued or implanted with a consciousness, we instead develop it as the brain develops.

So I'd say that SF writers would not be interested in what turns out to be a naive belief - that there is a self that can be separated from the body and exist outside of it. Much as we'd like that to be true, it seems to be wishful thinking.

Robear wrote:

So I'd say that SF writers would not be interested in what turns out to be a naive belief - that there is a self that can be separated from the body and exist outside of it. Much as we'd like that to be true, it seems to be wishful thinking.

Except if the self is nothing more than a machine then it could be replicated. If there is no self outside of the brain, then at least theoretically an exact copy of that brain would be a second self. And if we could make an AI that replicates brains, we could theoretically make copies of selves that would be indistinguishable from the original.

Robear wrote:
So I'd say that SF writers would not be interested in what turns out to be a naive belief - that there is a self that can be separated from the body and exist outside of it. Much as we'd like that to be true, it seems to be wishful thinking.

Actually, they write award-winning--and totally awesome--books about said naive belief.

Robear wrote:
So I'd say that SF writers would not be interested in what turns out to be a naive belief - that there is a self that can be separated from the body and exist outside of it.

I don't understand. Are you saying this isn't a very common concept in science fiction?

The problem I see is that sci-fi seems to have already gone everywhere it can possibly go. At this point we're really just refining already explored topics. Now if you're asking what the next popular topic in sci-fi is going to be...
I mean, if you start counting individual short stories, older novels, etc., sci-fi has already explored the concepts of a singularity, a singularity within a singularity, AIs, human AIs, augments, consciousness uploads, generational overlap due to relativistic travel where time-debts are accrued, human evolution to the point of becoming pure energy, time travel, dimensional travel, alien/human hybrids, cycles of existence, artificial realities... yeah, I'm getting tired just thinking of all of the topics I've read about in various sci-fi stories.

You're overstating the case. Neuroscience is showing that there is no special thing outside of the function of the brain that embodies consciousness. That does not, however, imply that a brain's mechanisms are the only place where consciousness can occur.

Even without removing the brain from the equation, sense of self is not necessarily bound to a body.

The mere fact that we can imagine our consciousness existing outside the bounds of a human body means that it is possible.

Is my sense of self bound more to my body or to my train of thought?

We extend ourselves with tools, and tools like the net allow us to extend our reach dramatically. Right now, the agents available online are fairly limited. We cannot transfer our intentionality into the machine. But let us imagine that we could in fact cause an intelligent agent to become active online that would make all of the same choices we would make if we were online at that moment. It can read messages, summarize them, and post responses.

It's simply a tool, right?

But then, what if we died and that agent went on working... and came across information of our death. Now it embodies the intention of the "original" body, but there is no "real person" it's working for any more. It's capable of making all of the same choices that the "original" would make, but there is no "original" any more.
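To make the thought experiment concrete, here's a toy sketch of such an agent. Everything in it is hypothetical illustration (the function names, the keyword-matching "preferences"); a real stand-in would need language understanding far beyond this:

```python
# Toy sketch of a stand-in "agent" acting on its owner's behalf.
# All names and rules here are hypothetical illustration, not a real API.

def summarize(message: str, max_words: int = 10) -> str:
    """Crude stand-in for summarization: keep the first few words."""
    words = message.split()
    head = " ".join(words[:max_words])
    return head + ("..." if len(words) > max_words else "")

def choose_response(message: str, preferences: dict) -> str:
    """Pick the canned reply the 'original' person would plausibly give."""
    for keyword, reply in preferences.items():
        if keyword in message.lower():
            return reply
    return "Interesting, let me think about that."

def run_agent(inbox: list, preferences: dict) -> list:
    """Read each message, summarize it, and draft a response.

    Note that nothing in this loop depends on the person behind
    `preferences` still being alive; the agent simply keeps going.
    """
    outbox = []
    for message in inbox:
        outbox.append({
            "summary": summarize(message),
            "reply": choose_response(message, preferences),
        })
    return outbox
```

The point of the sketch is the comment in the loop: it runs exactly the same whether or not its "original" still exists.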

Now take that to the context of GitS, where the main character is a cyborg. She has a living human brain connected to a machine body. Her brain has access to the sensory and computational abilities of that body in addition to those of the brain. As time passes, which portion of her self and intention are bound to the flesh, and which develop outside it? Could things not move in a direction where instead of the brain being "in charge" and directing her actions in the computer mechanisms, the computer mechanisms start managing the majority of thinking and the brain is merely consulted on an infrequent basis?

This sense of self, this "continuity of the soul" is not a naive thing. It's a very deep and complicated thing.

It is true that our consciousnesses exist embodied in the biological processes of human brains. But it is not clear that this must continue to be the case. Stories about "downloading" minds are rather naive on the face of things. But they're not so silly if you look towards a slow movement to augmented minds and finally to minds produced outside of biology completely. And in that context, questions about "copying" minds become more relevant again, as it's easier to imagine what copying means in that context.

I recommend taking a look at The Collapsium by Wil McCarthy (and some of his other works) for some interesting stories involving not just mind-copying but whole self-copying, and questions about what happens when two "versions" of the same person diverge. Diaspora by Greg Egan includes three societies: people who stick with their biological bases, people who "uploaded" their minds into physical robot shells and believe in preserving an attachment to the physical world, and people who "uploaded" into a pure mind-space. Some very interesting ideas there.

Kehama wrote:
I mean, if you start counting individual short stories, older novels, etc., sci-fi has already explored the concepts of a singularity, a singularity within a singularity, AIs, human AIs, augments, consciousness uploads, generational overlap due to relativistic travel where time-debts are accrued, human evolution to the point of becoming pure energy, time travel, dimensional travel, alien/human hybrids, cycles of existence, artificial realities... yeah, I'm getting tired just thinking of all of the topics I've read about in various sci-fi stories.

SF changes with scientific fashion. In 15 years there will be some new area of science or tech that's in vogue, and all these hard SF writers will be running around writing up scenarios involving that tech. You can't predict what this tech will be, but you can feel confident that something new will turn up. It always has during the past 500 years.

Incidentally, most of these books will be forgettable, filled with wooden characters who exist only to explain the new tech, and if the reader is really unlucky, will contain completely awkward sex scenes in tacked-on romantic subplots. But some are bound to be pretty good.

Stem cells - we need more science fiction about them. Most probably it will be sci-horror, though it could do very well as human-condition drama.

OG_slinger wrote:
Robear wrote:
So I'd say that SF writers would not be interested in what turns out to be a naive belief - that there is a self that can be separated from the body and exist outside of it. Much as we'd like that to be true, it seems to be wishful thinking.

Actually, they write award-winning--and totally awesome--books about said naive belief.

Oh man. I'm totally up on the academic side of this stuff, with cybernetics, embodiment, and my professor's big idea lately of "the network everting." We've actually been looking at Spook Country, Ready Player One and Reamde as part of class discussions (as well as people like Katherine Hayles). We're tracing all sorts of things from the philosophical cyberneticists of the mid-20th Century (and Phil Dick, as a sort of rogue awesomebro), through the early "We're totes going to ascend into cyberspace" cyberpunk, and back to today's realization that not only can we not escape our embodied realities, but the cyberspace we thought we would escape into has flipped inside out and colonized "reality."

Funkenpants wrote:
I don't understand. Are you saying this isn't a very common concept in science fiction?

Yeah, I phrased that badly. The concept I was getting across is that the idea of a self, consciousness, soul or whatever that exists separately from the body is hardly new to humanity, and so is not really a "next step" for SF. It's also turned out to be a naive belief in the sense that what appears to us to be true is not. So yes, it's a great vein to be mined by all literary genres, but it's not new or cutting edge - it's not really the kind of thing that would change SF or that it would have to "get ready" for.

Demyx wrote:

Except if the self is nothing more than a machine then it could be replicated. If there is no self outside of the brain, then at least theoretically an exact copy of that brain would be a second self. And if we could make an AI that replicates brains, we could theoretically make copies of selves that would be indistinguishable from the original.

Of course, the state of the machine is the self, not the machine itself. Even a cloned brain that is identical to an original would be different, given different sensory input.

Hypatian wrote:

You're overstating the case. Neuroscience is showing that there is no special thing outside of the function of the brain that embodies consciousness. That does not, however, imply that a brain's mechanisms are the only place where consciousness can occur.

Even without removing the brain from the equation, sense of self is not necessarily bound to a body.

The mere fact that we can imagine our consciousness existing outside the bounds of a human body means that it is possible.

First, I noted that *human* brains are not the only ones that can be conscious. Other strata? Sure, I think so, provided they can supply the actual functions needed. I have no problem with that; I was raised Functionalist (which, yes, has its origins in dualism).

But in your second paragraph, you sort of sidestep out of what's accepted. We can *feel like* our sense of self is outside of our body, but that's an illusion. It can be artificially caused by damage to or stimulation of parts of the brain, and even through vestibular stimulation and visual illusions. "Sense of self" is not actually reliable when trying to determine the origin or location of the self. The brain can be fooled, and it can fool itself.

The third one, I don't understand. Because we can imagine something, that means it's possible? Even if that were true, it's got nothing to do with whether what we imagine is actually true, and supported by evidence. Also, there's a question here of what you mean. Do you mean a free-floating self with no neural component underlying it? Or do you mean that the self could conceivably be transferred to some other stratum - another brain, or a machine?


We extend ourselves with tools, and tools like the net allow us to extend our reach dramatically. Right now, the agents available online are fairly limited. We cannot transfer our intentionality into the machine. But let us imagine that we could in fact cause an intelligent agent to become active online that would make all of the same choices we would make if we were online at that moment. It can read messages, summarize them, and post responses.

It's simply a tool, right?

But then, what if we died and that agent went on working... and came across information of our death. Now it embodies the intention of the "original" body, but there is no "real person" it's working for any more. It's capable of making all of the same choices that the "original" would make, but there is no "original" any more.

Problem is that that is simply an emulator of decisions in a particular context. It does not have the *physical* input to the brain that underlies the consciousness. It does not sleep, it does not experience altered states of function, it has no rhythms, no electrical phenomenon moving through the particular brain that was once occupied by the deceased. It lacks the feedback components that seem to lead to consciousness.

It would be tremendously useful for answering emails, but it would not be the person.


Now take that to the context of GitS, where the main character is a cyborg. She has a living human brain connected to a machine body. Her brain has access to the sensory and computational abilities of that body in addition to those of the brain. As time passes, which portion of her self and intention are bound to the flesh, and which develop outside it? Could things not move in a direction where instead of the brain being "in charge" and directing her actions in the computer mechanisms, the computer mechanisms start managing the majority of thinking and the brain is merely consulted on an infrequent basis?

Honestly, I think that if the brain's functions were taken over, consciousness would fade. Why they would cease to operate when other sources were also working is another assumption that I question - I don't think the brain just leans back in its easy chair and says "Arm processor 3 now monitors pain, I'll just turn that function off". So there's a lot there that does not seem to mesh with what's known (although I love the concept). I don't think you can "turn off" the thinking of the brain just by having an external controller of some sort.


This sense of self, this "continuity of the soul" is not a naive thing. It's a very deep and complicated thing.

It is true that our consciousnesses exist embodied in the biological processes of human brains. But it is not clear that this must continue to be the case. Stories about "downloading" minds are rather naive on the face of things. But they're not so silly if you look towards a slow movement to augmented minds and finally to minds produced outside of biology completely. And in that context, questions about "copying" minds become more relevant again, as it's easier to imagine what copying means in that context.

I recommend taking a look at The Collapsium by Wil McCarthy (and some of his other works) for some interesting stories involving not just mind-copying but whole self-copying, and questions about what happens when two "versions" of the same person diverge. Diaspora by Greg Egan includes three societies: people who stick with their biological bases, people who "uploaded" their minds into physical robot shells and believe in preserving an attachment to the physical world, and people who "uploaded" into a pure mind-space. Some very interesting ideas there.

Yep, familiar with them. But that's not what I was addressing - I have no problem with *moving* the consciousness to a suitable substrate, or even some wave-hands way of copying it to one. It's the *disembodied* self I have trouble with, because there seems to be no evidence for it - it's that belief which is naive - that is, built on accepting the evidence of one's intuition and feelings about the world, rather than the evidence at hand. (Please, no one take "naive" as a put-down; it has a technical sense in these discussions.) Remember, the original post cites consciousness as "distinct from the body", and that's not really science-based, but it *is* widely believed to be true, so I question whether it's something SF needs to get ready for.

Robear wrote:
Problem is that that is simply an emulator of decisions in a particular context. It does not have the *physical* input to the brain that underlies the consciousness. It does not sleep, it does not experience altered states of function, it has no rhythms, no electrical phenomenon moving through the particular brain that was once occupied by the deceased. It lacks the feedback components that seem to lead to consciousness.

It would be tremendously useful for answering emails, but it would not be the person.

Out of curiosity, where's the dividing line for you? When would it stop being an emulation and start being conscious in its own right? Given that the mind arises only out of the physical processes of the brain, I feel a perfect copy of those processes would for all intents and purposes be me, regardless of the medium it's running on.

A "perfect copy" isn't what was described in the post I responded to. We'd need a physical matrix that reproduced the chemical and electrical activity of the brain, including sensory and glandular inputs. In other words, right now, the dividing line is the human body. If we can eventually model that at a molecular level, then that model could be said to be capable of self-awareness. But if we can't do a model of that complexity, then any self-awareness that arises will not be human.

I do think that computer programs can be and many animals are self-aware, but not in a way *identical* to humans. That is, dogs don't have "human souls in a dog body", for example. Their self-awareness is fundamentally different from ours, because their bodies are different. Even though they are self-aware, there are aspects of that that are decidedly non-human.

Also, again, in order for it to be a perfect copy of *you*, it needs both your body and your previous experiences, because those are *physically* encoded in the brain. It's entirely possible that moving a single mental state - a snapshot of the brain - into a body that has not had those experiences will not work, because the physical connections created in the first body that led to the brain state are not actually there. The new state would not find the connections it needs to access memories and might just collapse into random electrical activity as it tries to manipulate a physical network that's not there. We'd need to overcome that tight link between connections and cognition in order to copy the state of the brain.

Of course, if it's a *perfect* copy, one can argue that it already contains the physical connections needed. But just growing something with the same DNA and gene expression may not be enough - we may need a quantum-level duplicate to do it right. (There's research supporting a view that part of consciousness is based on quantum effects, believe it or not.)

I suspect we'll see artificial consciousness before we see a transfer of human consciousness from one body to another.

Zona wrote:
Robear wrote:
Problem is that that is simply an emulator of decisions in a particular context. It does not have the *physical* input to the brain that underlies the consciousness. It does not sleep, it does not experience altered states of function, it has no rhythms, no electrical phenomenon moving through the particular brain that was once occupied by the deceased. It lacks the feedback components that seem to lead to consciousness.

It would be tremendously useful for answering emails, but it would not be the person.

Out of curiosity, where's the dividing line for you? When would it stop being an emulation and start being conscious in its own right? Given that the mind arises only out of the physical processes of the brain, I feel a perfect copy of those processes would for all intents and purposes be me, regardless of the medium it's running on.

I think it's really a matter of consistency and texture.

I always thought the Turing Test was kind of ridiculous. Why would you base a test for true intelligence on something as gullible and fallible as human perception? Then I dug a little deeper and found I was coming at it from the wrong direction. Just as a machine can't prove it's sentient, neither can another person. We technically don't even know if we are sentient in the truest sense. Therefore, the appearance of sentience is "good enough" to pass for true sentience, if there is such a thing.

So maybe it doesn't really matter if we can create the same sort of sentience we all believe we have (and cannot prove exists at all). Maybe it's good enough to create something that looks like sentience. Maybe it's good enough to have something that looks like sentience.

Over this last year, I have witnessed how malleable consciousness can be. Maybe we can't perfectly transfer a biological intelligence to an artificial platform. My question is, why does it have to be perfect?

LobsterMobster wrote:

We technically don't even know if we are sentient in the truest sense. Therefore, the appearance of sentience is "good enough" to pass for true sentience, if there is such a thing.

Well, I'd say we'd each conclude that we are, ourselves, sentient. It's other people who we can't be sure of. (Some more than others, runs the joke.)


So maybe it doesn't really matter if we can create the same sort of sentience we all believe we have (and cannot prove exists at all). Maybe it's good enough to create something that looks like sentience. Maybe it's good enough to have something that looks like sentience.

Over this last year, I have witnessed how malleable consciousness can be. Maybe we can't perfectly transfer a biological intelligence to an artificial platform. My question is, why does it have to be perfect?

That's one of the ambiguities in the discussion. The original statement was that consciousness existed independently of the body. I think that's essentially a non-starter. Then it morphed into "Well, what *could* support consciousness?" and then "What is useful consciousness, anyway?", as well as "Can human consciousness be transferred to another container?" and "Are there other consciousnesses that are different from human?". So we're starting to get ideas mixed up here.

Aw, from the thread title, I assumed this would be a discussion of sexbots.

Quintin_Stone wrote:
Aw, from the thread title, I assumed this would be a discussion of sexbots.

Why would I need a sexbot when I have your mom?

Quintin_Stone wrote:
Aw, from the thread title, I assumed this would be a discussion of sexbots.

Considering how drunk the parties in question can be and still participate, I don't think that's a good metric for "consciousness".

I always thought the Turing Test was kind of ridiculous. Why would you base a test for true intelligence on something as gullible and fallible as human perception?

Good job on digging further there. Anytime you think Turing was being stupid, it probably means you don't have enough data yet.

The man was a titan.

More on topic: there's a lot of research now showing that an awful lot of what we call consciousness, isn't. It's mostly illusion. You make decisions on a nonverbal level almost all the time, and then your conscious mind chases along after the fact and makes up stories about why you chose to do that, which often have no actual connection to the real reasons.

They've seen this in patients who have had their corpus callosum cut, severing the link between the right and left halves of the brain. If researchers whisper into one ear to get up and go to the refrigerator, the patient will do so. If they then ask the other half of the brain why it got up, it will say it was thirsty, and it will be absolutely convinced that this was why it made the trip.

In other words, in many respects, consciousness is an ongoing, baldfaced series of lies.... which the brain isn't even aware of telling.

OG_slinger wrote:
Robear wrote:
So I'd say that SF writers would not be interested in what turns out to be a naive belief - that there is a self that can be separated from the body and exist outside of it. Much as we'd like that to be true, it seems to be wishful thinking.

Actually, they write award-winning--and totally awesome--books about said naive belief.

Yes they do.

Those are both very good books, but wow is Diaspora ever depressing.

Malor wrote:
Those are both very good books, but wow is Diaspora ever depressing.

Really? I found it to be quite the opposite - it was weirdly uplifting.

Oh, I thought it was just awful, at the end. I don't want to get into spoilers, but I was bummed out for a good couple hours afterward.

Oh, by the way, if you liked those books, check out The Quantum Thief, by Hannu Rajaniemi. It's kind of in the same general vein, though what vein that is, precisely, is hard to describe.

wordsmythe wrote:
OG_slinger wrote:
Robear wrote:
So I'd say that SF writers would not be interested in what turns out to be a naive belief - that there is a self that can be separated from the body and exist outside of it. Much as we'd like that to be true, it seems to be wishful thinking.

Actually, they write award-winning--and totally awesome--books about said naive belief.

Oh man. I'm totally up on the academic side of this stuff, with cybernetics, embodiment, and my professor's big idea lately of "the network everting." We've actually been looking at Spook Country, Ready Player One and Reamde as part of class discussions (as well as people like Katherine Hayles). We're tracing all sorts of things from the philosophical cyberneticists of the mid-20th Century (and Phil Dick, as a sort of rogue awesomebro), through the early "We're totes going to ascend into cyberspace" cyberpunk, and back to today's realization that not only can we not escape our embodied realities, but the cyberspace we thought we would escape into has flipped inside out and colonized "reality."

I'd so love to be in that class. Love those books!

Diaspora looks similar to Ilium by Dan Simmons. I loved that book and think it presented some pretty far-out ideas.

Malor wrote:
Oh, by the way, if you liked those books, check out The Quantum Thief, by Hannu Rajaniemi. It's kind of in the same general vein, though what vein that is, precisely, is hard to describe.

Thanks for the recommendation - will totally check that out. :)