Sorry to parrot Gir, but here's the idea. Can intelligence exist without consciousness? If not, then if (and yes, it's a big if) we ever perfect artificial intelligence, then we will have created conscious robots. And if consciousness isn't the soul... well, let me put it this way: that'd be a loophole out of going to hell. After all, a robot is capable of making all the same choices as a thinking human being (though, to be fair, this is IF we perfect AI): shouldn't a robot's consciousness have the same rewards and punishments? Oh, I know, you'll use the "Well, the humans that created them inherit their rewards/punishments," but by that logic, are Hitler's parents responsible for his crimes? "No, because Hitler was able to have responsibility for his sins placed upon him: a robot is unable to"
... so, if by some chance I somehow became Christian, I now would want to be a robot. As if laser eyes weren't enough in the first place!
2006-08-16
15:51:31
·
13 answers
·
asked by
Anonymous
in
Society & Culture
➔ Religion & Spirituality
The question (argument, really) was "Why do robots get all the fun?"
If you assume the majority of mankind ISN'T going to heaven, then the ability to go to heaven or hell is more of a curse than a blessing (on average), so robots are really "blessed" unless the magnitude of hell is significantly less than the magnitude of heaven, or you somehow get rid of that "most of the world is going to hell" part.
2006-08-16
15:58:17 ·
update #1
This is a great question because it seems likely that one day robots (or computers at least) will acquire some sort of artificial intelligence. My guess is that it won't actually happen to a single powerful computer in some secret lab, but it will occur on a world-wide scale, as individuals keep storing and saving more and more information, and continue to become more and more connected. The web will keep getting more complex until it starts to resemble the human brain.
What seems likely is that eventually we will be recording so much of our experiences into machines that it will become harder and harder to distinguish between the two. At some point, we should (theoretically) be able to download ourselves into digital storage. The problem is that the human brain is able to cross-connect unrelated data and make decisions based on inference, something that may not be possible with non-organic memory storage.
Intelligence is closely tied to pattern recognition, and by that measure, we are getting closer to making intelligent machines, but we are still orders of magnitude away from a machine that can look at two pieces of art and declare that one is an amateur finger painting by a 5-year-old and the other a masterpiece by a professional artist. Taste may be something forever reserved to humans.
On the other hand, let's say you program a robot with the laws of the land, and then empower the robot to execute human criminals who break those laws. It is making (arguably) intelligent life-and-death decisions, albeit based on our coding. If the robot killed an innocent person by accident (don't ask me how :), would the programmers be at fault, or the society that elected to put the robot in the position of executing people? At what point does a machine know right from wrong? Is self-awareness the deciding factor?
I'm not sure if computers will become intelligent on their own, or if someday humans will discover a breakthrough in A.I. and create an intelligent computer. Whichever happens, my suspicion is that an intelligent machine would reject (mercifully) the concept of believing in anything that could not directly be tested/measured/proven, unless we programmed it with our own aberrant concepts of faith.
Wouldn't it be cool if computers became intelligent and self-aware, and then made fun of humans for believing in god? Of course, we would be the computer's god, and when you think about it, asking the computer to explain what existed before it was built would be a very interesting question, since it would know logically that we built it, but would have no ability to conceive of its own existence prior to its becoming self-aware. In other words, would a self-aware machine declare it had a soul, or that it simply did not exist when it was turned off?
Cool stuff to think about, eh?
2006-08-16 16:25:31
·
answer #1
·
answered by Anonymous
·
1⤊
0⤋
Robots do not go to hell.
Can intelligence exist without consciousness?
-----------
Excellent question -- In response:
1) Derivative intelligence can exist without consciousness (e.g., the intelligence programmed into a spreadsheet program is derivative, coming from the programmer and the user, but is without consciousness).
2) Lower life-forms (animals, etc.) appear to have clear notions of their personal boundaries (which could be viewed as evidence of intelligence), but show no clear evidence of self-consciousness or contemplative consciousness... However, they appear to be conscious, in some sense...
3) Even lower life-forms, e.g., bacteria, seem to have programmed intelligence (chemotaxis, thermotaxis, or phototaxis) but no further consciousness. The odds are that such programmed intelligence is derivative intelligence rather than some other kind of intelligence. That is, bacteria are more like very complex machines than anything else.
4) It is possible for us to program a machine to pass the Turing test. However, it appears possible to do this without creating any consciousness in the machine (or software).
5) Roger Penrose (mathematician) argues (I believe in The Emperor's New Mind) that human consciousness is something that cannot be captured through AI; that is, no matter how complex our AI software and hardware become, he argues (I believe) that human consciousness is of a different kind and cannot be captured or created by any AI...
--------------
Do Robots Dream of Electric Sheep?
Cordially,
John
2006-08-16 22:56:13
·
answer #2
·
answered by John 6
·
0⤊
0⤋
If it can't, then there would be no GOD to worship that made your soul.
IF we did have AI robots, they would still have to be programmed by faulty human minds. It would take some large quirk for a man-made object to have a soul put in.
Until we become like Gods, we could not make a soul to put in one. The CREATOR OF ALL THINGS had to use bits of its body to make the souls of all that was and will be before IT made the physical universe.
2006-08-16 23:07:17
·
answer #3
·
answered by Anonymous
·
0⤊
0⤋
Robots won't think they are going to hell unless we program them to... and if I were programming them, I'd leave that out, because I want them to be pragmatic and curious, not bound by human religious thinking.
2006-08-16 22:58:25
·
answer #4
·
answered by Chris M 2
·
0⤊
0⤋
That's not a human. It has neither hell nor heaven.
There is no chance that you would become a robot someday.
2006-08-16 22:56:05
·
answer #5
·
answered by SFNDX 5
·
0⤊
0⤋
You have to have a soul to get into either Heaven or Hell...do robots have souls? There is your answer.
2006-08-16 22:57:22
·
answer #6
·
answered by redeye.treefrog 3
·
0⤊
1⤋
First, you need a psychiatrist.
Second, only those with a soul go to heaven or hell.
Third, should you be allowed to play with electrical things like computers?
2006-08-16 22:57:23
·
answer #7
·
answered by firechap20 6
·
0⤊
0⤋
They go to the junkyard.
2006-08-16 22:57:24
·
answer #8
·
answered by cherrygurl 3
·
0⤊
0⤋
They don't go anywhere. They stay here on earth to rust.
2006-08-16 22:56:14
·
answer #9
·
answered by Art The Wise 6
·
0⤊
0⤋
Don't you watch Futurama? Of course they do.
2006-08-16 23:07:49
·
answer #10
·
answered by 138+ 2
·
0⤊
0⤋