Depends on what the programmer sees as right or wrong.. so the robot, or whatever it may be, would not make a "decision" after thought, but after the execution of a compiled, human-built program
2007-08-16 04:09:46
·
answer #1
·
answered by Anonymous
·
1⤊
1⤋
It is possible to program every law into a robot, kind of like RoboCop. It is also possible to program several if-then commands into robots that will let them "decipher" right from wrong.
Thing is, you cannot program humanity or program a soul. The human will to live and flourish will outlast any living creature or robotic creation. Humans react based on reflexes, experiences and CHOICE. We don't react based on programmed if/then statements; we react based on want and need, even if it goes against our learned behaviors or what we have been taught. A robot can never make its own decisions based on emotion unless it is programmed to occasionally break a programmed rule. Either way it is programmed, and not reacting naturally.
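To make the point concrete, here is a minimal sketch of what those hard-coded if/then commands might look like. The rule names and actions are entirely hypothetical, not from any real robot; the sketch just shows that every case must be anticipated by the programmer, and anything not covered falls through the cracks:

```python
# Hypothetical sketch of rule-based "ethics": every case the robot can
# handle must be anticipated and written down by a human programmer.
RULES = [
    # (condition, verdict) pairs, checked in order
    (lambda action: action.get("harms_human", False), "wrong"),
    (lambda action: action.get("breaks_law", False), "wrong"),
    (lambda action: action.get("helps_human", False), "right"),
]

def judge(action):
    """Return 'right' or 'wrong' for an action dict, or 'unknown'
    if no programmed rule covers it -- the gap this answer points at."""
    for condition, verdict in RULES:
        if condition(action):
            return verdict
    return "unknown"  # no rule fired: the robot has no opinion

print(judge({"harms_human": True}))      # -> wrong
print(judge({"tells_white_lie": True}))  # -> unknown
```

Notice the robot never "decides" anything: it only looks up whichever verdict a human wrote down in advance, and shrugs at anything outside its rule list.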
AI does not prove that humans do not have souls; it only proves that humans are smart enough to create a machine that mimics life's behaviors. This is likely why we have been able to inhabit the earth as long as we have, and barring any uncontrollable asteroid or freak of nature from the weather, we will likely inhabit the earth until God is ready to evolve us into something else, or human 2.0.
2007-08-17 08:12:00
·
answer #2
·
answered by handsome_bigfella 5
·
0⤊
0⤋
The 'concept' of right and wrong? I don't think it's a concept, it is a reality. I'm sure an AI being can be programmed with knowledge of every single law, but there are other areas where law does not apply, such as being rude or hurting someone's feelings. Although I'm also sure an AI being can be programmed with manners as well. I guess we'll just have to wait and see.
2007-08-16 04:13:32
·
answer #3
·
answered by Anonymous
·
0⤊
0⤋
AIs would first have to have the capability to self-direct - to choose a goal that has not been set for them - which I don't think they have yet.
I think a realistic answer for that part of this question will have to wait a few years. As to the soul part, yes, it should prove that souls don't exist - similar to when a dog that is normally house-trained gets annoyed at being left alone and urinates. This occurs often; they know it's wrong and will appear contrite even without cues from their master.
2007-08-16 04:12:03
·
answer #4
·
answered by Pirate AM™ 7
·
0⤊
0⤋
Yes.. once humans finally discover and understand the brain fully, I don't see why it isn't possible.
When we are trapped in a room, a person will try every possible way to get out, and use every possible tool. Program that into a chip and the chip will react the same way as the person.
Of course it's theoretical, but it's possible that one day we reach the point where a robot can get the concept of good or bad.. but sooner or later they would want to preserve their own existence =P
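The "try every possible way out" idea above is basically exhaustive search, which is easy to program today. A toy sketch, assuming a made-up room laid out as a grid with a marked start and exit (the layout and names are my own, purely for illustration):

```python
from collections import deque

# Toy room: '#' wall, '.' floor, 'S' start, 'E' exit (hypothetical layout)
ROOM = [
    "#####",
    "#S..#",
    "#.#.#",
    "#..E#",
    "#####",
]

def find_exit(room):
    """Breadth-first search: systematically try every reachable cell,
    like a person trying all possible ways out of a locked room."""
    start = next((r, c) for r, row in enumerate(room)
                 for c, ch in enumerate(row) if ch == "S")
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        (r, c), steps = queue.popleft()
        if room[r][c] == "E":
            return steps  # number of moves to reach the exit
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if room[nr][nc] != "#" and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), steps + 1))
    return None  # no way out

print(find_exit(ROOM))  # -> 4
```

The chip "tries everything" without any understanding of rooms or escape; whether that counts as reacting the same way as the person is exactly what this thread is arguing about.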
2007-08-16 04:13:03
·
answer #5
·
answered by Curious 3
·
0⤊
0⤋
This might well be a straw-man argument. You assume faith and intelligence are both right. If we can create artificial silliness, then perhaps we have something… Joking aside, the reality is that man consists of two things: mind and heart. The heart is a desire to receive, and the mind is the tool to fulfill that desire. It's the heart that drives man, and the mind finds a way - be it to work, to steal, to have money, honor and power, or even to the point that he will create a fantasy that a great reward awaits him after death. The question is, can we create an (artificial) infinite desire to receive? At that point it would be self-evolving, because this desire would consume the whole planet, until finally it understood that what it is looking for cannot be found.
2016-10-02 10:58:10
·
answer #6
·
answered by ? 4
·
0⤊
0⤋
On the contrary, this would prove that machines or robots have no free will; they're programmed to behave how they behave. People, however, no matter how they are trained or brought up, tend to stray from their earlier mindset because of their free will. Good or bad decisions are a product of our own free will, which God gave us in the beginning. See, regardless of what some unbelievers might say, God doesn't want a bunch of mind-controlled zombies just doing everything he tells them without question; he gives us the free will to decide what our true beliefs are. Many, through their own bad choices in life, choose poorly; those of us who choose well are rewarded with life. Behold, I put before you life and death; choose life. AIDS; abstinence; murder; forgiveness; life; death. Choose life. GOD BLESS!!
2007-08-16 04:39:16
·
answer #7
·
answered by MOPE DE VOPE 2
·
0⤊
0⤋
One HUGE problem with your theory: just how reliable are computers? They break down and malfunction, don't they?
The idea of making any computer distinguish and decide a moral code means bringing the computers to a certain level of AI.
Can you really trust a computer to understand the relevance of right and wrong? And if you can reprogram a computer to be a "good" computer, just how long do you think it will take for a hacker to reprogram the computer to be not so good?
So, even if a computer CAN be programmed, I don't think you can trust that the computer will be able to always distinguish between right and wrong.
2007-08-16 04:10:35
·
answer #8
·
answered by Searcher 7
·
1⤊
1⤋
If it's artificial, it doesn't have a soul. And it will follow whatever program is instilled in it, regardless of the consequences.
2007-08-16 04:14:48
·
answer #9
·
answered by iron maiden77 5
·
0⤊
0⤋
You can program anything into something; however, you lose the point of free will and the freedom to make that choice.
2007-08-16 04:14:34
·
answer #10
·
answered by . 3
·
0⤊
1⤋