Are not human beings thinking machines, only biological rather than electronic?
Perhaps not, but if there comes a time when no single, shared definition of thinking, consciousness, sentience, and self-awareness can distinguish human beings from, say, the metaphorical Lt. Cmdr. Data or HAL 9000, then I think the answer is yes - it is unethical.
As long as a definition can be found that applies to all human beings without applying to any machines, then it's arguable, in my opinion.
The trick is that although it sounds like it would be easy to define, it gets more and more difficult as time goes on. Already there are computers that can 'think' better than some humans can. That doesn't mean the computer is sentient, conscious, and self-aware, but it does serve as an example.
We already extend ethical consideration to patients in vegetative states. Yet ANY computer would pass ANY test of intelligence, thinking, self-awareness, etc. better than a human in a vegetative state, so....
We then make the distinction based on silicon vs. carbon. But then we have to consider that DNA is being researched as a storage medium for computers.... I guess that distinction goes out the window.
Then there is the idea of a soul. Well, when it comes to law - codified ethics - how exactly does one use the concept of a soul in a courtroom without bringing religion into the debate? Religion has no place in a court of law, where the Bible isn't a valid source of evidence - despite being what most people swear on, for at least symbolic reasons.
To acknowledge machine intelligence is to acknowledge that human beings are not special. To acknowledge Darwin is to acknowledge humans aren't special. To acknowledge God is to acknowledge humans are special. Humans would rather be special than not - to the point that we will accept irrational arguments to protect that special feeling.
Very interesting question, and one I think people don't really want to think about. Giving machines ethical consideration doesn't elevate machines but rather lowers humanity - at least, that's how some see it.
2007-01-28 12:36:30
·
answer #1
·
answered by Justin 5
·
0⤊
0⤋
Well now, this is certainly a question that touches many areas. When I read your question I immediately defined it as Woman. Not only are we responsible for the home, family, children, and finances, but we also work outside of the home. Essentially, we not only hunt, kill, clean, and cook the bacon, but we also clean up afterwards. We do laundry, wash the car, and - I don't know about most other people, but I have learned to work on my own vehicle. I clean the house, take care of my sick mother as well as my disabled mother-in-law, and manage to live up to my obligations as VP and CFO of my company. The best part is that I do not earn nearly as much as my predecessor, who was male. If they ever invent such a machine I will be the first to purchase one, because I am tired of this male-dominated society. No offense intended; this question just hit a hot spot with me. Thanks for asking.
2007-01-28 07:32:02
·
answer #2
·
answered by Winwon (Cherokee Nation) 2
·
0⤊
0⤋
I believe ethics only applies to the actions of individuals who are capable of making decisions of conduct according to precepts or understandings of right and wrong. In the case of your question, we would have to define thinking. If by thinking you mean that the machine is able to manipulate the outside world and find value in its actions, then I believe it would be unethical to prevent the machine from engaging in those activities that it values - unless, of course, those actions limit the occupational freedoms of others. This may sound strange, but I believe that future technological advances are going to need an occupation-based ethics. Currently there is limited literature on this topic, but if you want to learn more you could study the history of ethics and then study occupational science.
2007-01-28 07:23:53
·
answer #3
·
answered by Jason R 1
·
0⤊
0⤋
There's a difference between thinking... and feeling... If the machine knew fear and love, knew right from wrong, and had the will to live if its life were on the line, then you're talking about it being just like a human... If you're just talking about it learning and processing information, that's different... it's not aware of itself... of emotion...
Machines can also go without 'resting' for longer than humans can, so making them do those things wouldn't be as hard on them as it would be on us. As long as we took care of them, like we should do with our pets, then I don't see the problem - but if they're as smart as us... and can fall in love and talk with us... that's another difference...
If they did feel... of course you couldn't destroy them... just like you couldn't destroy another person...
People would be scared of such a thing... older people, at least... these generations and later ones won't be as much. They'll just expect it to happen someday.
Now, will they be treated the same as humans? No... that'll be a new type of hate... I'm human... and you're not.
We SHOULD treat them like we'd expect to be treated, just like we should treat each other...
2007-01-28 07:37:13
·
answer #4
·
answered by Anonymous
·
0⤊
0⤋
Depends what kind of thinking it can do. If it has a fully developed consciousness, then I would say it would be wrong.
The problem coming in the near future is that people will be uncertain of how fully developed the consciousness in everyday robots is. Some will be afraid to "enslave" any machine. They may become suspicious that their toaster has a consciousness.
Others will think computerized consciousness is never real, and they will take delight in torturing robots, and giggle at their electronic screams. Then the robots will become very, very angry.
2007-01-28 13:15:40
·
answer #5
·
answered by coconutmonkeybank 3
·
0⤊
0⤋
There was an episode of Star Trek: The Next Generation where a scientist doing android research wanted to disassemble Data so that he could make lots of copies of him. They had to take the whole thing to a courtroom, with Jean-Luc Picard arguing for Data and "Number One" having to argue against him (for disassembling him). In the end, Data was not disassembled, because of the parallel to slavery.
It depends on what you mean by "think," though. If the machine is really good at solving problems but isn't conscious, I don't see a problem. But if they are conscious, then it's wrong, I think.
2007-01-28 07:11:43
·
answer #6
·
answered by Anonymous
·
0⤊
0⤋
A thinking machine would be able to learn to take over the world.
I don't think you could make it wash your clothes and mow your lawn for very long.
2007-01-28 07:14:32
·
answer #7
·
answered by Havana Brown 5
·
0⤊
0⤋
I think that the way we raise animals for meat is wrong. There are alternatives, such as joining a local co-op or buying farm-fresh meats. But unfortunately, the way we mass-raise and kill animals is a product of our industrialization. It's a more efficient way of feeding more people. Mass marketers of chickens and pigs and such do try to kill the animals humanely. But what can be done to stop those tyrants? The only thing to do is to switch to free-range, grain-fed local meat. Unfortunately, most people don't really think about where that food in the grocery store or on their plates comes from. For this reason, most couldn't care less about how the animal is treated. The further we are removed from the process, the less we, as a species, seem to care about it.
2016-11-01 12:44:16
·
answer #8
·
answered by ? 4
·
0⤊
0⤋
Treat it like it's the best thing in the world; machines are human too.
2007-01-28 07:06:59
·
answer #9
·
answered by MiKe Drazen 4
·
1⤊
0⤋
Wait, a thinking machine is us... So it's kind of unethical, but I'm no authority on this.
2007-01-28 07:12:54
·
answer #10
·
answered by Tom 4
·
0⤊
0⤋