
3 answers

All life, by definition, experiences noxious stimuli, i.e. pain, and tries to move away from them. We then learn from that response, and learning is required for intelligence. Therefore, true "organic" (growing) intelligence would require such programming, unless movement itself, the fact that all living things move (for example, away from noxious stimuli), is the critical variable in the chain from stimulus to learning to intelligence. That variable is not yet well understood in contemporary science. If so, an artificial intelligence that cannot move would need some alternative route to "organic" or growing intelligence.

2006-11-12 09:13:06 · answer #1 · answered by Anonymous · 0 1

It all depends on your definition of pain and suffering.

In the simplest terms, pain is a signal the body produces to tell the brain that something is wrong at the most basic level. It usually happens when damage is inflicted on the body.

With this definition, it would be stupid not to include pain in the equation for artificial intelligence. A feedback system that tells the processing circuits that part of the machine's body is not functioning correctly, and may be damaged, could greatly increase the reliability and integrity of the system.
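As a rough sketch of what I mean, something like the following toy loop (purely illustrative; every name in it is made up, not any real robotics API) would treat "pain" as a graded damage-feedback signal the controller reacts to:

```python
# Purely illustrative sketch: "pain" as a damage-feedback signal.
# All names here are hypothetical; this is not any real robotics API.

from dataclasses import dataclass

@dataclass
class SensorReading:
    component: str      # which part of the body reported
    integrity: float    # 1.0 = healthy, 0.0 = destroyed

def pain_signal(reading: SensorReading, threshold: float = 0.8) -> float:
    """Map loss of integrity to a 'pain' intensity in [0, 1].

    Below the threshold the signal ramps up quickly, so the controller
    is pushed hard to protect the damaged component.
    """
    if reading.integrity >= threshold:
        return 0.0                      # healthy enough: no pain
    damage = 1.0 - reading.integrity
    return min(1.0, damage / (1.0 - threshold))

def control_step(readings: list[SensorReading]) -> list[str]:
    """Return protective actions for any component reporting strong pain."""
    actions = []
    for r in readings:
        if pain_signal(r) > 0.5:
            actions.append(f"reduce load on {r.component}")
    return actions

if __name__ == "__main__":
    readings = [SensorReading("left_arm", 0.95), SensorReading("gripper", 0.4)]
    print(control_step(readings))   # -> ['reduce load on gripper']
```

The point is only that a graded signal, rather than an on/off fault flag, lets the system prioritise which damage to protect against first, which is where the reliability gain would come from.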

The bigger ethical question is whether it would be ethical to indiscriminately cause pain to an artificial intelligence. Torture (that is what I am talking about) is immoral to inflict on humans, so would it be ethical to use such methods on an AI system? If the system is truly intelligent, such treatment could cause a form of psychological malfunction in it. (It would differ from a human breakdown because of the different structure, but the unreliability of the results would be similar in nature.) After such damage is done, it would likely be difficult to repair the system without a complete reset.

2006-11-14 01:38:34 · answer #2 · answered by CoveEnt 4 · 0 0

Interesting.

In creating 'artificial life' you have a choice in how you create it. Why would you want to inflict suffering and pain on something you created unless you could prove it is necessary?

I don't think we have dealt with this yet, but it's probably a long way off.

If history is anything to go by, most would agree it is wrong to inflict pain and suffering, yet the world is full of it.

Poor robots and such; let's make them free from suffering and pain. Remember, they will make your tea, cook your food, build your cars, be our slaves, etc.

Edit: I take back the bit about it being a long way off; clearly a lot of thought has gone into this already.

Not 100% convinced by the pain argument, but I get the point.

If we take the definition of suffering (from Wikipedia):

Suffering is any aversive (not necessarily unwanted) experience and the corresponding negative emotion. It is usually associated with pain and unhappiness, but any condition can be suffering if it is subjectively aversive. Antonyms include happiness or pleasure.

The question then becomes: is it ethical to put in algorithms for emotion? Presumably humans could merrily torture these artificially intelligent lifeforms free from guilt, as without emotion algorithms they would not suffer.
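To make that concrete, here is a toy sketch (entirely made up, not a real model of emotion) of what such an "emotion algorithm" might amount to: an internal valence variable driven negative by aversive input.

```python
# Illustrative only: one naive way an "emotion algorithm" could be bolted on.
# 'valence' here is a made-up internal variable, not an established model.

def update_valence(valence: float, aversive_input: float,
                   decay: float = 0.9) -> float:
    """Accumulate negative emotion from aversive experience, with decay.

    Valence drifts back toward 0 (neutral) on its own; repeated aversive
    input drives it negative, which is the 'suffering' in this toy model.
    """
    return decay * valence - aversive_input

valence = 0.0
for pain in [0.0, 0.8, 0.8, 0.8]:      # repeated aversive stimulation
    valence = update_valence(valence, pain)
print(f"valence after repeated pain: {valence:.2f}")   # clearly negative
```

Leave that module out and, by the Wikipedia definition above, there is no negative emotion and therefore no suffering, which is exactly the loophole the torture question turns on.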

We could build the perfect slave. Is this ethical?

2006-11-12 17:09:44 · answer #3 · answered by D.F 6 · 0 0
