Imagine I build a robot using the Biblical (or Qur'anic, or whatever) behavioral injunctions as algorithms, so that the robot acts in complete accord with these rules. Will this robot be ethical, or only mimic what some might call ethics? This leads to my actual question: can someone functioning from externally given rules (like those in a book) be called ethical, or are they simply mimicking moral-like behavior?

2007-11-26 05:50:48 · 6 answers · asked by neil s 7 in Society & Culture Religion & Spirituality

6 answers

Presuming that the rules are ethical to start with, a human following the rules would be ethical because he is _choosing_ to follow them. Yes, the rules are laid out, but he can still choose not to follow them. A robot, on the other hand, has no choice. By its nature it is forced to follow the rules.

Ethics are a choice. The robot has no choice.

2007-11-26 06:19:05 · answer #1 · answered by Nightwind 7 · 0 0

If ethics is only "right behavior" or a moral code that can and must be followed, then ethical behavior is whatever works best for the greatest good of society.

Most people would think ethics involves moral choice.

Traditional stepwise programs, which list an action for every anticipated possibility, have every choice laid out in advance. Self-learning neural-network programs, however, superficially appear more like the kind of thinking used in solving moral dilemmas.
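To make the contrast concrete, here is a minimal, purely hypothetical sketch of what a "stepwise" moral program amounts to: a lookup table in which every situation must be enumerated ahead of time, and anything unlisted simply has no answer. The situation and action names are invented for illustration.

```python
# Hypothetical sketch of a rule-table "moral" program.
# Every situation must be enumerated in advance; novel situations fail.

RULES = {
    "found_lost_wallet": "return_to_owner",
    "witness_lie": "tell_truth",
    "asked_to_steal": "refuse",
}

def act(situation: str) -> str:
    """Return the pre-programmed action, or admit the table has no entry."""
    return RULES.get(situation, "no_rule_defined")
```

A novel dilemma such as `act("trolley_problem")` falls straight through to `"no_rule_defined"`, which is exactly the limitation the answer above describes: the choices were made by the programmer, not by the machine.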

The problem of free will versus predestination is of course relevant here. That is, if God already knows what we will do, or sets up the conditions in a mechanistic universe, then no persons except God are responsible for their actions.

Heisenberg's uncertainty principle and related theories leave room for arbitrary choice, which may be needed for moral choice to be possible. My thesis is that everything that happens is the result of choices by persons capable of morality.

In the New Testament (Romans), a story is told of a man who blamed God for his future moral choices. He stated that God knew ahead of time what his choices would be, and therefore, since his future was set, he was not responsible for his actions.

God supposedly answered something to the effect of: "God knows what you will do, but you don't, so choose to do the right thing."

During my computer training, I was told the story of a robot dog that was programmed with likes and dislikes, and was tortured in ways that would give a real dog a nervous breakdown. The robot's circuits did get scrambled, rendering the robot useless.

Even if we assume that the robot dog passed the "Turing test" in this instance, there is still no way to determine whether mental events or moral choices apply.

From a pragmatic point of view, however, we will need to act as if robots have real feelings, and not torture them, if we don't want them to break down or go "postal" on us.

2007-11-27 09:44:58 · answer #2 · answered by Graham P 5 · 0 0

The robot will be "ethical" only in the manner in which it is programmed: whatever program is in its system will determine the result. Its behaviour will not be moral, since what is being presented is not a real human but an artificial invention, and therefore it does not bear the morality expected of a human being. It is just making the fake look real. However, it may be a good example for humans to follow: a robotic consistency in the goodness expected of us.

Thanks for asking. Have a great day!

2007-11-26 19:18:14 · answer #3 · answered by Third P 6 · 0 0

A robot can't think. It can only follow a program. If it were perfectly programmed with the appropriate mandates and prohibitions, it could technically be "moral", but it would not be able to adapt to situations. Even if lives were at stake, it would not be able to break its rules. If the robot were programmed with a more sophisticated algorithm of hierarchical or conditional rules, it might not be able to function at all, given the wrong combination of conflicts.
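The deadlock this answer describes can be sketched in a few lines. The following is a hypothetical illustration (the rule names and actions are invented): each active rule forbids an action, and with the wrong combination of facts every available action gets struck out, leaving the rule-follower with nothing it is permitted to do.

```python
# Hypothetical sketch: conditional prohibitions that can conflict.
# Each rule is (priority, triggering condition, forbidden action).
RULES = [
    (1, "lives_at_stake", "stay_silent"),
    (1, "promised_secrecy", "reveal_secret"),
]

def permitted_actions(facts: set) -> set:
    """Start from all actions, then strike out whatever any active rule forbids."""
    actions = {"stay_silent", "reveal_secret"}
    for _priority, condition, forbidden in RULES:
        if condition in facts:
            actions.discard(forbidden)
    return actions
```

With only one rule triggered, an action survives; trigger both and `permitted_actions` returns the empty set: the "wrong combination of conflicts" leaves the robot unable to act, where a human would tolerate the uncertainty and choose anyway.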

No holy book can cover every situation, because they are written by humans, who can't think of everything. At best, general principles (such as preservation of life, freedom and human dignity) can be applied to a novel situation, which requires a level of abstraction and subtlety that may overreach any programmer's ability. A robot with such a capability would have essentially a human ethical capacity, able to tolerate uncertainty and to make arbitrary (not random) decisions. Less than that, it's a clever simulation.

The difference is the capacity to care. Care causes preference, desire and bias, seemingly antithetical to ethics. But without it, with pure dispassion, an artificial intelligence would likely be unable to resolve a serious moral crisis in a human-like way.

2007-11-26 14:35:06 · answer #4 · answered by skepsis 7 · 0 0

Being ethical and having morality are not the same thing.

2007-11-26 13:55:25 · answer #5 · answered by rikirailrd 4 · 0 0

Morals are irrelevant. They are used to think highly of oneself, and to feel closer to God.
However, most people who say that they live "moral lives" are the most hypocritical liars, and are the most likely to go to the hell to which they condemn non-believers of their faith.

2007-11-26 14:00:12 · answer #6 · answered by Zaya the Slaya 3 · 0 1
