
Why do people think that all robots may somehow develop some sort of error and suddenly become destructive? It's even shown that way in movies! Why didn't anyone think of a safety mechanism that could stop the robot just in case that happened?

2006-07-01 18:25:48 · 5 answers · asked by Dhan Louie ☺ 2 in Science & Mathematics Engineering

5 answers

It's human nature to fear what we don't understand.

2006-07-01 19:10:13 · answer #1 · answered by Rescue76 3 · 1 0

More than 60 years ago, Isaac Asimov, a great science fiction writer, came up with the Three Laws of Robotics, which he argued should be built into any AI (artificial intelligence).

1. A robot shall not harm a human, or through inaction allow a human to come to harm.
2. A robot shall obey the orders of a human unless this violates the First Law.
3. A robot shall protect itself unless this violates the First or Second Law.

With these laws, any AI should be safe. They should not merely be programmed, but HARD WIRED into the machine.
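The Laws above are really an ordered priority check: a lower-priority concern (self-preservation) can never override a higher one (human safety). Here is a minimal, purely illustrative Python sketch of that idea; the `Action` fields and `evaluate_action` function are invented for this example, not any real robotics API.

```python
# Hypothetical sketch: Asimov's Three Laws as an ordered priority filter.
# All names here (Action, evaluate_action) are made up for illustration;
# real robot safety systems are far more complex than a short rule list.

from dataclasses import dataclass


@dataclass
class Action:
    harms_human: bool = False     # would this action injure a human?
    allows_harm: bool = False     # would it let a human come to harm through inaction?
    disobeys_order: bool = False  # does it ignore a valid human order?
    endangers_self: bool = False  # does it put the robot itself at risk?


def evaluate_action(action: Action) -> bool:
    """Return True if the action is permitted, checking the Laws in priority order."""
    # First Law: never harm a human, by action or by inaction.
    if action.harms_human or action.allows_harm:
        return False
    # Second Law: obey human orders, unless that conflicts with the First Law.
    if action.disobeys_order:
        return False
    # Third Law: self-endangerment is allowed only because no higher law
    # is violated at this point, so the action falls through as permitted.
    return True
```

Note the order of the `if` checks is the whole mechanism: an action that harms a human is rejected before obedience or self-preservation is ever considered.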

In movies, the only robots that hunt people are those equipped with AIs. For some reason Hollywood thinks that berserk robots are a big threat, or at least that they make good scripts. Of course this isn't a new idea; these stories are all variations on the horror novel Frankenstein.

2006-07-02 01:34:30 · answer #2 · answered by Dan S 7 · 0 0

Robots are built by humans - the occasional human develops some sort of error and suddenly becomes destructive. Why should robots be any different?

2006-07-03 18:38:15 · answer #3 · answered by Paul 3 · 0 0

Haven't you seen I, Robot? It's kind of hard, because robots will only ever have an "artificial" intelligence, so there are no real emotions linked to the machines' thinking processes. Besides, they're man-made. They're bound to malfunction somehow, like the computer you used to submit this question.

2006-07-02 01:36:07 · answer #4 · answered by drakeflare73 2 · 0 0

Did it ever occur to you that robots, when mass-produced and operating in swarms, could develop a macroscopic malfunction that arises only from working together in a group while interacting with humans? Such a failure could never be predicted from any single robot on its own, so no safety mechanism could have been designed against it.

2006-07-02 01:28:38 · answer #5 · answered by Anonymous · 0 0
