
In "The Singularity is Near" by Ray Kurzweil, the guy scared me by saying that computers will one day be able to learn, make mistakes, and adapt to their environment - but how can they if they have no reason to adapt? Natural selection requires some motivation - - or just good programming? He also says computers can be creative, which has been, to me, always a sense of a human's freewill... is any of this possible?? I'm scared but reaaaaaaaally fascinated....

2007-07-23 17:20:16 · 4 answers · asked by Anonymous in Science & Mathematics Engineering

A lot of what he says is mathematically legitimate, talking about the way the brain makes neural connections and comparing it to a computer program using "cellular automata" (a lot like controlled chaos).

2007-07-23 17:23:05 · update #1

This is hard for a biology major to hear...

2007-07-23 17:24:20 · update #2
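For anyone who hasn't run into the term: cellular automata are grids of cells that each follow a very simple local rule, and the complicated patterns come from the cells interacting, not from any central plan. Kurzweil doesn't give code for this, so the following is just a minimal, generic one-dimensional example (Rule 110, in Python) to show the "simple rules, controlled chaos" idea - a toy sketch, not anything from the book:

# A minimal one-dimensional cellular automaton (Rule 110).
# Each cell looks only at itself and its two neighbors, yet the
# pattern that emerges is famously complex ("controlled chaos").

RULE = 110  # the update rule, encoded as an 8-bit lookup table

def step(cells):
    """Apply the rule once to every cell (wrapping at the edges)."""
    n = len(cells)
    new = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right  # neighborhood as a number 0..7
        new.append((RULE >> index) & 1)
    return new

if __name__ == "__main__":
    cells = [0] * 40 + [1] + [0] * 40   # a single live cell in the middle
    for _ in range(30):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)

Run it and a ragged, hard-to-predict pattern spreads out from the single live cell, even though every step is completely deterministic.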

yarr - that's really interesting... a computer automatically saving its own energy - it's kind of like survival of the fittest... if you put more than one of these computers on the same circuit, could you make them fight over electricity?

2007-07-23 17:32:56 · update #3

You've all been very helpful - but, philosophy-wise, can you singularity fans tell me if my free will is endangered? If the environment I've been adapting to my whole life - the computers I've been working with - is now thinking like me, part of me, learning alongside me, and (possibly?) competing for "survival"... then does the definition of free will or volition change in the future?

2007-07-23 17:58:28 · update #4

4 answers

Yeah, I've read that singularity stuff, and to be honest with you, I can't legitimately disagree with it. I saw an interesting video called "Shift Happens" that illustrated a lot of these points (I'm too lazy to dig it up right now, but maybe you can search for it). Anyway, the premise is that instead of man coding software that is capable of thinking for itself (which, ask any AI programmer, is an immense feat of software engineering), man will create software that is capable of emulating the human brain. And if you keep reading the singularity stuff, they even get into the ethical issues, like: if you could copy your brain into a computer, would you? Anyway, I have to agree it's scary, but that's mainly just because it's something completely different from what we're used to.

2007-07-23 17:33:37 · answer #1 · answered by Anonymous · 0 0

This isn't a matter of natural selection; computers don't reproduce themselves. It does take some very good and imaginative programming to pull this off. Creativity may not necessarily mean "free will"; rather, a computer would seek an alternate solution to a problem if the usual approach fails.

We already have computers that can strain the resources of a good chess player; it's just a matter of expanding the repertoire of the machine.

2007-07-24 00:29:41 · answer #2 · answered by cattbarf 7 · 0 0
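For context on the chess point: classic chess programs don't get their strength from creativity; they mostly do a brute-force game-tree search plus a hand-tuned evaluation. The answer above doesn't spell out the algorithm, so here is a tiny, generic minimax sketch in Python - for an abstract game, not real chess, and the helper functions are placeholders you would supply:

# A tiny, generic minimax search - the core idea behind classic chess engines.
# The game is abstract: you supply the move generator, transition, and evaluation.

def minimax(state, depth, maximizing, moves, result, evaluate):
    """Return the best achievable score from `state`, looking `depth` plies ahead.

    moves(state)        -> list of legal moves
    result(state, move) -> the state after playing `move`
    evaluate(state)     -> heuristic score (higher is better for the maximizer)
    """
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    if maximizing:
        return max(minimax(result(state, m), depth - 1, False, moves, result, evaluate)
                   for m in legal)
    return min(minimax(result(state, m), depth - 1, True, moves, result, evaluate)
               for m in legal)

if __name__ == "__main__":
    # Throwaway toy game just to show the call shape:
    # the state is a pile of stones, each move removes 1 or 2.
    moves = lambda pile: [t for t in (1, 2) if t <= pile]
    result = lambda pile, take: pile - take
    evaluate = lambda pile: 1 if pile == 0 else 0  # emptying the pile is "good"
    print(minimax(7, 6, True, moves, result, evaluate))

Real engines layer alpha-beta pruning, opening books, and endgame tables on top of this, which is roughly what "expanding the repertoire of the machine" means in practice: search more moves, more deeply, with better evaluations.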

It would be a DNA-based computer - a supercomputer, so to speak. Some computers already self-correct: if my computer's battery is not charging, it turns down the screen brightness so it uses the least amount of energy. Computers will evolve and have free will.

2007-07-24 00:29:55 · answer #3 · answered by yarrrr... 2 · 0 0
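The screen-dimming behavior mentioned above is a pre-programmed feedback rule rather than evolution, but it is a concrete example of a machine responding to its environment. A minimal sketch of that kind of rule, in Python (the function name and thresholds are made up for illustration, not any real laptop's firmware):

# A minimal power-management rule of the kind described in the answer above.
# All names and numbers here are hypothetical; real laptops do this in
# firmware or OS power services.

def choose_brightness(on_ac_power, battery_percent):
    """Pick a screen brightness (0-100) from the current power situation."""
    if on_ac_power:
        return 100      # plugged in: full brightness
    if battery_percent > 50:
        return 70       # comfortable but frugal
    if battery_percent > 20:
        return 40
    return 15           # nearly empty: save every bit of energy

print(choose_brightness(on_ac_power=False, battery_percent=35))  # -> 40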

Yes. Some experiments have been done along these lines, but computer horsepower is still far less than human brain horsepower, so there is a long way to go.

2007-07-24 00:25:19 · answer #4 · answered by Anonymous · 0 1
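To put very rough numbers on "brain horsepower" versus "computer horsepower" - all of these are commonly cited order-of-magnitude estimates, not precise measurements:

# Back-of-envelope estimate of the brain's raw "operations per second".
# Every figure is a rough, commonly cited order of magnitude.

neurons = 1e11             # ~100 billion neurons
synapses_per_neuron = 1e3  # ~1,000-10,000 synapses each (low end used here)
firings_per_second = 1e2   # signals on the order of 100 per second at most

ops_per_second = neurons * synapses_per_neuron * firings_per_second
print(f"{ops_per_second:.0e} synaptic operations per second")  # ~1e16

That lands around 10^16 synaptic operations per second, roughly the range Kurzweil uses; the fastest supercomputers in 2007 were on the order of a few hundred teraflops (a few times 10^14), hence the "long way to go."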
