Friday, November 14, 2008

Future of Humanity

Oxford's Future of Humanity Institute has published a detailed roadmap of what it would take to create a computer that behaves like a human brain--free will and all. The authors of the study take "free will" into account by substituting "sufficient noise in the simulation."

While admitting that the experiment is "fascinating," Nicholas Carr has some issues with this, and says the "Future of Humanity Institute seems to be misnamed."

I also have issues with the idea, but I think Carr misses the central flaw here, which is that free will isn't about "noise." Humans behave unpredictably because they have varying levels of morality and completely unpredictable motivations that drive them.

Let's put it this way: from a purely probabilistic point of view, a certain percentage of human beings drown kittens; if you build that into a model, then a certain proportion of the times a robot brain is confronted with a kitten, it will drown it.

But the fact is that some people will never drown a kitten, some will drown one only because there is no alternative (e.g., they can't afford to keep it), while others will drown kittens every chance they get.

No amount of "noise" will reflect that in a truly realistic manner.
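The distinction can be made concrete with a toy simulation (entirely illustrative; the rates and population types are made up for the sake of the example). A "noise" model gives every simulated agent the same biased coin flip, while a "disposition" model assigns each agent a fixed never/sometimes/always tendency. Both models can match the same population-level rate, but they look nothing alike at the level of individuals:

```python
import random

random.seed(0)

POPULATION = 1000
TRIALS = 20          # each simulated agent faces the choice 20 times
DROWN_RATE = 0.1     # assumed population-level rate, purely illustrative

# Model 1: "noise" -- every agent flips the same biased coin on each trial.
noise_agents = [[random.random() < DROWN_RATE for _ in range(TRIALS)]
                for _ in range(POPULATION)]

# Model 2: fixed dispositions -- each agent gets a stable tendency.
def make_disposition():
    r = random.random()
    if r < 0.85:    # most people never drown a kitten
        return 0.0
    elif r < 0.95:  # some do it only under duress
        return 0.5
    else:           # a few do it every chance they get
        return 1.0

disposition_agents = [[random.random() < p for _ in range(TRIALS)]
                      for p in (make_disposition() for _ in range(POPULATION))]

def aggregate_rate(agents):
    return sum(map(sum, agents)) / (POPULATION * TRIALS)

def never_count(agents):
    return sum(1 for a in agents if not any(a))

# Both models produce roughly the same aggregate rate (about 0.1)...
print("noise model rate:       ", aggregate_rate(noise_agents))
print("disposition model rate: ", aggregate_rate(disposition_agents))

# ...but the disposition model has far more agents who never do it at all,
# while the noise model makes every agent a potential kitten-drowner.
print("noise model 'never' agents:       ", never_count(noise_agents))
print("disposition model 'never' agents: ", never_count(disposition_agents))
```

If all a simulation matches is the aggregate rate, the two models are indistinguishable; it is only when you ask about consistent individual behavior that the noise model falls apart.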
