________________________________________________________________________

Every intelligence is unique, an island -- but one that relies on doing most of its information storage off-shore. Mind-reading is impossible, but the idea that the mind exists outside the body is not far-fetched in the least.

To: editors@sciam.com
From: Carl Lumma
Subject: thinking machines
Date: 9/19/99 [revised 12/99]

Kudos to Raymond Kurzweil on his article in the latest "Bionic Future" for delivering the important news: the majority of us will live to see machines more intelligent than the majority of us put together. The spirit of the article (and of the entire journal) is another matter, and raises important questions I will not take up here. I will say that after some thought I am left asking: why use an axe to treat symptoms when a scalpel could cure the disease?

Kurzweil's article makes its central argument conclusively: the creation of intelligent machines is not a question of theory but a question of engineering. It seems obvious that the universe allows, even selects for, intelligence when conditions are right, and the right conditions are not hard to come by in the scheme of things.

The article does involve a non sequitur, however. The author states that artificial minds will be far better at exchanging knowledge than human ones, but from what does this follow? He even has humans downloading knowledge learned by artificial minds. It is a celebrated result that information content is audience-specific, and in this case the audience is defined (at least) by a particular set of sense and action organs. Significant information can only be communicated when these are significantly alike -- consider the failure of expert systems to deliver intelligence. Humans will never learn to swim from robot fish. Even if two intelligent systems shared identical sense and action organs, there is no reason to believe they would store information in the same way. Defining "audience" may also require a complete history of the particular sensory data a system has experienced in its life -- knowledge of the myriad "frozen accidents" that shaped its evolution.

The author also discusses with great enthusiasm the related task of copying a mind. He used the idea in a brilliant dialog in _The Age of Intelligent Machines_ to reach a paradox, assuming only that the mind results from a machine like a lever. Is it possible that the author's namesake in the dialog was correct -- that the mind is NOT a machine _like a lever_? I believe that minds exhibit sensitivity to initial conditions, such that copying a brain by duplicating the state of every neuron would be insufficient to reproduce its behavior after some small number of cycles. Assume that the original brain could be halted and observed, that observation wouldn't disturb it in any way, that an exact reproduction could be made, and that both brains could be restarted at exactly the same time. The outputs of the two brains would still differ after a few cycles, since they would be receiving different sensory data. I don't believe intelligent behavior is possible without control over sensory organs, and a pair of identical organs cannot occupy the same point in space, nor can a shared sensory organ respond to two different control instructions simultaneously. Strangely enough, it may not help if the original brain happens to be simulated on a Turing machine.
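A toy numerical sketch of the kind of divergence I have in mind (the logistic map standing in for a brain, and the one-part-in-a-billion input difference, are illustrative assumptions, not a model of neurons): two exact copies started from the same state, but receiving sensory data that differs by an amount far below any plausible measurement precision, stop agreeing within a few dozen update cycles.

    # Sketch only: a chaotic map stands in for a "brain"; the map and the
    # tiny input difference are assumptions chosen for illustration.
    def step(x, sense):
        # chaotic update plus a small contribution from a sensory input
        return 3.9 * x * (1.0 - x) + sense

    original, copy = 0.5, 0.5              # exact duplicate of the initial state
    for cycle in range(1, 101):
        original = step(original, 1.0e-9)  # the two systems cannot receive
        copy     = step(copy,     2.0e-9)  # precisely the same sensory data
        if abs(original - copy) > 0.1:     # outputs no longer agree
            print("behavior diverges by cycle", cycle)
            break

The particular map is beside the point; what matters is that a per-cycle difference this small is enough to separate the two copies almost immediately.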
The requirement of intelligence imposes a certain time scale on things -- sensitivity to initial conditions seems more likely an important consequence of intelligence than an artifact of its implementation on organic hardware. And related to the necessity of control over sense organs is the idea that intelligent machines rely on storing information externally. So even the discrete workings of a Turing machine may be quite abstracted from anything in particular when they drive intelligent behavior.

-Carl