Steve,
You said, "The burden of proof is on you -- to provide support for the existence of a brand new entity..." But I'm not saying such a thing exists now. I didn't mean to imply that you're saying AI exists now.
If so, if I had meant to imply that, then you would be right to say that I have you "all arguing in an area that doesn't even apply." However, that's not my issue with what you've been saying. Later on in this post, I'll quote John R. Searle and make several comments that hopefully clear-up this misunderstanding between us.
What has been said is that no one knows of a reason why volition would be limited to man for all time. First, let me be clear that whenever I say "volition" or "human intelligence" or "rationality" or "reason" or "free will" -- I'm talking about a specified human faculty (even if -- or until -- a new entity gets discovered that possesses these things). An argument about volition impacts all these other things (because reality is connected like that).
The easiest integrated connection is between reason and rationality: you can't have one without the other. It takes a slightly greater scope of integration to understand that you can't have reason without volition -- but it's true nonetheless. So an argument about one extends to the others (because of reality's connections). Let me know if you need me to prove that explicitly and formally.
If someone in the time of Leonardo da Vinci had argued with him that man would never be able to fly, he could have replied that birds show it is possible for a heavier-than-air entity to do so, and that with enough time it might be possible to find a way for man to fly. No specific method is being asserted. Nor is the existence of some entity being proclaimed. There's a book called 'The Experts Speak' and it's about "expertology." I highly recommend it. It's a list of debunked quotes by experts, usually experts telling us what is ... and will be ... forever impossible.
A telling example happened in 1957 when the business books editor of the publisher, Prentice Hall, turned down a book on computers because he had it "on the highest authority that data processing is a fad and won't last out the year."
Another example happened in 1981 when Bill Gates reportedly said that "640K [kilobytes] ought to be enough for anybody."
Predicting the future is hard, but it is not ... impossible.
Sound like a rash thing to say? Imagine if it were impossible to predict the future. Imagine if things could act in contradiction to their identities. That'd be chaos and worldwide destruction. So let's take that option off the table (the absurd notion that you can't predict the future simply by understanding the identity of acting entities).
In order to profit from failed predictions, and get into a position where you can predict the future like a medium or something -- you've got to understand why failed predictions failed. Some of your "it might be possible" argument has hinged on the historical failings of predictions. The upshot is that we need to practice more intellectual humility and either:
(1) suspend judgment about predicting the future (because of noted historical follies), or
(2) understand the follies as indictments not of the ability to see the future, but of wrong thinking about possibility itself.
I choose the latter. You use the example of human flight, which shows up on p. 255 of the Experts book. Here is a relevant entry:
Prof. Le Conte at the University of California, 1888
Put these three indisputable facts together:
1. There is a low limit of weight, certainly not much beyond fifty pounds, beyond which it is impossible for an animal to fly. Nature has reached this limit, and with her utmost effort has failed to pass it.
2. The animal machine is far more effective than any we can hope to make; therefore the limit of the weight of a successful flying machine can not be more than fifty pounds.
3. The weight of any machine constructed for flying, including fuel and engineer, can not be less than three or four hundred pounds. Is it not demonstrated that a true flying machine, self-raising, self-sustaining, self-propelling, is physically impossible?
Now, Joseph Le Conte was wrong about predicting the future. There are two ways to react to that:
(1) decry the idea of predicting the future, or
(2) understand why dumb predictions like his often, if not always, fail.
I choose the latter. In the case of human flight, the reason Le Conte was wrong isn't that he lacked supernatural foresight -- it's that he didn't think straight. This is true of most college professors, and especially true of those in California.
His first premise says that no bird weighs more than fifty pounds, so no flight will ever occur at more than fifty pounds. The assumption is that natural selection would have created heavier birds if it were possible -- along with the slightly sillier assumption that 300-lb pterodactyls never existed (i.e., that their bones, discovered a hundred years before Le Conte spoke, must have been a practical joke from God, or something).
So premise one is stupid. It involves wrong reasoning. For instance, why wouldn't natural selection make birds lighter (rather than heavier)? If you fly, isn't lighter better? This "common sense" reasoning entirely escaped Le Conte. But, as we should know by now, college professors aren't paid to think straight (i.e., to exercise common sense).
His second premise is also false (go figure!), due to the use of the word "effective" rather than "efficient." A few hundred million years of evolution is likely to build efficiency, but effectiveness is easy (for humans) to create. We've got jets that are far more effective at flying than birds are (though not as efficient). Anyway, Le Conte was wrong again, for reasons that someone like me can see upon pure analysis -- rather than having to wait to see the future pan out, one way or the other.
I could have predicted his failure before it even occurred.
So what does this say about my ability to predict whether AI is impossible? Nothing. You can't point to the dumb thinking of some (most?) experts and say to me: "See! Experts have been wrong. Therefore, you couldn't possibly be able to say it's impossible for non-human things to have human-like intelligence -- or to have free will." That argument would be like folks blaming the free market for our current crisis (using "folly" as a floating abstraction, grabbing it from the morally wrong thing we call welfare, and floating it over to the morally right thing we call the free market).
The wrongness of past predictions (properly understood, as I just showed) has nothing to do with my ability to predict the future. An argument against my argument against AI has to do more than just parade examples of predictive folly before the jury. In each and every case, I (if my understanding serves me) will be able to show why the predictions failed -- and how I could have known they would fail, even before the time came for them to actually fail.
Let me show you how I can predict the future. Grab a pair of dice. Roll them. I predict that you won't get a "thirteen." It's a veridical generalization about what is (and is not) possible -- considering the nature or identity of dice. It works for things more complicated than dice, too. It even works for humans.
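If a demonstration helps, here's a minimal sketch (in Python -- my own illustration, not anything from this thread) that simply enumerates every possible outcome of two standard six-sided dice and confirms that a "thirteen" is ruled out by the dice's identity, no fortune-telling required:

    import itertools

    # Enumerate every possible outcome of rolling two standard six-sided dice.
    sums = {a + b for a, b in itertools.product(range(1, 7), repeat=2)}

    print(sorted(sums))  # [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
    print(13 in sums)    # False -- "thirteen" is impossible, given what dice are

The prediction doesn't come from peering into the future; it falls straight out of knowing the identity of the acting entities.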
So, is AI impossible?
Well, I'll stick my neck out and I'll say this much: AI is "as possible" as getting a computer to lactate or perform photosynthesis. If you think that, someday, we will have computers that lactate, or computers that perform photosynthesis -- then it's not contradictory to also think that, someday, we'll have AI. Here is John Searle (Minds, Brains and Science, 31) on the matter:
The reason that no computer program can ever be a mind is simply that a computer program is only syntactical, and minds are more than syntactical. Minds are semantical, in the sense that they have more than a formal structure, they have a content.
[famous example of the Chinese Room thought experiment]
Now the point of the story is simply this: by virtue of implementing a formal computer program from the point of view of an outside observer, you behave exactly as if you understood Chinese, but all the same you don't understand a word of Chinese. ... Understanding a language, or indeed, having mental states at all, involves more than just having a bunch of formal symbols. It involves having an interpretation, or a meaning attached to those symbols.
And (Minds, Brains and Programs, 86):
Unless you believe that the mind is separable from the brain both conceptually and empirically -- dualism in a strong form -- you cannot hope to reproduce the mental by writing and running programs since programs must be independent of brains or any other particular forms of instantiation. ...
'Could a machine think?' My own view is that only a machine could think, and indeed only very special kinds of machines, namely brains and machines that had the same causal powers as brains. And that is the main reason that strong AI has had little to tell us about thinking, since it has nothing to tell us about machines. By its own definition, it is about programs, and programs are not machines. Whatever else intentionality is, it is a biological phenomenon, and it is as likely to be causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena.
The Chinese Room thought experiment shows that you can get the empty "form" of understanding something -- something like the Chinese language -- without having the meaningful "content" of that understanding. Another way to say this is that you can set things up to make it look like you understand ... when you don't actually understand. I think this type of thing plagues animal cognition research, but that is another point altogether.
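To make the "form without content" point concrete, here's a toy sketch (Python again; the phrases and the rule book are my own invention, purely illustrative): a lookup table that answers Chinese questions while nothing in the system understands a word of Chinese.

    # A toy "Chinese Room": pure symbol manipulation, zero understanding.
    # The rule book pairs input symbols with output symbols -- syntax only.
    RULE_BOOK = {
        "你好吗?": "我很好, 谢谢。",    # "How are you?" -> "I'm fine, thanks."
        "你懂中文吗?": "当然懂。",      # "Do you understand Chinese?" -> "Of course."
    }

    def chinese_room(symbols):
        # Match the incoming symbols against the rule book. No meaning is
        # attached to any symbol at any point in the process.
        return RULE_BOOK.get(symbols, "请再说一遍。")  # fallback: "Please say that again."

    print(chinese_room("你懂中文吗?"))  # prints "当然懂。" -- looks like understanding; isn't

From the outside, the output is exactly what an understander would produce; on the inside, there is only Searle's point: syntax, not semantics.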
To be fair, Searle only says that it's "likely" that intelligence depends -- depends for its very genesis -- on neurons, synapses, and neurotransmitters (as found inside human brains). He doesn't claim to know that it depends on that specific biochemistry -- just that there's no good reason to doubt that it does, wholesale.
Ed
Edit: When I asked you to grab a pair of dice and see if you can roll a "13" with them -- I meant "normal" dice -- lest Ted come in here telling us about some wacky dice that only roll prime numbers, or something like that (as he did in another thread with his note about some crazy coins that can never come to rest on their edges). :-)
(Edited by Ed Thompson on 11/15, 4:45pm)