Rebirth of Reason

Post 60

Sunday, November 16, 2008 - 7:02am
Steve,

Okay. Let's outline our common ground and differences.

Your line of argument, from post 33 to post 59, is that -- in the future -- there might be non-human things with "minds" (things that can choose to focus and reason; or that can choose not to focus and reason -- at any given instant).

My line of argument is that "computer program intelligence" is impossible and that your kind of statement ought to be integrated into a continuum of probabilities for things -- i.e., that it should be compared to the probability of new things that lactate or new things that perform photosynthesis. This, I argue, grounds the claims and steers them away from being completely arbitrary. Otherwise, you might as well make the bold conjecture that money will grow on trees, that up is down, and that you have got to run all day just to stay in the same place -- because, once you're arbitrary (totally separated from evidence), it's deuces wild, and anything goes ... literally.

As it relates to AI

Your line of argument doesn't really relate to AI, because it's too broad (it doesn't hold context). You're saying it might be possible, but you're not saying how -- whereas the enterprise of AI is more specific: the artificial creation of intelligence by man. For instance, if aliens landed tomorrow with the faculties of reason and volition, your line of reasoning would be vindicated -- but it would be irrelevant to the enterprise of AI, which doesn't deal with finding intelligent life, but with creating intelligent life (or dead things that think).

There are two main camps in relation to AI, the symbol-manipulators and the connectionists. The symbol-manipulator camp thinks thinking is analogous to talking or speaking in sentences in our minds. It's this camp that's completely debunked by Searle (see above). The other camp, the connectionists, think that in order to re-create the intelligence in man, you have got to build "machines that had the same causal powers as brains" (just like Searle said you would have to do).
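(To make the symbol-manipulation view concrete, here is a toy, ELIZA-style sketch of my own devising -- not any specific program Searle discussed, and every rule in it is hypothetical. It produces replies by pattern-matching on strings alone; nothing in it represents what any word means:)

```python
import re

# Purely syntactic rewrite rules: each maps an input pattern to a response
# template. The program manipulates character strings; it has no model of
# what any of the words mean.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i think (.*)", re.I), "What makes you think {0}?"),
    (re.compile(r".*\bmachines?\b", re.I), "Do machines concern you?"),
]

def respond(sentence: str) -> str:
    """Reply by string manipulation alone -- syntax with no semantics."""
    for pattern, template in RULES:
        match = pattern.match(sentence.strip())
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when no rule fires

print(respond("I am worried about AI"))  # Why do you say you are worried about AI?
print(respond("Machines don't have conversations"))  # Do machines concern you?
```

However many rules you pile on, it is all syntax -- which is exactly the gap Searle's Chinese Room argument targets.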

To your credit, the connectionists might make the bold move you postulate of connecting human brains to machines and getting a machine that acts intelligently. But here is the rub on that:

That intelligence wouldn't be truly artificial (created by man).

Ed

(Edited by Ed Thompson on 11/16, 7:05am)


Sanction: 4, No Sanction: 0
Post 61

Sunday, November 16, 2008 - 11:24am
Ed,

You correctly noticed that I was discussing non-human volition which is much broader than AI. You also noticed that I was discussing this as a possibility, not a certainty, and that I did not claim a specific method. The problem is that you faulted me for this, saying, "...because it's too broad (it doesn't hold context). You're saying it might be possible, but you're not saying how..."

Ed, you keep wanting me to parrot some other argument. I am not arguing in favor of AI or any other specific mechanism, because there is no evidence that I know of that justifies that argument. I am not arguing for certainty that volition will arise in some non-human context, because that would require an understanding of the properties of volition that we do not now possess.

My argument is the strongest that I believe can be logically supported at this time: because we humans have certain properties, and those properties somehow arise out of what we are, it might be possible in the future to duplicate any of those properties in a non-human form.

If you attempt to shift this to a specific mechanism that you can knock down, it is no longer my argument. If you try to say that this is equivalent to "...up is down, and that you have got to run all day, just to stay in the same place..." you just aren't taking me seriously, because those are just logical contradictions and not worthy of you as rebuttals.

It is the nature of what we currently know, our body of knowledge, that constrains what we can say is true - including when we talk of the future. The mistakes we can make are to predict that something will happen when we don't have evidence that supports it, or to claim that something can never happen when we have no solid reasoning or evidence to justify that. When there is very little evidence for and no evidence against, and it is a subject that is still mostly unknown, one can only talk of possibilities. Most of what one is asserting, in that case, is about the current state of knowledge. My statement is a recognition of the following facts:
  • Human creativity is inexhaustible,
  • There is no evidence that would support certainty that non-human volition will one day exist,
  • There is no evidence that volition, by its nature, cannot exist except in humans,
  • There is no evidence that any specific path or mechanism will lead to success.
I maintain that given those facts, you have to take the same position: At some point in the future it is possible that non-human volition will exist. That isn't a very strong claim. It doesn't say very much. But it does say all that is warranted.




Post 62

Sunday, November 16, 2008 - 2:48pm
Steve,

Here's a belated response to your original criticism (post 33) of my disposition to disbelieve in AI (post 31):

At this point no one can even conceive of software or any machine that actually envisions alternatives, some that do not exist, and then chooses. But what makes it impossible?
Searle showed what it is that makes written software "intelligence" impossible -- the category mistake of thinking enough syntax will get you to semantics.

I agree with Searle that that's what makes intelligent software impossible. Most of the rest of our argument is water under that bridge.

Ed


Post 63

Sunday, November 16, 2008 - 3:38pm
Ed,

In that first post of mine that you refer to, I started by saying, "...nothing bars some non-human, at some time in the future, of demonstrating those exact characteristics."

Please note the "non-humans" - NOT JUST software. Then in the sentence of mine that you quote, I include "machine" and NOT JUST software - and that was a sentence responding to Jay's mention of software that wrote software.

You are still not addressing my argument - you are making it appear that I am claiming volition will be achieved by software. FALSE. I am claiming that volition might arise in some non-human form in the future.

No amount of sophisticated argumentation or quoting of this or that philosopher amounts to a hill of beans if you don't address the argument actually made!

Look at the argument made up of those bullet points and the sentences following them in my previous post.

Post 64

Sunday, November 16, 2008 - 5:02pm
Steve,

I addressed your argument in post 60. I re-stated your argument clearly. You even said so. You asked for some impossibility and I provided some. Yet you seem frustrated.

If I understand your point (and I do and I showed so) and if I made a clear point of my own -- then what else do you want from this debate?

Ed


Post 65

Sunday, November 16, 2008 - 7:04pm
An unspoken argument here seems to be something that might be worded like this:

Are the essential characteristics of humans so special that humans will, someday, make themselves not special anymore -- by creating non-humans "demonstrating those exact characteristics"? Are we so great that we could make ourselves small, by creating a flood of equal, or maybe even superior, entities? Entities either indistinguishable from humans, or our perpetual mentors (if they should agree to help us, rather than to eat us, after we make them smarter than we are).

Just some food for thought.

:-)

Ed

(Edited by Ed Thompson on 11/16, 7:06pm)


Post 66

Sunday, November 16, 2008 - 7:16pm
Ah yes, the usual sci-fi gambits... either we conjure a Forbin Project, or we get invaded by aliens seeking our deprivation... except perhaps the other intelligent life be one in which there is no conflict among rational beings - meaning, say, silicon life, where there is no treasuring another's world or property... the diversity one...

Sanction: 4, No Sanction: 0
Post 67

Sunday, November 16, 2008 - 7:51pm
Ed,

I guess it is post #60 you haven't answered yet. It does feel like you aren't answering the argument I'm making. We can just drop it unless you WANT to go on - it isn't anything I'm excited about. I just feel that the people who claim certainty that there will be volition outside of humans AND those who claim certainty that there won't be, are both taking a position that reason won't support.
----------

As to your comment in post #65, I think it will be a serious concern, really. But I won't be here if it comes to pass - that's too far in the future to fit in my lifetime. Those who would be alive then will need to hope that much higher levels of intelligence, residing in non-humans, include much higher levels of ethics and tolerance than we have shown :-)

Post 68

Sunday, November 16, 2008 - 10:39pm
Steve,

 It does feel like you aren't answering the argument I'm making.
I agree with the argument you're making about unbounded creativity, etc. That's my answer. I agree.

However, I still have a belief -- a mental disposition -- that we won't ever be creating intelligence from non-biological material, i.e., a totally artificial intelligence.

Ed



Post 69

Sunday, November 16, 2008 - 11:43pm
Ed, nothing wrong with mental dispositions. Mine tells me that if we create artificial intelligence it will have a biological basis... then down the road, who knows, because our current integrated circuitry we now use in electronics will be replaced by something... organic switches, molecular memory, nano-engineered circuitry, who knows? It isn't something I think about very much.

Post 70

Monday, November 17, 2008 - 6:40am
I just feel that the people who claim certainty that there will be volition outside of humans AND those who claim certainty that there won't be, are both taking a position that reason won't support.

This is the crux of the argument.

However, I still have a belief -- a mental disposition -- that we won't ever be creating intelligence from non-biological material, i.e., a totally artificial intelligence....

The only way that that can be illogical is for you to say that you are philosophically certain while, at the same time, not knowing...


By mental disposition are we saying "philosophically certain" (from your earlier post to Linda)?

jt



Post 71

Monday, November 17, 2008 - 1:14pm
JT,

A mental disposition is just a belief. Philosophically certain things are things that you not only know -- but can show how you know (rather than merely justifiably believe).

Ed


Post 72

Monday, November 17, 2008 - 4:33pm
... I may be justified in my belief that we will not ever create truly artificial intelligence (because that belief fits the facts the best) -- but I cannot claim philosophical certainty about it until I understand all of the limiting mechanics; at which time it would cease to be a mere belief about AI and become knowledge about AI.

For instance, I am philosophically certain that you won't get 13 from rolling two normal dice, because I understand the mechanics of dice rolling well enough to know the boundary between possibility and impossibility. Knowing where this boundary is, I can make a clear demarcation not only of what is possible and what is impossible -- but of what will always be possible and what will always be impossible.
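(A minimal sketch of my own, not from the thread, to make the dice boundary concrete: exhaustively enumerating all 36 outcomes of two standard dice shows the sums run from 2 through 12, so 13 is impossible in principle, not merely improbable.)

```python
from itertools import product

# Enumerate every outcome of rolling two standard six-sided dice.
sums = {a + b for a, b in product(range(1, 7), repeat=2)}

print(sorted(sums))  # the achievable sums: 2 through 12
print(13 in sums)    # False -- 13 lies outside the boundary of possibility
```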

That's what philosophical certainty affords.

Ed



Post 73

Monday, November 17, 2008 - 6:05pm
I tend to use the words a little differently. I won't call something a belief of mine unless I am willing to say that it is probably true. It doesn't have to be certain, but there needs to be a preponderance of evidence.

What I would call a "mental disposition" is more like a leaning in one direction rather than the other, not supported by enough evidence to call it a belief, but also not running counter to evidence - it can't fly in the face of reason. It might have a root in the subconscious, it might have an emotional element, but if I allow it to continue on as a mental disposition, I am saying it isn't unreasonable to do so.

Post 74

Tuesday, November 18, 2008 - 8:36am
Ed,

So, in essence, you are just saying that you don't know that AI is either impossible or possible?

jt

Post 75

Tuesday, November 18, 2008 - 9:33am
JT,

Right. I only know that certain kinds of AI -- such as the language-based, symbol-manipulation of a computer program -- are impossible.

Ed


Post 76

Tuesday, November 18, 2008 - 6:59pm
Ed,

I could probably agree with that, not that I think anyone's waiting for my agreement. For starters, it'd be about the hardest and most wasteful (excessive code) way to accomplish it.

jt

Post 77

Wednesday, November 19, 2008 - 5:42am
Here's me finding out I'm talking to a Turing Machine:

[phone rings]

Ed: Hello.

Turing Machine: Hi, sir. I'm calling you to see if you are interested in volunteering for Pres. Obama's new civilian national security force.

Ed: How do I know you're not a machine?

Turing Machine: Huh? Don't be silly. Machines don't have conversations!

Ed: [thinking: mmmmm, this guy's good] Okay. Fine. Tell me about what I'd have to do as a volunteer.

Turing Machine: Well, for starters, you would catalog dissenting public opinions in a computer database ...

Ed: You like "computer databases" don't you? You think they're hot.

Turing Machine: Wuh? Listen, if you still think I'm a machine, then why are we having this conversation?

Ed: Good point.

Turing Machine: So, as I was saying about the ...

Ed: How much do you get paid for this?

Turing Machine: Sir, it's policy not to discuss salary, so can we please ...

Ed: Do you like your job?

Turing Machine: Sir, honestly, I feel that your questions are inappropriate to the discussion at hand.

Ed: Answer one question and I'll volunteer for Obama's civilian stormtroopers.

Turing Machine: Well, okay, I think I could answer just one -- if it will bring you on board. Go ahead.

Ed: Why would I ask if you like your job or not?

Turing Machine: [Does not compute ... Does not compute ... question involves truly understanding another's motives ... question involves truly understanding another's motives]
The two main possible reasons why I would ask "Mr. Turing" if "he" liked his job are intuitive leaps that involve my voice inflection on the phone during key words above. For instance, if I said "Obama's civilian national security program" sneeringly, then I likely think libertarianly. However, if my voice inflection reflected disgust right off the bat, then the reason I'd ask him if he liked his job would stem from my disdain for telemarketing in general.

Ed


Post 78

Wednesday, November 19, 2008 - 9:19am
Why did I just get a mental image of a hay truck struck by a speeding freight train?

Post 79

Wednesday, November 19, 2008 - 10:54am
Mike,

Why did I just get a mental image of a hay truck struck by a speeding freight train?
Right! THAT is the kind of question you could ask a Turing machine (to figure out if it was human or not).

You totally get it, now (how to beat Turing machines).

:-)

You bring the conversation over to what it feels like to be a human (with the common grievances, colloquial stuff, etc), but you do it tongue-in-cheek just like you did above ... and the thing will start popping its circuits out trying to keep the bluff ongoing.

Ed


