
Post 0

Thursday, March 23, 2006 - 6:12am
Thanks for the review! Definitely sounds like Kurzweil's exuberant outlook is continuing, and that this book too will be fun to read. It sounds like he's touting things that are probably technically possible - but trying to extrapolate an exponential trend to determine a timeline is never a safe bet.

I really like your comments on the singularity and Objectivism. I think the problem of knowledge transfer being so cheap is one we're already seeing in smaller form now, with intellectual property concepts needing to evolve as information becomes more and more clearly a non-rival good. A non-biological entity being conscious? - Absolutely! (and the bloody system of you, the book and the room knows Chinese too! :) )

The questions about the virtue of productivity, political activism, and virtual reality are ones I've not encountered before, and they are excellent. Something to think about.


Sanction: 5, No Sanction: 0
Post 1

Thursday, March 23, 2006 - 8:32am
There is a vital economic imperative to create more intelligent technology. [...] We will continue to build more powerful computational mechanisms because it creates enormous value. We will reverse-engineer the human brain not simply because it is our destiny, but because there is valuable information to be found there that will provide insights in building more intelligent (and more valuable) machines. We would have to repeal capitalism and every visage of economic competition to stop this progression.

-Ray Kurzweil, "The Law of Accelerating Returns," 2001


I like this guy already.

Post 2

Thursday, March 23, 2006 - 9:21am
Thanks for the excellent review. I have been an avid reader of Kurzweil for some time, and I was interested in extropian and transhumanist ideas before I was interested in Objectivism. To me these philosophies are not very different; although extropianism and transhumanism lack philosophical direction, they and Objectivism all embrace individual life as the core value. Some other people have written on uniting these separate ideologies, and I have mentioned this in some previous threads. The following article is a decent one on the subject.

The Objectivist-Extropian Synthesis
http://www.geocities.com/rational_argumentator/OEsynthesis.html

Also I recommend checking out Ray Kurzweil's online essay

"The Law of Accelerating Returns"
http://www.kurzweilai.net/articles/art0134.html?printable=1

In regard to your review and your questions:

If the primary choice of every living thing is to live or not, what is the primary choice if man eliminates death?

That implies that the only purpose of life is to struggle against death, and that without the threat of death life has no meaning. The struggle against death is the first purpose of life; once it is complete, we can move on to other things. The primary choice then becomes to progress or regress.

If all knowledge can be transferred and learned with no effort, what happens to the virtue of gaining knowledge?

The information still has to be processed and an attempt made to understand it, just as it is today.  You can have perfect memory of something but that does not mean you understand it and integrate it into your life and actions completely.

Michael F Dickey


Post 3

Thursday, March 23, 2006 - 5:21pm
* If the primary choice of every living thing is to live or not, what is the primary choice if man eliminates death?
There is no such thing as elimination of death. Life is self-sustaining and self-generating action. Critical parts can always fail, which means life can end. You could make copies of yourself, and you could change yourself, but then what are you sustaining? Death of what? New possibilities come with new definitions of the terms. What lives? What dies?

* If all of our material needs are eliminated or fulfilled for virtually free, what happens to the virtue of productivity?
Here is your false premise: that there will not be life forms as intelligent as you that make decisions resulting in your destruction, whether to their benefit or not. You will still have to compete with them. Things that are currently valuable to you, things that enable you to continue living now, will become less important; other things, like information, security, trust, and of course resources/control, will become more important.

* How important is political activism considering Kurzweil's predictions eliminate the need for most social welfare programs?
Right; it's not worth putting much time toward them compared to other things... so which other things are important?

* If all knowledge can be transferred and learned with no effort, what happens to the virtue of gaining knowledge?
Information can be transferred and learned with no effort, but it's not that one wants to learn tons of knowledge or tons of relationships; one wants to learn the ones that are both consistent with Reality and worth basing decisions that affect one's life on. "All knowledge" is misleading and false; instead, replace it with "communicate faster" and "process information faster".

* Would it be wrong for Objectivists to choose to live most of their lives in virtual reality? What if the virtual environment was the equivalent of Galt's Gulch?
Here is your false premise: that virtual reality is not a part of reality. "Virtual reality" is a misleading term; it should be called "simulation of Reality with another part of Reality". How much time should you spend in simulations? To what extent is it beneficial for you to do so?

* Can a non-biological entity be conscious?
What does "biological entity" mean? Do you mean life form? Of course a life form can be conscious. Can a program be a life form? Of course, I've made such life forms myself. Are they conscious? Of course they are, they just aren't very smart yet.


On nanobots: please don't forget that nano-sized things cannot be independently smart. By what neural net or processor will they sense input, process information, make decisions, and control output? Where is their power source? A "nanobot" would be more like designed cells and viruses -- ones that have very little capability, like current individual cells and viruses.

Post 4

Thursday, March 23, 2006 - 5:26pm
Michael:

You said:
The information still has to be processed and an attempt made to understand it, just as it is today. You can have perfect memory of something but that does not mean you understand it and integrate it into your life and actions completely.

Kurzweil addresses this point when considering arguments people make against computers being conscious entities. He uses the Chinese Room argument as an example, which Aaron linked to in his previous post.

Do you think what you said will always be the case? To cite another example Kurzweil used in his book, do you think Deep Blue understood chess better than Garry Kasparov? Probably not in the way we think about understanding, yet it was a better chess player than he was. Why can't this example be extended to other forms of knowledge?

For example, I work as a software engineer. Suppose in the future my brain has access to an external storage device that stores information on how to write computer software for specific problems. Also suppose that my brain has access to a processor and software that can process that information and output a piece of software for any specific problem I give it. Do I still need to understand how to write software in order to produce it?
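
(A toy Python sketch of that hypothetical, just for concreteness. The "external storage" here is only a lookup table of canned solutions, and the problem names are invented; the point is simply that working code can be produced and run without the person in the loop ever reading or understanding it.)

    # Hypothetical "external knowledge store": canned source code for named problems.
    KNOWN_SOLUTIONS = {
        "sort": "def solve(data):\n    return sorted(data)\n",
        "reverse": "def solve(data):\n    return list(reversed(data))\n",
    }

    def synthesize(problem):
        # The "engineer" only names the problem; the stored knowledge does the rest.
        namespace = {}
        exec(KNOWN_SOLUTIONS[problem], namespace)
        return namespace["solve"]

    solve = synthesize("sort")
    print(solve([3, 1, 2]))  # [1, 2, 3] -- produced without the caller writing or reading the code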

As someone who loves to learn new things, especially about writing software, it's almost depressing to think that one day we might be able to outsource most of our thinking to computers.


Post 5

Thursday, March 23, 2006 - 6:11pm
Dean:

Thanks for posting your thoughts. Although I always had a pretty positive outlook on man's future, I was really blown away by some of Kurzweil's predictions. I'm still trying to digest it all and think about how my beliefs today apply to future situations I haven't even thought about. Your comments were very helpful.



Post 6

Thursday, March 23, 2006 - 6:37pm
As someone who loves to learn new things, especially about writing software, it's almost depressing to think that one day we might be able to outsource most of our thinking to computers.
Are you having the external processor do all of the thinking, or are you determining what it considers good and bad? I'd think of it less as some external thing making decisions for me, and more as increasing my own processing and thinking abilities. Well, I guess it depends on the communication I/O rates between the external processors and my brain. If the I/O rate were low, I'd think of it more as another thing doing my thinking. If the I/O rate were high, then it would be much more like an extension of my brain. Hehe : ).
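
(A back-of-the-envelope illustration of that I/O point, in Python. Every number here is an assumption made up for illustration -- the size of a "thought" and the channel rates are not measurements.)

    # Assumed: moving one "thought" worth of state is about 1 MB (8e6 bits).
    thought_size_bits = 8e6

    # Assumed channel rates, purely illustrative.
    channels = {
        "typing/speech (~40 bit/s)": 40,
        "slow hypothetical BMI (~1 Mbit/s)": 1e6,
        "fast hypothetical BMI (~10 Gbit/s)": 1e10,
    }

    for name, rate in channels.items():
        seconds = thought_size_bits / rate
        print(f"{name}: {seconds:,.6f} s per 'thought'")

At the slow end the external processor feels like a separate thing you talk to; at the fast end the exchange is effectively instantaneous on the timescale of thought, which is the "extension to my brain" case.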

Post 7

Thursday, March 23, 2006 - 6:43pm
I've read two of Kurzweil's books: The Age of Spiritual Machines and this one. I've never been amazed, frightened, and hopeful all at once like when I've read his predictions of the future. His scenarios of the future are very reasonable; he seems to be the only thinker who dares to say where all our technological development is heading: human immortality, omniscience, and omnipotence within our lifetimes.

The only issue I have with Kurzweil is how he glosses over the issue of consciousness, in humans and hypothetically in computers. He takes for granted that level of computing power = level of intelligence = level of consciousness. Consciousness is a tough metaphysical nut to crack. How are we, as material beings, able to perceive our material nature? How are our thoughts transformed into actions? When the 8-ball ricochets off the cue ball, do they become "conscious" of each other in the same way as two people bumping into each other?

Another issue is creativity. Computers can be programmed to receive input. Computers can be programmed to react to input in certain ways to further certain goals. We've seen examples of inductive reasoning in computers. (See e.g. 20q.net. I played it using "Objectivism", and the closest it came was "a philosophy" and "a science".) But can computers create their own "input" and "programming" in a meaningful way? Can computers write their own source code to become something completely different from what they were originally meant to be, in the same way that humans can shape their philosophy and thinking?
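
(The mechanical half of that question is easy to demonstrate -- here is a Python toy that composes and runs source code it did not start with. Whether a machine could do this meaningfully, toward goals of its own choosing, is the part the question is really about; this toy just picks an operator at random.)

    import random

    OPS = ["+", "-", "*"]

    def write_new_function():
        # The program composes source text for a function it did not originally contain...
        op = random.choice(OPS)
        source = f"def evolved(x, y):\n    return x {op} y\n"
        namespace = {}
        exec(source, namespace)  # ...and then runs it.
        return source, namespace["evolved"]

    source, evolved = write_new_function()
    print(source)
    print(evolved(6, 7))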

Kurzweil's trump card in the consciousness debate is the potential to reverse engineer the human brain. He argues that if we copy the human brain, we copy consciousness. But this approach begs the question by assuming that the brain alone is the seat of human consciousness, and that it doesn't need some supernatural "soul" to jumpstart consciousness. Kurzweil alludes to this possibility himself, citing references that the human brain is actually capable of quantum computing. Commander Data and Robbie the Robot may be harder to build than we think.

However, a merger scenario seems just as likely as independent strong AI, with computers expanding human intelligence to the point where we have all the brainpower of a computer. With calculators and internet databases constantly within our reach, to some extent we're there already.

Post 8

Friday, March 24, 2006 - 5:23am
At the risk of sounding way too detached and sci-fi for a moment, I see an inevitable race taking place over the next 50 years:

If BMI (brain machine interface) advances more quickly than AI, then we'll migrate to having the brain-attached math coprocessor, integrated PDA, the wireless Google coprocessor - and generally move towards a cyborg-type 'computers enhancing humanity' scenario. Enhanced humans will survive even as AI then advances as they'll be an integral part of us. However, if AI advances faster than BMI, then humans at some point will be in direct competition with independent machines more intelligent than themselves, and will (eventually) lose.

I have a slight preference for the first scenario, where silicon meets neurons. It's most inspiring to follow news such as a neural interface to a joystick, a 1024-pixel camera connecting directly to the optic nerves, or a Cambridge researcher experimenting directly with BMI in his own arm.


Post 9

Friday, March 24, 2006 - 5:52am
Aaron:

You said:

However, if AI advances faster than BMI, then humans at some point will be in direct competition with independent machines more intelligent than themselves, and will (eventually) lose.


Would it then not be in our best interest to impede the progress of AI until we progress further along the BMI path? Not saying I think the answer is yes...just throwing it out there...

Post 10

Friday, March 24, 2006 - 9:56pm
Do you think the AI side is advancing quickly enough that we have to worry about impeding it? :)


Post 11

Saturday, March 25, 2006 - 10:43pm
Would it then not be in our best interest to impede the progress of AI until we progress further along the BMI path? Not saying I think the answer is yes...just throwing it out there...
I think there isn't much you can do to slow down or speed up the development of one technology versus the other. I think AI technology (boy do I hate calling it "AI", as if it were artificial) will outpace neural-computer interface technology. But really all we need are large arrays of human-biocompatible neural probes : ). What are the material scientists and nano-manufacturing scientists doing?

I think our best bet is to live lives of integrity so that the new life forms won't feel threatened and want to kill us, but instead would prefer that we continue to live. Other options for increasing our intelligence include using designed viruses to change our genetics... which could lead to all sorts of possibilities. Surely the machines will still have some uses for some Homo sapiens (at least at the start), like we have uses for other life forms.

... It's going to be an exciting future. : )

Sanction: 5, No Sanction: 0
Post 12

Monday, March 27, 2006 - 1:49pm
I think this is all rather overly optimistic.  Has anyone produced a self-replicating machine yet?  Has anyone even come remotely close?  The answer is no. 

Are there any conscious artificial entities yet?  No - it is not clear that pure computing power will do it, though I don't discount it as possible.  I just think there are too many very large IFs that make such wild predictions almost impossible.  They do make for good books, though, and I prefer the optimism to the dystopian futures all the post-modernists keep predicting.


Post 13

Wednesday, August 5, 2009 - 12:25pm
I read the book and I guess I'm a little less optimistic about it than most. Here is my book review of The Singularity Is Near.

kudoz

Post 14

Monday, August 10, 2009 - 10:23am
How do we factor in our current model of not working the bugs out of what technology we already have before releasing it?

There is an immense pressure to let stuff fly when it has more or less reached an acceptable level of filth. What happens when we leverage that with ever more powerful concepts?

The gap between what we can do, and what we can do well, seems to grow with technological leveraging.

Look back 50 years as an example; TV broadcasting.

What was its potential, and what did it become?

Did it really help us become better educated, or merely better instructed?

Then, from where the broad raw materials of this brave new world?

Is the jury in yet on the In-ter-net? To me, Twitter is a sign that the future of even 'consonants' on the web is uncertain; why are they needed?

Remember when cell phones were voice only? Then text only? Then, static graphics? Then, animated graphics? Then, animated graphics with sounds? ...

Flash ahead to the day when cell phones directly stimulate the medulla oblongata. Forget about cognition, we will be sending sensation-o-grams. Whither cognition, and so... from where the next wave of creative effort?

F14, F15, F16, F18, F117... all primarily developed rapidly when 'PONG' was state of the art digital entertainment.

F22... 30 years of development, with all the advantages of the digital revolution. The F22 sure enough flies circles around those other airplanes...but after 30 years of development, it damned well better, and as well, in theory, it benefited from their experience.

But, did we really get that much slower when we got smarter and faster?

The history of the F22 gives me pause when considering Kurzweil's premise. He believes we will overcome the impedance mismatch between the wetbits and our evolving technology.

We haven't yet, and our pressure to plow ahead anyway -- as in 'Vista' -- certainly makes me wonder.

Augmented wet bits? No doubt. But please, if augmentation is going to result in more 30 year F22 development cycles, then it needs some more tuning.

Merging of wetbits? I hope there is the equivalent of 'Virtual PC' for this brave new world, at least until V10 is released. I can't imagine trusting the wetbits to the same tribe that delivers to us the collective fundamentally 'GOTO' based house of cards on the web...

Until then, there is Amazon.com's 'Mechanical Turk' as a practical example of 'convergence.' "Artificial Artificial Intelligence." A really clever idea...

And, what is it used for? To generate 2 cent ad click throughs for a penny. Like Broadcast TV, like ... everything on the web, it all ultimately ends up in the service of selling ever more 'ExTendz...'

Oh, well.

Post 15

Monday, August 10, 2009 - 11:35am
Kurzweil espouses the fatally flawed view of mind as floating abstraction, as if consciousness were the mere management of propositions, mechanically manipulated symbolic messages, but with no perceptual connection to the physical. That the signals of a computer have meaning is not a product of the computer, but a product of our minds which attribute meaning to the symbols.

Searle's Chinese room is not conscious; the computer as mere symbol manipulator is not conscious; we mistake the computer's mindless output for the intention of the computer programmer. Kurzweil thinks that because we have faster processors we will have superior minds. But processors do not perceive. They do not induce. Perception is not a fringe benefit; it is the fount of consciousness. And perception is a potential of the body. You and I can communicate concepts. That does not make our minds identical. We do not share perception.

We cannot achieve immortality by transferring our propositional beliefs to a computer. Your mind is not a mere set of statements. If you transfer only your concepts to a computer, you have not transferred your soul. You have merely achieved a computer simulation of the mind. And just as no computer simulation of a hurricane can water the crops or wet your scalp, no computer simulation of your soul can achieve immortality for you.


Post 16

Monday, August 10, 2009 - 11:40am
Fred,

Okay... wetbits? Your word? Heard somewhere? I have assumed that you are referring to (another euphemism) our "little grey cells", or perhaps just memes.

Actually, I think it's a catchy description if you're referring to brain cells, but I was unable to find anything resembling a definition when I tried a Google search (although I found some other things I'm sure you didn't mean).

If it is your word, it may well be a catchy addition to modern slang.

: )

jt

Post 17

Monday, August 10, 2009 - 1:20pm
Might be something from Rudy Rucker...

Post 18

Monday, August 10, 2009 - 4:39pm
Jay:

I wish I remembered, but my own wetbits could use a good dose of augmented nonvolatile backup...

I think I've been using this word for almost a decade to describe grey matter, but it's not my word. I just don't remember the source...

For all I know, it was one of Kurzweil's old books, Age of Spiritual Machines maybe. Probably. If it wasn't from that book, it should have been.

I don't think it was from his earlier book, 'Age of Intelligent Machines,' but might have been.

regards,
Fred

Post 19

Monday, August 10, 2009 - 5:16pm
Ted:

When I read Kurzweil, I just took it as a very intelligently argued hypothetical. His argument is something akin to "perception is reality." Meaning, at some physical level, we don't actually directly perceive the world around us; it is relayed to the core of our consciousness, whatever that is, via sensors and relay channels, nerves. That same consciousness, in a dream state, is stimulated via other pathways -- self-playback of real and imagined/manufactured stimuli.

We are already at the Kitty Hawk stage of direct stimulation of our brain, replacing natural sensors with artificial sensors. There has been some remarkable work where a grid of stimulating sensors was embedded into a sightless human, and it provided a rudimentary ability to detect rough distinctions between areas of light and dark. Kitty Hawk. An establishment of the basic lingua franca of artificial/human interfacing. Like with almost every other similar technology, what is left is... increases in resolution, the transition from b/w to color, and so on, and then we'll be able to directly stimulate the brain with sensory input.

Not the same as 'immortality', but imagine that technology advanced a century or less. Imagine all five senses directly stimulated... in high def. At some level of existence, would the 'I' inside of you experience life any less completely if the brain cell stimuli came from an alternative source, an augmented source, and not your 'wetbit' analog nerve pathways? Would life be like a dream state, and at that point, would life be perceived any differently if it was 'real' input from actual augmented sensor inputs, or manufactured/dream-state input from high-def multichannel recorded inputs? (A la The Matrix.)

If you had the choice of passing with your corporeal body at age 100, or living for another 100 years in 'high def' direct stimulation, what would 'you' choose? Could you be productive in that state, newly creative, 'forever young' for another 100 years, or would you choose to cease to be at all?

I think Ray's point is, not everyone will choose to expire, and those who choose to live on in that state, will, in some sense.

I also think Ray's point is as much about life inside the machine as it is about the machine inside of life. We want to believe we are more than process; we want to believe in the special nature of our soul. The real antagonism toward the existence that Ray hypothesizes about is what it might reveal about the machine-like nature of the processes already inside of us, as we are now.

His point is also, I think, a purely observational one. It doesn't matter what we think about this: if it can be done, it will be done -- which, all things considered, probably means it will be done, at some point.

Eyeglasses, false teeth, hearing aids, pacemakers... we have so far resisted the urge to anoint a Grand Poobah of Could Be.

Walk into any nursing home and poll the residents. Let them try being young again -- in their minds, if not body -- and ask them if they want the Big Dirt Nap, or another hundred years of XBOX 360000... Not everyone will tell you to go jump.

Will creative minds still be able to create in that state of augmented perception? Maybe not until V10.0. Talk about 'second life.' How soon might some opt for that existence over the organic alternative?

What % of our brains is devoted to preprocessing stimuli from our sensory organs, and what % is devoted to perceiving those stimuli? We may eventually corral the mind into a tiny box, with much lower maintenance -- or we are just as likely to find that it can't be done, that we are not simply processors processing and responding to stimuli from the world and our stored perceptions of the world.

Would 'knowing' that we were human Tivos drive us insane, or would we exist in a perpetual semi-dream state, aware but self-protected from insanity, in the same way we apparently are when we are dreaming nonsense?

Let me be the first to admit, "I don't know."

But that apparently doesn't stop mankind, in total, from wondering such things, and acting on those ideas. So, if it can be done, it will be done.

But, that is not to say, it can be done.
