Ed:
Will it be like that, I wonder? As in, folks will be given the choice?
Or will it be more like the world depicted in GATTACA, in which there were initially two tiers of folks? In GATTACA, the first tier were folks who could afford the selective DNA engineering for their offspring, who could then pass society's mandatory DNA testing/hurdles and live 'normal' lives; the second were the 'naturals,' some of whom faked their way through life by borrowing the DNA of engineered folks (the paraplegic Jude Law character).
In the case of artificially augmented humans, will it appear first as wildly expensive technology afforded by the few and clamored for as a right by the many? So initially, those who can afford it will be given the choice, while those who cannot will have no choice, until it becomes covered by Obamacare as an augmented-human right...
We already see some indication of that with these leading-edge technologies. Anyone here having themselves cryogenically frozen so they can be re-animated in the future? Is that a universal right paid for by MEDICARE? Given the current Sturm und Drang over universal access to -almost- every instance of medical technology known to mankind, I think this debate over 'fairness' only amplifies when 'augmented humanity' comes out of the labs and starts to hint at a kind of immortality for those who can afford it...
And as Kurzweil points out, at the point that a greater and greater percentage of the human body becomes augmented, then ... what does it mean to still be human? He also hypothesizes 'capturing'/recording all memories/contents of the human mind, to augment even memory. And by adding a sufficiently self-programmable (weightable, as in, choosing what to value) neural network, at some point this augmented human begins to approach a zero-wetbits condition, and yet duplicates in every external interaction (in the Turing sense) the non-augmented human being from which it was essentially cloned over time. Is that a kind of immortality? When exactly would be the recorded time of death? And what of the 'rights' of such a silicon form of ... augmented human?
And once evolved, readily replicated/duplicated.
If we could formulate (I don't think we are nearly that smart...yet) a sufficient set of master, as in high-level goal-seeking, neural nets (valuing survival/continuation/longevity/prevailing under a polite set of constraints tempered imperfectly by the self-interest of the Golden Rule...) which oversaw a completely self-programmable set of lower-level neural nets (self-adjusting weightings to maximize realization of the master nets' 'values'), would we for all intents and purposes have at least sufficiently modeled 'free will'?
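For the fun of it, that master/lower-net split can be sketched in a few lines of toy Python. Everything here is invented for illustration (the value weights, the toy environment, the names): a fixed "master" valuation rewards survival and continuation while penalizing harm to others (the Golden Rule constraint), and a lower-level "net" hill-climbs its own weights to maximize that valuation, i.e. the self-adjusting weightings described above.

```python
import random

def master_value(state):
    """Fixed high-level values: reward energy (survival) and time survived
    (continuation), tempered by a 'Golden Rule' penalty for harm to others.
    Coefficients are arbitrary placeholders."""
    energy, age, harm = state
    return 2.0 * energy + 1.0 * age - 3.0 * harm

def act(weights, inputs):
    """Lower-level 'net': a single linear layer producing an action strength."""
    return sum(w * x for w, x in zip(weights, inputs))

def environment(action):
    """Toy world: moderate actions gain energy; overly aggressive ones
    (action > 1) cause harm to others. Age ticks forward regardless."""
    energy = min(action, 1.0)
    harm = max(action - 1.0, 0.0)
    return (energy, 1.0, harm)

def self_program(weights, inputs, steps=500):
    """Random hill-climbing over the lower net's own weights, seeking to
    maximize the master net's valuation -- the 'self-adjusting weightings'
    of the hierarchy sketched in the post."""
    best = master_value(environment(act(weights, inputs)))
    for _ in range(steps):
        candidate = [w + random.gauss(0, 0.1) for w in weights]
        score = master_value(environment(act(candidate, inputs)))
        if score > best:
            weights, best = candidate, score
    return weights, best

random.seed(0)
weights, score = self_program([0.0, 0.0], inputs=[1.0, 0.5])
print(round(score, 2))  # approaches the maximum achievable value of 3.0
```

The point of the toy isn't the arithmetic; it's that the lower net never sees the values directly. It only ever sees a scalar verdict from the master net, and reshapes itself to please it, which is the sense in which the hierarchy "models" goal-seeking.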
Because in humans, for example, the value of self-preservation is not absolute; it can and does regularly fail (suicide), as do many of those high-level 'value'-seeking models. Is suicide a failure of a higher-level neural net, or a sign of humankind's ability to truly rewire our basic neural net/value-seeking circuitry, inserting a highest-value-seeking neural net that sometimes concludes, "my highest value is achieved by shutting down"?
A reptilian set of values would be simple: "Can it eat me? Can I eat it? Lather, rinse, repeat." What makes us human is that our value-seeking logic is more nuanced than a reptile's. That reptilian form of processing can be accommodated by our reptilian brain stem. The balance of our wetbits adds subtlety to those simple values, even as our reptilian value-seeking network keeps functioning. It is just ... overridden by higher-order neural nets.
That is, in most of us.
In order to really simulate humanity and all of its fringe failures, is it necessary to deliberately accommodate randomness, imperfection, and illogic in those highest-value-seeking neural nets?
And this is where this typical meandering post comes full circle; because we are not identical machines, and some of us really would value the red pill over the blue pill, and vice versa...
regards, Fred