Robert:
Agreed. Take a concrete example from visual processing. You could call it context modifying behavior, as in "When {Context} and {Event} then {Behavior}"...
So, depending on context, our neural nets adjust their weightings to bias towards certain outcomes. "Looking for patterns is generally useful; precise light intensity, not so much." But we can consciously override those weightings, reprogram our neural nets.
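To put that in code-ish terms, here's a toy Python sketch of both ideas -- a "When {Context} and {Event} then {Behavior}" rule, and a context-dependent weighting that a conscious override can reprogram. All the names and numbers are invented for illustration, of course:

    # "When {Context} and {Event} then {Behavior}" as a simple rule table.
    RULES = {
        ("savanna", "movement_in_grass"): "look_for_lion",
        ("windy_field", "movement_in_grass"): "ignore",
    }

    def react(context, event):
        return RULES.get((context, event), "observe")

    # Context biases the weighting of raw sensory features;
    # an explicit override "reprograms" the bias.
    CONTEXT_BIAS = {
        "savanna": {"pattern": 2.0, "light_intensity": 0.2},
    }

    def perceive(features, context, overrides=None):
        weights = dict(CONTEXT_BIAS.get(context, {}))
        if overrides:
            weights.update(overrides)
        return {name: signal * weights.get(name, 1.0)
                for name, signal in features.items()}

    print(react("savanna", "movement_in_grass"))     # look_for_lion
    print(perceive({"pattern": 0.6, "light_intensity": 0.6}, "savanna"))
    print(perceive({"pattern": 0.6}, "savanna", {"pattern": 0.1}))  # overridden

Crude, but it captures the shape: the same movement in the grass means different things in different contexts, and the weightings themselves are up for revision.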
For things like perception of sensory input, I think there are lots of accessible, studiable examples of that.
So, how far up the neural net do such contextual biases go? That is what is applicable, I think, to this thread's discussion of Vegetative Robots and Value, and how we choose to value what we value -- how we choose to weight the various neural net connections that feed into higher-weighted connections. Sensory inputs are just one class of inputs, important ones, because they inform us of what 'is.'
Those perceptions of reality -- coupled with our imaginings about alternatives -- are themselves inputs into higher-order analysis going on in our brains. The context for those hypotheticals, the context for what we call synthesis, is much more fluid, isn't it? That context is not bound by what is, and not even by what could be -- though what eventually is, is bound by what could be...
Imagine if someone's contextual bias were purely supernatural -- as in, not looking for the lion in the grass, but looking for the Magic Spirit in the Wind.
Strike that -- we don't have to imagine that happening; there are more examples than we need already...
We're clearly able to imagine contexts in which to bias our thoughts...
regards, Fred