Urban75 Home About Offline BrixtonBuzz Contact

The Unconscious Mind: Does it Exist?

Can we just forget about The Chinese Room? Or just allow it to mean, free-standing: symbolic rule-based computing has proven ineffective at creating general models of the world in the way that 'comes for free' in a human.
 
To take up Laptop's point about Searle, you're not going to find understanding in an individual neuron. If you isolate a termite from its siblings, it will not be able to build a nest.

I think the secret to the 'problem' of understanding is that we hold our particular experience of understanding in rather high regard. The termite colony understands how to construct extremely sophisticated buildings, but that understanding is held at the level of the colony rather than of the individual. To say that the colony doesn't really understand what it is doing is in the end equivalent to a zombie hypothesis of other humans' understanding.
To my mind the Searle thought-experiment shows that symbolic manipulation and understanding qua awareness or consciousness are orthogonal.

I don't think a termite mound is aware or conscious, even at the level of the colony. Whatever our consciousness is and however it arises, it is somewhat structured and unified. It's hard to imagine processes to do that sort of thing taking place in a termite mound.

Not that a failure of imagination is terrifically strong evidence!
 
I think there is a growing, if uneasy, awareness that symbolic rule based computing is conceptually inadequate as a basis for the generation of consciousness, although it may well (almost certainly does) shape and flavour our awareness.
 
I don't think a termite mound is aware or conscious, even at the level of the colony. Whatever our consciousness is and however it arises, it is somewhat structured and unified. It's hard to imagine processes to do that sort of thing taking place in a termite mound.
I agree, but possibly for slightly different reasons. I don't see understanding per se as key to consciousness. I see conscious experience as fundamentally the result of the fact that we take information from the world around (and within) us and use it to generate images – to run a model of what is going on (or we simply run our models on our own made-up data – in dreams, for instance). The results of this model are what we observe within ourselves, and they form the basis of the information that is laid down in memory (a degree of memory being key to the construction of the unified model in the first place).

If you accept the above, the question then becomes: How do we become observers of our own model? An approach to this question may be found, I think, in Julian Jaynes' ideas – looking at the bicameral mind's protoconsciousness, seeing how people such as schizophrenics operate after their sense of self has disappeared.
 
To my mind the Searle thought-experiment shows that symbolic manipulation and understanding qua awareness or consciousness are orthogonal.

I don't think a termite mound is aware or conscious, even at the level of the colony. Whatever our consciousness is and however it arises, it is somewhat structured and unified. It's hard to imagine processes to do that sort of thing taking place in a termite mound.

Not that a failure of imagination is terrifically strong evidence!

You're averse to the concept of collective consciousness/unconsciousness?
 
I think there is a growing, if uneasy, awareness that symbolic rule based computing is conceptually inadequate as a basis for the generation of consciousness, although it may well (almost certainly does) shape and flavour our awareness.

The way I see it, one of the confusing aspects of the issue is the way the things we observe in our consciousness feel like something. Traditionally there has been an explanatory gap as to why red *looks* red, rather than being just some dispassionate symbol based on light frequencies. I think the answer lies in the interaction between the types of models created in the neocortex and those of the memory/sensory hybrid systems found in phylogenetically older brain structures. The interplay between these systems becomes significantly easier to speculate about once you have a generalized model of cortical function. It really doesn't feel like a problem at all. (But then, I am a sucker for self-delusion.)
 
Can we just forget about The Chinese Room? Or just allow it to mean, free-standing: symbolic rule-based computing has proven ineffective at creating general models of the world in the way that 'comes for free' in a human.

I think all it shows is that words don't have meanings which correspond to a fixed data structure. It says nothing about AI, IMO, as it's still perfectly possible to use words without being able to give rigorously precise definitions of what you mean. That is, there are no logically ideal languages which can be used to mean anything.
 
(@ cesare) I would not say I have an emotional or intellectual aversion to the concept (allowing that one may be wrong even about one's own motivations!).

Nor would I put a great stress on the need to have an explanatory mechanism for phenomena before they are recognised. Evolution and continental drift were both recognised before we had an adequate explanation of how they work. One has first to collect the data, and then attempt to theorise. It would be profoundly unscientific to reject evidence for a phenomenon just because explanations are lacking.

But I don't recall anything from the writings of Jung, for example, that compels one to posit any collective unconscious. The idea might be useful in practice, but it seems to me that other theoretical explanations for the phenomena that interested Jung are possible.
 
I do however have a powerful and reasoned aversion to the idiot Hofstadter's notion that the manipulation of arbitrary symbols can (somehow!) generate consciousness; that an ordinary computer or even an abacus can be programmed to be conscious.

I find it comical that scientists and engineers who would otherwise pride themselves on their hardheadedness and scientific approach get suckered by the notion. It's a magical-symbolic theory of consciousness, one that is devoid of any reliance on any sort of underlying physical process.
 
Incidentally, don't worry about phildwyer's attempts to do philosophy. They are about as clever as an AI attempt would be. That is, just a shallow shuffling about of symbols without any apprehension of underlying meaning (hence his insistence on book learning and regurgitation, rather than thinking things through for himself).

Heh. You should try a bit of "book-learning" yourself Jonti. If you did, you'd discover that your ideas about consciousness are basic assumptions from thousands of years ago, and that lots of people have said interesting things about them since then. This would prevent you from stating the freaking obvious all the time and embarrassing your own arguments. Now I must get back to my "book-learning."
 
(@ cesare) I would not say I have an emotional or intellectual aversion to the concept (allowing that one may be wrong even about one's own motivations!).

Nor would I put a great stress on the need to have an explanatory mechanism for phenomena before they are recognised. Evolution and continental drift were both recognised before we had an adequate explanation of how they work. One has first to collect the data, and then attempt to theorise. It would be profoundly unscientific to reject evidence for a phenomenon just because explanations are lacking.

But I don't recall anything from the writings of Jung, for example, that compels one to posit any collective unconscious. The idea might be useful in practice, but it seems to me that other theoretical explanations for the phenomena that interested Jung are possible.

There was quite an interesting thread here but it petered out.
 
I find it comical that scientists and engineers who would otherwise pride themselves on their hardheadedness and scientific approach get suckered by the notion. It's a magical-symbolic theory of consciousness, one that is devoid of any reliance on any sort of underlying physical process.

I agree that you can't do it with standard symbolic computing. But I see no reason that you can't emulate the process that allows this consciousness using a regular computer. I also think it might be possible to simulate this in a canonical fashion, without having to resort to full physical simulations of neurotransmitter interactions, dendritic morphology, cytoarchitecture, etc.

But the process of actually creating the consciousness is still going to be a massive task. Mammals have a vast structure for learning, huge amounts of sensory input, and very effective region-to-region connectivity, hard-won through evolution. If you want your consciousness to really be like a human's, you're going to have to emulate the activity produced by subcortical regions like the amygdala in order to get a trained cortical model of emotions – a total nightmare. On top of this, you're going to have to spend at least as much time training the whole thing as it takes to raise a human to a functional level. I'm not sure it'd really be worth it.
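To make the "canonical fashion" point concrete: a neuron can be emulated at an abstract level that discards almost all the biophysics. The sketch below is a toy leaky integrate-and-fire model – a standard textbook abstraction, not anyone's proposal in this thread, and all the parameter values are purely illustrative.

```python
# Toy leaky integrate-and-fire neuron: a "canonical" abstraction that
# ignores neurotransmitter chemistry, dendritic morphology and
# cytoarchitecture entirely. Parameters are illustrative, not
# biologically calibrated.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Return the time steps at which the model neuron spikes."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest while integrating input.
        v += (-(v - v_rest) + i_in) * (dt / tau)
        if v >= v_threshold:  # threshold crossed: record a spike, reset
            spikes.append(t)
            v = v_reset
    return spikes

# A constant drive above threshold produces regular, repeated spiking.
spike_times = simulate_lif([1.5] * 200)
```

The point of the abstraction is the same as in the post above: whether such a canonical emulation could ever underpin consciousness, rather than merely reproduce spike statistics, is exactly what is in dispute.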
 
It's still an interesting and ongoing debate, one that recurs fairly often on these boards in different guises. And of course, one can emulate almost any system on a general purpose computer -- that's what they're for! To emulate is not to recreate, mind. A digital weather simulation will not make it rain inside the 'puter!

But to program a digital simulation one first has to understand the thing one is trying to emulate. Do we really understand what is going on during the experience of subjectivity (as distinct from during the processes of discrimination, categorisation, recognition, and so on)?

According to David Chalmers, here, the conceptual point is that the explanation of functions does not suffice for the explanation of experience. This basic conceptual point is not something that further neuroscientific investigation will affect.
 