
The Unconscious Mind: Does it Exist?

And you believe ideas to be supernatural? You believe the mind to be supernatural? You believe reason to be supernatural?

I think the mind arises out of processes in the brain. I don't know anything about philosophical notions about reason. I thought you were the one that said the mind came from outside the brain, that, to me, is supernatural.
 
It might be supernatural to you, but I don't think anyone else would describe the mind as a supernatural phenomenon. Do you find your self-declared ignorance of philosophy a handicap in discussing such issues? (This is not a rude question, for those who don't know, since there are indeed many people who do not find philosophy useful in understanding the mind).
 
I'm really not interested in anything before about the 20th century in regard to the mind. Post-1950s is preferable. I realise it is important historically, and that earlier thinkers' ideas on the subject helped create what we know now, but there are rather a lot of new things to learn, and more by the day. You've got to pick your battles.

Our messages are crossing, and you're answering questions I asked in previous messages. But anyway, now I know where you're coming from, and I think I'll be able to talk to you in language we can both understand. Let's begin with ideas. Do you believe that ideas can exist outside the brain?
 
Exist in what sense? You can write them down, and someone else can then read them, but that's really just a linguistic annotation of the idea.

I think everything we think, feel, create, imagine and whatever else - all come from the brain. The mind is just another name for whatever goes on in there.
 
Exist in what sense? You can write them down, and someone else can then read them, but that's really just a linguistic annotation of the idea.

But nevertheless, the word cannot be equated with the idea, can it? The idea has an independent existence from both the word and the brain, am I right?
 
Not in any sense that I am interested in discussing. Using philosophical devices to place fairly standard, commonplace happenings into isolated, pedestalized constructs is not my game. Which is why I don't read traditional philosophy. I've read some Daniel Dennett and Patricia Churchland (who are really just commentators on science), and I know a little about Descartes, but otherwise it's not something I have a great deal of energy for.
 
Not in any sense that I am interested in discussing.

Yes, I've got that bit. But in a sense that you would recognize as real? In other words, you acknowledge that there really are ideas, and that ideas really do exist, even though you don't want to talk about them?

Once we get past this bit, we can turn to the question of how you think it might be possible to avoid talking about them, but I must press you on this point first. Do you accept that ideas exist?
 
Do you have some really great stored debate about the notion of 'ideas' that you're itching to get out? I don't get why it's important. Can you just tell me why you think it's important?

I think ideas are a name we give to one of the many types of things we can think about. In that sense they exist, but they're not different from any other kind of thought.
 
Do you have some really great stored debate about the notion of 'ideas' that you're itching to get out? I don't get why it's important. Can you just tell me why you think it's important?

Because if you admit that ideas exist apart from the brain, and also apart from language, then you have conceded the ontological reality of something nonmaterial.
 
I think ideas are a name we give to one of the many types of things we can think about. In that sense they exist, but they're not different from any other kind of thought.

I'm afraid that they are. Let's substitute the word "concept" for "idea," if that's alright with you? (If it isn't then I haven't properly explained what I mean by "idea." I mean "concept"). Now I would put it to you that a concept is actually very different indeed from any other kind of thought--from an emotion, say, or an appetite. What do you say to that?
 
Concepts are composed, stored, communicated from and comprehended in brains. Given that everything worth saying about them is only valid with respect to what a brain does with them, how can focusing on some abstract way of considering them in isolation really be important?

Tell me something about how a concept comes to be formed out of a million different sensory perceptions, how we can use analogy to turn the concept into something else, how we so easily understand the causal relations that create the concept. Tell me how you physically write that down in a brain - now there is a topic worthy of respect.
 
Concepts are composed, stored, communicated from and comprehended in brains. Given that everything worth saying about them is only valid with respect to what a brain does with them, how can focusing on some abstract way of considering them in isolation really be important?

Very easily. Consider for instance the concept of "love." Now, there we have a concept that has been discussed to great profit and insight over thousands of years without any reference to the brain whatsoever. Do you not appreciate how reductive it would be to discuss this concept with reference to the brain alone?

I'd hope that this much would be obvious. Then we come to the nature of concepts in general. Can human beings think without concepts? Certainly not. Are concepts material? Obviously not. So it would seem that human thought is inevitably mediated through a nonmaterial sphere. So it would seem that reducing human thought to physical reactions in the brain is a bit daft really.
 
I think the mind arises out of processes in the brain.
We're in agreement here.

Incidentally, don't worry about phildwyer's attempts to do philosophy. They are about as clever as an AI attempt would be. That is, just a shallow shuffling about of symbols without any apprehension of underlying meaning (hence his insistence on book learning and regurgitation, rather than thinking things through for himself).

And that's really the issue; I can see how an algorithm can shuffle symbols about without any apprehension of underlying meaning. What I cannot see, and I don't think anyone has the answer to this, is how the shuffling of symbols can ever give rise to the apprehension of meaning.

Unless you think that digital processing kit such as a PC can become conscious, a theory that appeals to algorithms to produce consciousness (rather than to shape it) will seem absurd.
 
Ah, yes, I'm with you on that one. Like John Searle's 'Chinese Room' thought experiment - if you're just manipulating symbols, at what point does understanding take place, if at all?

That is the crux of the issue, and it's the one I (within my superficial understanding of the subject) believe can be effectively tackled by a new, homogeneous theory of learning and memory in the cortex.

I think it goes something like - the cortex is not a computer in the traditional sense, not even a parallel, quantum, 'new special kind of math' type of computer. The fundamental task of the cortex is to pay attention to its sensory inputs, form generalized, invariant memories about common patterns in the input, then use these patterns to guess ahead as to what the next incoming pattern will be.

After the formation of simpler invariant representations, you can continue this process and start to mix together representations from different sensory modalities. This starts to form models of cross-sensory associations, and eventually, high up the brain's processing hierarchy, leads to highly abstract models of associative relationships between concepts and ideas which are almost entirely removed from the basic spatiotemporal patterns found at the lower levels.

This type of system allows you to construct complex causal models of the relationships out in the world, and enables the cortex to make predictions about what will happen next, and understand which relationships are important in reality.

Anyway, I could go on all day. But that's the basic idea of how a learning algorithm can 'understand'.
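
If it helps, here's a toy sketch in Python of that memorise-then-predict loop. The class and the training data are made up purely for illustration - real cortex is nothing like a lookup table of transitions - but it shows the basic move of storing statistics about what follows what, then guessing ahead:

from collections import defaultdict, Counter

# Toy memory-prediction loop: memorise which input follows which, then
# predict the next input as the most frequently seen successor.
# (Class name and training data are invented, purely for illustration.)

class SequencePredictor:
    def __init__(self):
        # pattern -> counts of the patterns that have followed it
        self.memory = defaultdict(Counter)

    def observe(self, sequence):
        # "Learning": store statistics of what followed what.
        for prev, nxt in zip(sequence, sequence[1:]):
            self.memory[prev][nxt] += 1

    def predict(self, current):
        # "Guessing ahead": the successor seen most often, if any.
        successors = self.memory[current]
        return successors.most_common(1)[0][0] if successors else None

p = SequencePredictor()
p.observe("the cat sat on the mat".split())
print(p.predict("cat"))  # -> 'sat', the only successor seen for 'cat'

A single transition table is obviously nowhere near the hierarchical, invariant representations I'm describing, but it gives the flavour of prediction from stored patterns.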
 
Ah, yes, I'm with you on that one. Like John Searle's 'Chinese Room' thought experiment - if you're just manipulating symbols, at what point does understanding take place, if at all?

Searle employed classic conjuror's misdirection in making us think about the clerk in the room.

It's the room, stupid :)

Or: the answer is that the understanding arises in the entire system (viz. the entire material brain and, if Damasio is even half-right, the body as well). We can't join all the dots of the "how"... but there's no reason to introduce any dots outside the system.

I asked Douglas Hofstadter and he agrees with me :)
 
This type of system allows you to construct complex causal models of the relationships out in the world, and enables the cortex to make predictions about what will happen next, and understand which relationships are important in reality.

But here we have the problem of creating a causal theory of semantics. There is an understandable shift from an informatic theory to a causal theory, but I don't think either works.

My position is that you don't need a theory of semantics and that Searle's thought experiment is a pseudo-problem, albeit a persuasive one.
 
... I asked Douglas Hofstadter and he agrees with me :)
He thinks he can program digital processing kit to be conscious, and made a lot of money off a big fat book promoting the notion. But he hasn't actually published a line of code showing how the trick would be done.

The skeptical reader will readily draw their own conclusions! :D
 
He thinks he can program digital processing kit to be conscious

No he doesn't. He suspects it might be done, eventually.

There are a few papers on the CopyCat - an attempt to model analogy. But yes, the coding project seems to have stalled.
 
I'm perfectly happy with the idea of learning algorithms -- a Bayesian spam filter does just that. So a symbolic system can "understand" in that sense.
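
Roughly, a spam filter "learns" like this (a minimal sketch with invented training messages; real filters are rather more careful with the probabilities):

from collections import Counter

# Toy Bayesian-style spam filter: "learning" is just counting how often
# each word appears in spam versus ham, then scoring new messages against
# those counts. (The training messages are invented, purely for illustration.)

spam_msgs = ["win cash now", "win a cheap prize now"]
ham_msgs = ["meeting at noon", "see you at the pub"]

def count_words(messages):
    counts = Counter()
    for msg in messages:
        counts.update(msg.lower().split())
    return counts

spam_counts = count_words(spam_msgs)
ham_counts = count_words(ham_msgs)

def spam_probability(message):
    # Naive Bayes flavour, with add-one smoothing so unseen words
    # don't zero everything out.
    s_total = sum(spam_counts.values())
    h_total = sum(ham_counts.values())
    s = h = 1.0
    for word in message.lower().split():
        s *= (spam_counts[word] + 1) / (s_total + 2)
        h *= (ham_counts[word] + 1) / (h_total + 2)
    return s / (s + h)

print(spam_probability("win cash now"))    # high: looks spammy
print(spam_probability("meeting at noon")) # low: looks like ham

All it does is count and compare word frequencies - "understanding" only in the thinnest sense.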

The question is this "can we conjure awareness or consciousness just by manipulating symbols?"

Put it that bluntly, and it seems clear the answer must be "why the hell should that work?" There's no theoretical justification for the claim, nor a shred of empirical evidence for it. If anything, the evidence is all the other way. There are many systems that manipulate information that do not show any trace of awareness. And nor is there any indication of how one could write a program that would bestow even the dimmest glimmer of consciousness on digital processing machinery, despite the many millions in research money that have gone into investigating the hard AI hypothesis.

This is not to deny that functionalism can tell us a great deal about what's going on in the brain; how it discriminates, remembers, and makes associations. It is to say that there is no reason to think that functionalism can tell us how consciousness itself arises; why some brain processes are associated with consciousness, while most are not.
 
No he doesn't. He suspects it might be done, eventually.

There are a few papers on the CopyCat - an attempt to model analogy. But yes, the coding project seems to have stalled.
The central thrust of his popular books is that the consciousness of the brain is achievable on digital processing kit.

Of course attempts to program consciousness (meaning: to evoke it purely by manipulating symbols) will certainly "stall". One may as well try to invoke rain by dancing, imho.
 
It's the room, stupid :)

This wasn't a signal for a philosophical debate on the nature of Searle's idea. Although your eagerness to transform it from a straight analogy between traditional symbolic computation & the type of understanding humans are capable of, into one based on philosophical showboating, indicates to me that you believe this role is your speciality.

I asked Douglas Hofstadter and he agrees with me :)

Great! Err, who?

My position is that you don't need a theory of semantics and that Searle's thought experiment is a pseudo-problem, albeit a persuasive one.

I don't think you need to deal with semantics at all. Once you start working with the data and you see the way that causal relationships can be modelled using hierarchical, multiple-stream inputs, it really isn't an issue. I don't think Searle's experiment is a pseudo-problem - aside from the fact that it's open enough to allow philosophers to burrow their way into it - it's a fairly simple comparison between the way we have traditionally programmed computers to do things and the way a mammal learns to do things.
 
Ah, yes, I'm with you on that one. Like John Searle's 'Chinese Room' thought experiment - if you're just manipulating symbols, at what point does understanding take place, if at all?

An acceptance that just manipulating symbols does not capture what we ordinarily understand by, errr "understand". Perhaps, a recognition that the essence of awareness is not to be found in empty symbol manipulation but in the understanding of what those symbols represent, in interpretation.
 
To take up Laptop's point about Searle, you're not going to find understanding in an individual neuron. If you isolate a termite from its siblings, it will not be able to build a nest.

I think the secret to the 'problem' of understanding is that we hold our particular experience of understanding in rather high regard. The termite colony understands how to construct extremely sophisticated buildings, but that understanding is held at the level of the colony rather than of the individual. To say that the colony doesn't really understand what it is doing is in the end equivalent to a zombie hypothesis of other humans' understanding.
 
An acceptance that just manipulating symbols does not capture what we ordinarily understand by, errr "understand". Perhaps, a recognition that the essence of awareness is not to be found in empty symbol manipulation but in the understanding of what those symbols represent, in interpretation.

I agree entirely, why else would I have bothered to type all the stuff under that paragraph?
 