
How close is AI?

Jonti said:
Maybe not. But it maintains itself by determinate processes. And we have learned how to interfere with those processes in order both to kill and to cure. As far as those interventions are concerned, the body is an engine whose workings are increasingly well understood.

"engine" is as poor an analogy for bodies as "computer" is a poor analogy for mind
 
Crispy said:
There were no computers until we had a fully formed binary logic figured out. Computers didn't spontaneously emerge from complicated clockwork. Similarly, until we have a fully formed theory of mind, true AI will have to wait.

Elegant point but does not necessarily follow.

Theories of mind approach the problem from the top level (AI / [human/animal] intelligence), and binary logic from the very (or almost the very) bottom (computers).

It may not be a theory of mind we're looking for but just a better understanding of the processes that spontaneously arise from the interactions between certain arrangements of nerve cells.
 
fudgefactorfive said:
"engine" is as poor an analogy for bodies as "computer" is a poor analogy for mind
I agree "computer" is a poor analogy for mind.

But it has been established that the body works like a machine. Look at the muscular and skeletal systems. Consider the pumping of the heart. Or the synthesis carried out in cells. All these phenomena are controlled by precise scientific laws and can be replicated in a laboratory.
 
Yes, I'm reminded of the Harvard Rule of Animal Behaviour ...
"under carefully controlled experimental circumstances, an animal will behave as it damned well pleases." source

An organism, a human body for example, does indeed function as a pure mechanism according to the Laws of Nature. But the body still does as it damn well pleases (within the physical constraints of its organic mechanisms, of course).
 
Jonti said:
I agree "computer" is a poor analogy for mind.

But it has been established that the body works like a machine

by whom? Descartes? he didn't have a clue

i know of no machine whose structure is determined by its processes in the way that a living creature's is

show me a steam engine that rebuilds itself - show me a car with a cognitive immune system - show me any single machine that has a mind
 
I still think that copying the mind is the most likely route forward. Putting 100 million transistors together and hoping that HAL starts speaking to you is unlikely. But as the technology gets better, building sections of the mind and comparing them to real-life behaviour would be a logical way forward. I don't know how accurate the story is, but my dad (not the best source of info :rolleyes: ) told me that the heart was only understood once humans could build pumps.

Better understanding of the mind is an important element but perhaps not a road block to progress.

What I find interesting are articles in New Scientist, such as this one

http://www.newscientist.com/article/mg16822653.600-blind-to-change.html
(unfortunately it's only a tiny chunk of the whole article)

The article explains that the high-resolution colour image we see is an illusion: large chunks of visual information are ignored or thrown away. The brain has developed short cuts, or cheats, to cope with so much information.

The other idea I find interesting is that the subconscious acts before the conscious mind is aware (results from experiments using functional MRI), so ...

  • Subconscious mind decides our actions
  • Conscious mind makes up stories to justify why our subconscious acted in a particular way.

Obviously it doesn't solve the problem of consciousness, but IF true, previous attempts to understand consciousness have been trying to attack the problem from the wrong direction.

This link is worth a read

http://education.guardian.co.uk/higher/research/improbable/story/0,,1858809,00.html
 
fudgefactorfive said:
by whom? Descartes? he didn't have a clue
Will Erwin Schrodinger or Richard Dawkins, or Darwin himself perhaps, do instead?

Sorry ff5, it's you that's being clueless on this issue.
 
fudgefactorfive said:
show me any single machine that has a mind
Like I've said (quoting Erwin Schrodinger) the human body functions as a pure mechanism according to the Laws of Nature. Some material bodies do have minds.

Of course, we really ought to be clear we both mean the same thing by the term mind. I tend to use it in the sense it has acquired in contemporary Philosophy of Mind. That is, an organism has a mind if there is something it is like to be that organism.

I've a feeling you are not going to stop disputing that organisms are governed by scientific laws, but it's not particularly interesting to me to argue the toss about one of the foundational precepts of biological science. So please do not think me rude if I do not respond to you further on this particular issue.
 
As far as we can tell, all the systems you describe - self-healing, digestion etc - are autonomous, mechanical features, and all carry on in the seeming absence of brain function (e.g. when someone is in a coma). Stuff like digestion, growth etc goes on separately from the brain's function, so in that respect the human body is a machine - basically it's a mechanism for keeping the brain alive...
 
Which is why I reckon we'll see AB (Artificial Bodies) before we see AI. Big robots with brains in jars :) I'm not kidding :)
 
[image: futurama_nixon_1.jpg]
 
Jonti said:
Like I've said (quoting Erwin Schrodinger) the human body functions as a pure mechanism according to the Laws of Nature. Some material bodies do have minds.

Of course, we really ought to be clear we both mean the same thing by the term mind. I tend to use it in the sense it has acquired in contemporary Philosophy of Mind. That is, an organism has a mind if there is something it is like to be that organism.

I've a feeling you are not going to stop disputing that organisms are governed by scientific laws, but it's not particularly interesting to me to argue the toss about one of the foundational precepts of biological science. So please do not think me rude if I do not respond to you further on this particular issue.

you're such a petulant queen

I haven't once disputed that organisms are "governed by scientific laws"
 
fudgefactorfive said:
I don't think Descartes was qualified to lecture you or me on neural networks:

but then why should Descartes' mind/body dualism have anything to do with how much is known about neural networks, or anything to do with the brain and nervous system?

So one can propose that everything that can be known about what occurs in the body to produce thoughts and experiences may be found, and yet nothing need be found about how what occurs in the body translates into these thoughts and experiences. That is, it's not like finding out how electronic signals translate into images on a computer monitor.

Whereas a mind/body dualist argument could insist that there needs to be something that brings about the translation from brain activity to the thoughts and experiences of consciousness, and that is also the subject of this consciousness. It is this something that is the mind, and it is this that is immaterial and distinct from anything that may be found to occur in the body.
 
Bob_the_lost said:
It's not a simple 1:1 connection, which meant it'd take more than 15 years or so to hit the 100,000 Million transistor mark to get equality of function. Still...

Still, let's say it's only a thousand times more complicated than a transistor, or that a thousand transistors are needed to replicate a single neuron's function. Just stick a thousand chips together.

I've been thinking about this and my reply, and I'm not so sure I was right. There are loads of phenomena that can be simulated with computers; intelligence is one of them, and so is the kinetic theory of heat. So let's think about the simpler example of the atomic theory of matter and the kinetic theory of heat, and see where it leads.

One can simulate the behaviour of a gas, for example, on a computer. It's not even very hard. Out of that simulation will flow values for pressure, volume and temperature. And it gives the right answers, the same as would a real gas. No problems. We've done this by programming our switches to create an artificial gas, so to speak, within the machine.

We've modelled a physical substrate on the computer. It gives us the same answers as we would obtain by experiment. But the substrate, the physical substrate, is still needed for the Real Thing. Our virtual gas contains no real kinetic energy. It is not hot.

This line of thinking suggests that, even if it were possible in principle to model neuronal behaviour using binary switches, the modelling may well not have the same emergent properties as the Real Thing.
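
To make that concrete, here's roughly the sort of thing I mean: a minimal sketch where ideal-gas particles bounce around a box, and temperature and pressure fall out of the simulated motion. The particle count, mass and starting speeds are just illustrative numbers, not a real gas model.

Code:
# Minimal kinetic-theory sketch: ideal-gas particles in a box.
# Temperature and pressure are *derived* from the simulated motion.
# All parameter values here are illustrative, not a real gas model.
import random

N = 2000          # number of particles
BOX = 1.0         # box side length (m)
MASS = 6.6e-27    # particle mass (kg), roughly a helium atom
K_B = 1.38e-23    # Boltzmann constant (J/K)
DT = 1e-5         # time step (s)
STEPS = 200

pos = [[random.uniform(0, BOX) for _ in range(3)] for _ in range(N)]
vel = [[random.gauss(0, 1300.0) for _ in range(3)] for _ in range(N)]

impulse = 0.0  # total momentum delivered to the walls
for _ in range(STEPS):
    for p, v in zip(pos, vel):
        for axis in range(3):
            p[axis] += v[axis] * DT
            # Elastic bounce off a wall; record the momentum transfer.
            if p[axis] < 0.0 or p[axis] > BOX:
                p[axis] = min(max(p[axis], 0.0), BOX)
                impulse += 2 * MASS * abs(v[axis])
                v[axis] = -v[axis]

# Temperature from equipartition: mean kinetic energy = (3/2) k_B T.
mean_ke = sum(0.5 * MASS * (v[0]**2 + v[1]**2 + v[2]**2) for v in vel) / N
temperature = 2 * mean_ke / (3 * K_B)

# Pressure = total impulse / (elapsed time * total wall area).
pressure = impulse / (STEPS * DT * 6 * BOX**2)

print(f"T ~ {temperature:.0f} K, P ~ {pressure:.2e} Pa")

Out of that flow a temperature and a pressure that agree with the ideal gas law, but of course nothing in the machine is actually hot.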
 
Jonti said:
I've been thinking about this and my reply, and I'm not so sure I was right. There are loads of phenomena that can be simulated with computers; intelligence is one of them, and so is the kinetic theory of heat. So let's think about the simpler example of the atomic theory of matter and the kinetic theory of heat, and see where it leads.

One can simulate the behaviour of a gas, for example, on a computer. It's not even very hard. Out of that simulation will flow values for pressure, volume and temperature. And it gives the right answers, the same as would a real gas. No problems. We've done this by programming our switches to create an artificial gas, so to speak, within the machine.

We've modelled a physical substrate on the computer. It gives us the same answers as we would obtain by experiment. But the substrate, the physical substrate, is still needed for the Real Thing. Our virtual gas contains no real kinetic energy. It is not hot.

This line of thinking suggests that, even if it were possible in principle to model neuronal behaviour using binary switches, the modelling may well not have the same emergent properties as the Real Thing.
Nah. The virtual gas is hot. It's just that the heat is mapped to something that the computer can sense rather than something that we can.

The bottom line is that if you model it in sufficient detail, you will pick up whatever emergent property you're after.
 
If the simulation contained a creature with a thermometer, it would be hot to them :)
A virtual hurricane wouldn't blow down your house, but it would blow down a house inside the simulation.
 
Jonti said:
We've modelled a physical substrate on the computer. It gives us the same answers as we would obtain by experiment. But the substrate, the physical substrate, is still needed for the Real Thing. Our virtual gas contains no real kinetic energy. It is not hot.

So? We're not comparing a simulation run on a computer to boiling a kettle; we're comparing a simulation run on a computer to the process of thought that operates in a brain. In a way, they can both be thought of as programs.
 
Jonti said:
No, it's really not.

Not unless the conservation of mass/energy is violated.
I'm not following you. As far as I understand it, in your simulation, one of the variables that is being tracked is temperature. The value of this variable is a measure of how much heat the gas has in the simulated world. It's a mapping from the real world, one that represents heat, but in a manner that can be 'sensed' by the simulation.

Am I missing something?
 
Yes, I think so. I think you may be missing the distinction between the representation or the virtual, and the real. The virtual gas is not hot, in any real meaning of the term. It just isn't.

Another response, BtL's, is to concede the point and say "So what? There is a fundamental identity between the sort of thing we are modelling and the sorts of things that go on inside computers anyway." I think that's a better objection, but it does beg the question of whether the two things are in fact fundamentally identical. Of course, they may be, but he has yet to demonstrate that, and in fact he introduces a further assumption: "the process of thought that operates in a brain can be thought of as a sort of computer program".

I'm not sure what he means by this, tbh, but in the context of AI and this thread, it is likely he has in mind something like "the way the brain acquires and applies knowledge can be thought of as a program of the sort that runs on a computer". That's fine, if a little unclear (I have to wonder what sort of program it is, whether it is compiled or interpreted ...).

But to take the next step, one beloved by proponents of strong AI, and claim that conscious thought would arise within the computer in the course of that simulation, may be to make the same sort of mistake as is involved in claiming that a simulation of heat is really hot.
 
Why are you simulating a hot gas again? To make heat? No. To understand how the gas will interact with other items in the virtual world you've made? Maybe.

As such, you are missing something ( ;) ): to everything inside the simulation, the gas does have kinetic energy. Samk made this point in a much more succinct manner above, but it bears repeating. You have to remember the why of it all: why are you simulating heat? Why simulate / create artificial intelligence?

The idea behind AI is not to make a clone of a human; it is to make something that simulates human thought patterns closely enough to make it hard or impossible to distinguish between the computer and a human (the Turing test). This intelligence can then be paired with a couple of supercomputers to rule us all with a fist of silicon.

In other words, it does not have to be human intelligence; the simulation is enough. To follow your analogy, we just need to know what the temperature would be.
 
... the simulation is enough. To follow your analogy, we just need to know what the temperature would be.
Oh yes, that's the whole point of the modelling, after all. If, to spare me from spam, my Bayesian filtering can work out what I would see as spam (and keep up with changes in the spammers' styles) then it's intelligent. Not hugely smart, admittedly, but undoubtedly on the ladder.
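
For what it's worth, the guts of that sort of Bayesian filtering fit in a few lines. This is only a toy sketch: the training messages and the smoothing constant are made up for illustration, not how any real filter is trained.

Code:
# Toy naive-Bayes spam scorer (illustrative only; the training data is made up).
import math
from collections import Counter

spam_docs = ["cheap pills buy now", "win cash now", "buy cheap watches"]
ham_docs = ["meeting notes attached", "lunch tomorrow?", "notes on the AI thread"]

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.lower().split())
    return counts

spam_counts, ham_counts = word_counts(spam_docs), word_counts(ham_docs)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(msg, counts, alpha=1.0):
    # Sum of log P(word | class), with Laplace smoothing over the vocabulary.
    denom = sum(counts.values()) + alpha * len(vocab)
    return sum(math.log((counts[w] + alpha) / denom) for w in msg.lower().split())

def spam_probability(msg):
    # Equal priors assumed; words treated as independent (the "naive" bit).
    ls = log_likelihood(msg, spam_counts)
    lh = log_likelihood(msg, ham_counts)
    return 1.0 / (1.0 + math.exp(lh - ls))

print(spam_probability("buy cheap pills"))        # close to 1
print(spam_probability("notes for the meeting"))  # close to 0

Re-counting the words as new mail gets labelled is what lets it keep up with changes in the spammers' styles.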

And it really is intelligent. It acquires and applies knowledge in the real world. But the simulated heat is not really hot, not in the real world it isn't. That's the distinction I'm making.

I think to some extent we may be talking past each other here.
 
The stuff about artificial neural networks is interesting from a philosophical point of view, as well as for looking at how it's possible to closely simulate lots of different aspects of human information processing, and even more strikingly, forms of impairment.

As I understand it, interest in artificial neural networks in both the AI and the experimental psychology academic communities really took off after Rumelhart and McClelland developed the back-propagation algorithm in 1986, making it possible to train three-layer networks to perform specified tasks.
Briefly, what happens is you take an input data set and usually start with a randomised network, which produces random outputs; you find the outputs for the entire input data set, then feed the error score (the difference between the target outputs and the actual outputs) into the back-propagation algorithm, which adjusts the weights of the links between the different nodes of the network so as to more closely approach the target outputs. After you've done this an enormous number of times, the network eventually finds the ideal set of weights to solve the problem. http://en.wikipedia.org/wiki/Back-propagation
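
In code, the whole procedure is very short. Here's a minimal sketch of a three-layer network trained by back-propagation to learn XOR; the layer sizes, learning rate and number of passes are arbitrary choices for illustration, not Rumelhart and McClelland's actual setup.

Code:
# Minimal back-propagation sketch: a three-layer (2-4-1) network learning XOR.
# Layer sizes, learning rate and epoch count are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # input data set
T = np.array([[0], [1], [1], [0]], dtype=float)              # target outputs

# Randomised starting weights, as described above.
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)   # input -> hidden links
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)   # hidden -> output links
lr = 0.5                                         # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    # Forward pass: input layer -> hidden layer -> output layer.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)

    # "Error score": difference between the actual and target outputs.
    err = Y - T

    # Back-propagate the error and adjust the weights of the links.
    dY = err * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dY;  b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH;  b1 -= lr * dH.sum(axis=0)

print(Y.round(2))  # typically ends up close to [[0], [1], [1], [0]]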

Experimental psychologists have been enormously impressed by the behaviour of neural networks, as the learning gradient closely approximates human learning gradients and exhibits similar stage-like transitions, as well as solving problems of learning generalisation while also dealing with exceptions. In addition to this, various forms of neural network have been enormously successful in simulating various well-known phenomena in the cognitive psychology literature, like Reicher's word superiority effect, the Stroop effect, and various other priming and negative priming effects.

Quite apart from this, the neural network analogy provides an obvious analogue to our experience of human learning. We aim to do something: to create a pleasant-sounding note on a violin, to read, to rollerskate, to form letters with a pen. At first our efforts go awry, but eventually, by a process of comparing our efforts with our target, with what we're trying to do, we manage to improve our performance until it approaches perfection.

At least it sounds like an obvious analogue.

But there is a problem. It is widely agreed among neuroscientists, cognitive psychologists and AI specialists that there is no plausible neuronal counterpart to the teacher signal and the back-propagation algorithm. (Note that when a neural network is being trained, the training is done from the outside, by the experimenter, through presentation of a target set of outputs and application of the BP algorithm.) The mechanism by which neuronal weights are adjusted, long-term potentiation, has been well known for a long time, but how instructions for adjusting the weights, so that a real brain network finds the optimum solution to a problem, are implemented in the brain is not known. What we do know is our own experience of how we learn, how we train ourselves to solve a new problem.

In some ways, considering the evidence from a philosophical point of view, it looks as if the mind and its teacher signal stand in the same kind of relationship to the brain and its individual networks as the experimenter and programmer stand to the artificial network.

http://en.wikipedia.org/wiki/Neural_networks#History_of_the_neural_network_analogy
http://en.wikipedia.org/wiki/Connectionism
 
.r.u.i.n.e.d said:
The stuff about artificial neural networks is interesting from a philosophical point of view, as well as for looking at how it's possible to closely simulate lots of different aspects of human information processing, and even more strikingly, forms of impairment.

As I understand it, interest in artificial neural networks in both the AI and the experimental psychology academic communities really took off after Rumelhart and McClelland developed the back-propagation algorithm in 1986, making it possible to train three-layer networks to perform specified tasks.
Briefly, what happens is you take an input data set and usually start with a randomised network, which produces random outputs; you find the outputs for the entire input data set, then feed the error score (the difference between the target outputs and the actual outputs) into the back-propagation algorithm, which adjusts the weights of the links between the different nodes of the network so as to more closely approach the target outputs. After you've done this an enormous number of times, the network eventually finds the ideal set of weights to solve the problem. http://en.wikipedia.org/wiki/Back-propagation

Experimental psychologists have been enormously impressed by the behaviour of neural networks, as the learning gradient closely approximates human learning gradients and exhibits similar stage-like transitions, as well as solving problems of learning generalisation while also dealing with exceptions. In addition to this, various forms of neural network have been enormously successful in simulating various well-known phenomena in the cognitive psychology literature, like Reicher's word superiority effect, the Stroop effect, and various other priming and negative priming effects.

Quite apart from this, the neural network analogy provides an obvious analogue to our experience of human learning. We aim to do something: to create a pleasant-sounding note on a violin, to read, to rollerskate, to form letters with a pen. At first our efforts go awry, but eventually, by a process of comparing our efforts with our target, with what we're trying to do, we manage to improve our performance until it approaches perfection.

At least it sounds like an obvious analogue.

But there is a problem. It is widely agreed among neuroscientists, cognitive psychologists and AI specialists that there is no plausible neuronal counterpart to the teacher signal and the back-propagation algorithm. (Note that when a neural network is being trained, the training is done from the outside, by the experimenter, through presentation of a target set of outputs and application of the BP algorithm.) The mechanism by which neuronal weights are adjusted, long-term potentiation, has been well known for a long time, but how instructions for adjusting the weights, so that a real brain network finds the optimum solution to a problem, are implemented in the brain is not known. What we do know is our own experience of how we learn, how we train ourselves to solve a new problem.

In some ways, considering the evidence from a philosophical point of view, it looks as if the mind and its teacher signal stand in the same kind of relationship to the brain and its individual networks as the experimenter and programmer stand to the artificial network.

http://en.wikipedia.org/wiki/Neural_networks#History_of_the_neural_network_analogy
http://en.wikipedia.org/wiki/Connectionism

But then you could ask how 'connectionism' is any less metaphysical speculation than Descartes' conception of mind.

Think how, if you don't accept the Copenhagen or many-worlds interpretation of the evidence of quantum physics, there's Bohmian mechanics, where the 'quantum potential' is an immaterial cause that could act universally upon all matter in addition to the forces.
 
merlin wood, I should nominate you for a special award for managing to work your theory into such a large proportion of science/theory threads this year :)
 
Crispy said:
merlin wood, I should nominate you for a special award for managing to work your theory into such a large proportion of science/theory threads this year :)

Thanks very much. But then, for one thing, if you think about orthodox physics, it seems that, universally, the world that includes all life on Earth is made just of its smallest parts and the forces that surround them. Whereas one can ask: if this is so, how could the world be, or remain, organised out of these parts as matter in any form, and as the energy that radiates from matter? ;) :confused:
 
Richard P. Feynman elegantly demolishes all hope of a determinist, hidden variable explanation of wave/particle duality in chapter six of his book, "The Character of Physical Law".

Highly recommended.
 
Jonti said:
Richard P. Feynman elegantly demolishes all hope of a determinist, hidden variable explanation of wave/particle duality in chapter six of his book, "The Character of Physical Law".

Highly recommended.

But then Feynman didn't, in fact, analyse Bohmian mechanics to show precisely what is wrong with this nonlocal causal hidden-variables account of the evidence of quantum physics, which is consistent with a wide range of experimental results. And nor has anyone else done so.

http://bohm-c705.uibk.ac.at/

http://plato.stanford.edu/entries/qm-bohm/

http://en.wikipedia.org/wiki/Bohm_interpretation

And, in fact, from the evidence of quantum physics alone, no one can either sufficiently support or disprove any of the interpretations that are consistent with the behaviour of quantum objects: behaviour that can be uniquely detected and described, but that can by no means be observed or directly detected from objects in motion, and cannot be explained as effects caused by the known properties of any of the forces.

That's why all these accounts are called interpretations rather than theories.

But Bohmian mechanics is unique amongst the quantum interpretations in that it describes in detail how, independently of any observer, objects in motion that are both waves and particles could produce the interference and diffraction patterns that are detected in quantum experiments.
 
No, he explained in plain English why you can't have a determinist, hidden variable, explanation of wave/particle duality. Highly recommended.

I take it you haven't read his brilliant lectures yourself?
Richard Feynman was perhaps the most brilliant, iconoclastic, and influential physicist of modern times. The Character of Physical Law, first published in 1965, contains seven brilliant lectures, originally delivered to standing-room-only audiences at Cornell University, that demonstrate Feynman's unique ability to bring his subject to life to the non-physicist.
 