Artificial neural networks are interesting from a philosophical point of view, as well as for how closely they can simulate many different aspects of human information processing and, even more strikingly, forms of impairment.
As I understand it, interest in artificial neural networks in both the AI and experimental psychology communities really took off after Rumelhart, Hinton, and Williams popularised the back-propagation algorithm in 1986, making it possible to train three-layer networks to perform specified tasks.
Briefly, what happens is this: you start with a network whose weights are usually randomised, so that it produces random outputs for the entire input data set. You then feed the error score, the difference between the target outputs and the actual outputs, into the back-propagation algorithm, which adjusts the weights of the links between the nodes of the network so that the outputs move closer to the targets. After this has been repeated a great many times, the network eventually converges on a set of weights that solves the problem.
http://en.wikipedia.org/wiki/Back-propagation
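The training loop described above can be sketched in a few lines of code. This is a minimal illustration, not any particular researcher's implementation: a three-layer network (input, hidden, output) trained by back-propagation on the XOR problem. The network size, learning rate, and number of iterations are all illustrative choices of my own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Input patterns and their target outputs (the XOR function).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# Start with a randomised network: the weights are drawn at random,
# so at first the outputs are effectively random too.
W1 = rng.normal(size=(2, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0                       # learning rate (illustrative)
for epoch in range(5000):
    # Forward pass: compute the network's actual outputs.
    H = sigmoid(X @ W1)
    Y = sigmoid(H @ W2)

    # Error signal: difference between target and actual outputs.
    err = T - Y

    # Backward pass: propagate the error back through the network
    # and nudge each weight in the direction that reduces it.
    dY = err * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    W2 += lr * H.T @ dY
    W1 += lr * X.T @ dH

# After many repetitions, the outputs approach the targets.
final = sigmoid(sigmoid(X @ W1) @ W2)
print(np.round(final.ravel(), 2))
```

Note that the target outputs and the error computation are supplied from outside the network, by the programmer; this is exactly the "teacher signal" discussed below.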
Experimental psychologists have been enormously impressed by the behaviour of neural networks: their learning curves closely approximate human learning curves and exhibit similar stage-like transitions, and they both generalise what they have learned and cope with exceptions. In addition, various forms of neural network have been very successful in simulating well-known phenomena from the cognitive psychology literature, such as Reicher's word superiority effect, the Stroop effect, and various other priming and negative priming effects.
Quite apart from this, the neural network analogy provides an obvious analogue to our experience of human learning. We aim to do something: to produce a pleasant-sounding note on a violin, to read, to rollerskate, to form letters with a pen. At first our efforts go awry, but by comparing our efforts with our target, with what we are trying to do, we gradually improve our performance until it approaches perfection.
At least it sounds like an obvious analogue.
But there is a problem. It is widely agreed among neuroscientists, cognitive psychologists, and AI specialists that there is no plausible neuronal counterpart to the teacher signal and the back-propagation algorithm. (Note that when a neural network is being trained, the training is done from the outside, by the experimenter, through the presentation of a target set of outputs and the application of the back-propagation algorithm.) The mechanism by which synaptic weights are adjusted, long-term potentiation, has been known for a long time; but the mechanism by which instructions for adjusting those weights, so that a real brain network finds the optimum solution to a problem, might be implemented in the brain is not known. What we do know is our own experience of how we learn, how we train ourselves to solve a new problem.
In some ways, considering the evidence from a philosophical point of view, it looks as if the mind and its teacher signal stand in a similar relationship to the brain and its individual networks as the experimenter and programmer stand to the artificial network.
http://en.wikipedia.org/wiki/Neural_networks#History_of_the_neural_network_analogy
http://en.wikipedia.org/wiki/Connectionism