Urban75

How close is AI?

Bob_the_lost said:
It's neat, but it's not intelligence.
I didn't say it was; I said it was a precursor to intelligence: the ability to question what you are and where you are, plus simplistic problem solving... as I said, this is pretty much meaningless unless you are going to define what is classed as intelligence...
 
atitlan said:
You don't seem to be considering the possibility of stumbling onto a 'black-box' solution to AI.

It may be possible, especially experimenting with biological systems, that you could get something that gives consistent intelligent behaviour in a controlled manner without understanding how the internal processing works in detail.
I'm not considering it. Black box is what you call something you don't know the internal workings of but you know the overall behaviour. That's what neural networks do. One of the main problems with implementing them is that you're never 100% sure how it will behave (99.9% but never 100%).

Someone still has to build the box after all. Unless you use an AI system to build it, or more likely build the one that builds it. Either way it's not practical at the moment. As far as I know it's not even been sketched out on paper, let alone attempted to be implemented.
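The black-box point about neural networks can be made concrete with a toy sketch. This is a minimal illustration (a single perceptron learning AND, in plain Python), not any real system: you specify the behaviour you want and test for it, but the learned numbers don't explain anything by themselves.

```python
import random

random.seed(0)

# Training data for AND: you specify the desired behaviour,
# not the internal weights that produce it.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [random.uniform(-1, 1) for _ in range(2)]
b = random.uniform(-1, 1)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: nudge weights toward the desired output.
for _ in range(100):
    for x, target in data:
        err = target - predict(x)
        w[0] += 0.1 * err * x[0]
        w[1] += 0.1 * err * x[1]
        b += 0.1 * err

# We can only *test* the behaviour; the trained values in w and b
# aren't meaningful on their own.
print([predict(x) for x, _ in data])  # -> [0, 0, 0, 1]
```

Even in this tiny case, the verification is purely behavioural, which is Bob's 99.9%-but-never-100% point in miniature.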
 
Bob_the_lost said:
Biological computing is still computing. New software is still software. If you're not pursuing AI or trying to use it for an application then you won't be using learning subroutines or self-modifying code. How exactly intelligence can be coded I do not know; how it could be done by accident is beyond my imagination.
Someone could be messing around with a biological system in an effort to develop a new drug or understand a psychological process, and they might start seeing it behave in a weird way. The same would apply to studying and interacting with any complex system that involved various feedback mechanisms. This might not amount to 'coding', especially at the smallest level of brain chemistry - it might just amount to realising that certain inputs led to certain outputs or system behaviour without really understanding how it happened in detail.

Maybe you think that AI has to consist of concrete programmes or routines, but other people might take a wider view and say that if a system is behaving in a seemingly intelligent way, and if it can be influenced, then it doesn't actually matter if the details of what is going on are not fully understood: the output is "artificial" at least in part because of human input, and the system seems to exhibit "intelligence" of some sort - hence "AI". I could see this as being possible in the field of people messing around with biological systems as part of experimenting with new drugs or analysing diseases.
 
TeeJay said:
Someone could be messing around with a biological system in an effort to develop a new drug or understand a psychological process, and they might start seeing it behave in a weird way.
And in most cases they would move on to something they understand better.

How many petri dishes grew mould around which no bacteria grew before someone realised the significance of it and discovered penicillin?

And of all the weird and complex biological processes in the world, how many are intelligent (at least in any way we recognise)?
 
If you asked someone what 12983463 x 103398273 is, and they promptly responded with 1,342,467,6, wouldn't they be considered intelligent?
But if a computer does it, it isn't. Unless we're defining intelligence as the ability to make intuitive leaps to arrive at a solution that isn't apparent from the initial set of data. If we are, then isn't that easily summarised as comparison and contrast of stored information?

maybe this is a philosophical question. * hope 118 118 comes along with some insight. *
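As an aside, the product above is exactly the kind of thing a machine does instantly and unreflectively. Python's arbitrary-precision integers make checking it a one-liner:

```python
# The product from the post, computed exactly; the comma format
# spec just groups the digits for readability.
a, b = 12983463, 103398273
print(f"{a * b:,}")  # -> 1,342,467,651,759,399
```

Promptness and precision are the easy part; it's everything around the arithmetic that's hard.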
 
Something as complex as the human mind, including imagination, dreaming and art, is surely beyond human craft. If you want to make something like this, just have a kid and program it yourself.
 
Bob_the_lost said:
I'm not considering it. Black box is what you call something you don't know the internal workings of but you know the overall behaviour. That's what neural networks do. One of the main problems with implementing them is that you're never 100% sure how it will behave (99.9% but never 100%).

Surely the fact that you can't predict 100% what the behaviour will be is a side-effect of true AI that would have to be accounted for.

Any artificial system exhibiting real intelligence must have the capacity to be creative, and such creativity means that it will sometimes surprise you with its conclusions.

I don't believe we have the capacity to design an AI system without some form of 'black box' technology - partly because we do not understand the nature of intelligence enough to break it down into programmable algorithms and partly because I don't think the nature of creative thought is approachable in algorithmic terms.
 
TeeJay said:
Someone could be messing around with a biological system in an effort to develop a new drug or understand a psychological process, and they might start seeing it behave in a weird way. The same would apply to studying and interacting with any complex system that involved various feedback mechanisms. This might not amount to 'coding', especially at the smallest level of brain chemistry - it might just amount to realising that certain inputs led to certain outputs or system behaviour without really understanding how it happened in detail.

Maybe you think that AI has to consist of concrete programmes or routines, but other people might take a wider view and say that if a system is behaving in a seemingly intelligent way, and if it can be influenced, then it doesn't actually matter if the details of what is going on are not fully understood: the output is "artificial" at least in part because of human input, and the system seems to exhibit "intelligence" of some sort - hence "AI". I could see this as being possible in the field of people messing around with biological systems as part of experimenting with new drugs or analysing diseases.
I don't think those scenarios are at all plausible. To revert to my earlier example, it'd be like the pump at shaft 6 taking off and crossing the Channel unassisted.

Atitlan: You're going for the same idea as TeeJay, that somehow the black box will appear fully formed (or halfway there) and stop us needing to understand it. That is neither how black boxes work, nor do I think it's likely.

For AI I think we're stuck with the option Crispy detailed: build an artificial brain or a simulation of one and then start playing with the model till we understand what makes it tick. Muser's (and others') points about what intelligence is are well founded, but let's say we're talking about an artificial person to all intents and purposes. Otherwise things like ES are capable of intelligent thought already.
 
muser said:
If you asked someone what 12983463 x 103398273 is, and they promptly responded with 1,342,467,6, wouldn't they be considered intelligent?
But if a computer does it, it isn't.
The human learnt how to do that arithmetic, the computer was programmed to.
 
It's a good question, that would be much clearer if we could arrive at useful definitions of either "artificial" or "intelligence". Personally, I'm with Bob. We're so far off it's not even worth considering as a practicality. Current "AI" is probably about as intelligent as a beetle. Something that gets jokes and has an inexplicable mistrust of the colour blue? Not in my lifetime.
 
Wintermute said:
It's a good question, that would be much clearer if we could arrive at useful definitions of either "artificial" or "intelligence"
'The ability to efficiently recognise a broad range of patterns in itself and its environment and use them to achieve its goals.' seems like a fairly good definition of intelligence to me. Trying to define consciousness, OTOH is more awkward.

There's a bunch of other definitions of intelligence here
 
Wintermute said:
It's a good question, that would be much clearer if we could arrive at useful definitions of either "artificial" or "intelligence". Personally, I'm with Bob. We're so far off it's not even worth considering as a practicality. Current "AI" is probably about as intelligent as a beetle. Something that gets jokes and has an inexplicable mistrust of the colour blue? Not in my lifetime.
Bob's dead wrong though. AI, in one form or another, is everywhere already. It's just that the "intelligence" is generally deployed in a fairly domain specific and limited way. Intelligence in this domain just means that the machine has the ability to solve problems that it hasn't been explicitly told the answer to. It's not at all limited to emulating the human brain.

In the field of machine learning, you've got things like SWARM technology, genetic algorithms, neural networks and a whole host of other techniques for finding solutions in intractable search spaces. Expert systems are pretty old at this stage too.

Other fields which employ AI in one form or another are semantic web languages (ontologies), multi-agent systems, autonomics, cybernetics and so on.

One of the conclusions of the simplistic, statistically oriented and connectionist approaches to artificial intelligence from the 1990s is that intelligence isn't a single thing - there's a whole host of complex processing that goes on in a multitude of different ways to produce something as complex as human behaviour. These have been broken down into more tractable problems which legions of researchers are beavering away at solving as we speak.

It's also quite possible that intelligent machines could evolve under their own steam without us understanding how they manage it properly. Still, from our current understanding it appears that we are still an awfully long way away from creating something that can properly emulate a human brain - it's not the number of neurons that is the problem, it's the fact that they are highly structured entities in a functional sense. We are just now trying to master some of the mini-functions that are incorporated in them, fitting them together is a question for another century in all probability.
 
Flavour said:
them chess computers have been sneakily getting a bit big for their boots, if you ask me
However, even though the best of them are now better than the best human players, the chess they play is still a little distinctive. It is possible, for instance, to present them with certain positions in which they won't understand what's going on.
 
I had a go on one of the latest 'Turing' toys.

It's neural-net based and was 'educated' by all the people logging onto the site and talking to it.

It's outrageously haughty, rude and camp. :cool:
 
gurrier said:
Bob's dead wrong though. AI, in one form or another, is everywhere already. It's just that the "intelligence" is generally deployed in a fairly domain specific and limited way. Intelligence in this domain just means that the machine has the ability to solve problems that it hasn't been explicitly told the answer to. It's not at all limited to emulating the human brain.
Yes, intelligence on the level of insects or thereabouts;)
Even disk caches are a step in the direction of adaptiveness in a small way; there is a huge range of sophistication...


In the field of machine learning, you've got things like SWARM technology, genetic algorithms, neural networks and a whole host of other techniques for finding solutions in intractable search spaces. Expert systems are pretty old at this stage too.
(Most types of) neural nets aren't capable of processing complex structures, only curve fitting. Expert systems are pretty useless for learning. Genetic algorithms are the closest thing to genuinely creative, open-ended learning on a computer, but will need faster computers to become widespread.

Other fields which employ AI in one form or another are semantic web languages (ontologies), multi-agent systems, autonomics, cybernetics and so on.

One of the conclusions of the simplistic, statistically oriented and connectionist approaches to artificial intelligence from the 1990s is that intelligence isn't a single thing - there's a whole host of complex processing that goes on in a multitude of different ways to produce something as complex as human behaviour. These have been broken down into more tractable problems which legions of researchers are beavering away at solving as we speak.
And depending on whether the problems have been broken down in the right way, the process could be very fast or very slow.

It's also quite possible that intelligent machines could evolve under their own steam without us understanding how they manage it properly. Still, from our current understanding it appears that we are still an awfully long way away from creating something that can properly emulate a human brain - it's not the number of neurons that is the problem, it's the fact that they are highly structured entities in a functional sense. We are just now trying to master some of the mini-functions that are incorporated in them, fitting them together is a question for another century in all probability.
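The genetic algorithms mentioned above are easy to sketch, if not to scale. Here is a toy version in plain Python; the "count the ones" fitness function is a standard teaching stand-in for a real scoring problem, and all the parameters (population size, mutation rate) are illustrative guesses.

```python
import random

random.seed(1)

# Toy genetic algorithm: evolve a 20-bit string toward all ones.
# "Fitness" is just the number of ones; a real problem would plug in
# any scoring function here, which is what makes the approach open-ended.
TARGET_LEN = 20

def fitness(bits):
    return sum(bits)

def mutate(bits, rate=0.05):
    # Flip each bit with a small probability.
    return [b ^ 1 if random.random() < rate else b for b in bits]

def crossover(a, b):
    # Single-point crossover: splice two parents together.
    cut = random.randrange(1, TARGET_LEN)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(30)]

for generation in range(200):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == TARGET_LEN:
        break
    parents = pop[:10]  # selection: keep the fittest ten
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]

pop.sort(key=fitness, reverse=True)
print(fitness(pop[0]), "/", TARGET_LEN)
```

Nothing here "understands" the problem; selection pressure does the work, which is both the appeal and the black-box worry.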
 
Why post the link to point out common knowledge, then?

We're not all morons you know.

You may be interested to learn that the oceans of the world have lots of salt dissolved in them.
 
8ball said:
I had a go on one of the latest 'Turing' toys.

It's neural-net based and was 'educated' by all the people logging onto the site and talking to it.

It's outrageously haughty, rude and camp. :cool:
One problem with the Turing test is that it turns out that you can do pretty well by just using some very simple tricks and techniques to appear as if you're understanding the conversation - picking a noun from a sentence and asking a question about it, for example, or starting sentences with "yes, but" and going on to say whatever you were going to say anyway (Turing obviously didn't study the interview techniques of politicians). A few of these heuristic tricks put together can go a long way towards convincing a naive interlocutor that the machine is a person. The problem is that to get beyond the 90% horizon and convince somebody who knows what to look for, you need vast complexity. In most fields of AI it's a similar situation - simple heuristic tricks and techniques give you 90%, but any advance on that requires mind-boggling complexity.

For example, a simple way to identify a machine interlocutor in the Turing test is to make obvious references to historical context, the sort of stuff that any human might be expected to know about, but a machine won't unless it's been explicitly told - the second world war, the world cup final, that sort of thing.
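The noun-echo and "yes, but" tricks described above can be sketched in a few lines. Everything here (the filler-word list, the crude noun heuristic) is invented for illustration and is not any real chatbot's code, but it's the ELIZA-style principle in miniature.

```python
import random

random.seed(42)

# Words we skip when hunting for something to echo back.
FILLER = {"the", "a", "an", "i", "you", "is", "are", "was", "my", "it",
          "to", "of", "and", "that", "in", "have", "do", "very"}

def guess_noun(sentence):
    # Crude heuristic: the last non-filler word is often a noun.
    words = [w.strip(".,!?").lower() for w in sentence.split()]
    content = [w for w in words if w and w not in FILLER]
    return content[-1] if content else None

def reply(sentence):
    noun = guess_noun(sentence)
    if noun and random.random() < 0.5:
        # Trick 1: pick a noun and ask a question about it.
        return f"Why do you mention {noun}?"
    # Trick 2: "yes, but" plus whatever you were going to say anyway.
    return "Yes, but have you considered it from the other side?"

print(reply("I have been thinking about my holiday"))
```

A handful of rules like this gets you the 90%; the historical-context questions gurrier mentions are exactly where it falls over.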
 
Agree totally, and the Turing toy wasn't actually convincing as a human being, but it was an interesting and amusing experiment.


I think when we start getting close to AI we probably won't recognise it at first because it will probably be part of a dedicated functional system doing traffic management or something.
 
gurrier said:
(Turing obviously didn't study the interview techniques of politicians).

Good point.

I think it's important to remember that the first time Alan Turing mentioned something like the Test, it was in the context of deciding whether your interlocutor was a man or a woman, using only a Telex machine.

Bit like here, really.

I'm a panda in real life.
 
gurrier said:
One problem with the Turing test is that it turns out that you can do pretty well by just using some very simple tricks and techniques to appear as if you're understanding the conversation - picking a noun from a sentence and asking a question about it, for example, or starting sentences with "yes, but" and going on to say whatever you were going to say anyway (Turing obviously didn't study the interview techniques of politicians). A few of these heuristic tricks put together can go a long way towards convincing a naive interlocutor that the machine is a person. The problem is that to get beyond the 90% horizon and convince somebody who knows what to look for, you need vast complexity. In most fields of AI it's a similar situation - simple heuristic tricks and techniques give you 90%, but any advance on that requires mind-boggling complexity.

For example, a simple way to identify a machine interlocutor in the Turing test is to make obvious references to historical context, the sort of stuff that any human might be expected to know about, but a machine won't unless it's been explicitly told - the second world war, the world cup final, that sort of thing.


Ahhh now I understand why the computer in 2001 (the book) was called HAL (Heuristic algorithm summat or other)
 
What about the point of singularity, as it's called? It's the theory that once computers reach a certain level of processing power and we use those computers to design other machines, we start losing sight of what's possible as the complexity and ability of the programmes the computers write themselves increase exponentially.

And fundamentally, is organic intelligence actually mimickable in binary? Even if it were possible, would true AI not just be something subtly different - just an extremely complex program?

hmm.
 
The idea that we'll be left behind by computers is a bit silly, I think.

I think it's more likely we'll adopt more inorganic parts (initially for medical reasons and later for convenience) and computers will become more organic and we'll just have 'people'.

The idea that they'll 'overtake' us in some way is just a good way of keeping Hollywood in plot-fodder for bad (and some good) sci-fi flicks.
 
About 5 years ago I heard about some Canadian scientists, I think, who had got 6 leech neurons to perform basic calculations. Then there's this field of nanocomputing, which I don't really understand.

Speaking of bad sci-fi, I had a half-baked idea that the internet would give rise to a machine consciousness, utilising nodes and distributed processing power to replicate and grow. It's probably already been written about several times.
 
8ball said:
The idea that we'll be left behind by computers is a bit silly, I think.
It depends on what you mean by "left behind". We're already in a situation where the complex inter-dependencies of, for example, managed network devices in telecommunications companies can't really be understood by people. Most of the new management technology offers only statistical guarantees (e.g. it will do the right thing 99.99% of the time, the rest of the time it will decide to do something weird) and can't actually say what the system will do or why in any particular situation. Mobile multi-agent management architectures are probably the best example - you have lots of simple programmes which migrate around the network doing stuff and talking to each other - there is no overall plan - the coordination is an emergent property of all the decisions taken by the agents. The most you can do to change their behaviour is to fiddle with the goal weightings, you can't actually tell any individual agent what to do.

It makes such systems hard to sell though. "No, I don't have a clue why they're doing that stuff and I can't tell them not to, but it works 99.99% of the time". Network managers don't seem to like such statements. ;-)
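The "goal weightings" idea can be caricatured in a few lines. All the goal names and scores below are invented for illustration (none of this comes from a real network-management product); the point is just that you tune an agent's priorities, never its actions.

```python
# Each agent scores candidate actions against weighted goals and picks
# the best. Nobody tells an agent what to do; the operator can only
# tilt the weights and watch what emerges.

GOALS = {"throughput": 0.6, "power_saving": 0.3, "fairness": 0.1}

# How well each action serves each goal (made-up scores in [0, 1]).
ACTIONS = {
    "reroute_traffic":  {"throughput": 0.9, "power_saving": 0.2, "fairness": 0.5},
    "sleep_idle_links": {"throughput": 0.3, "power_saving": 0.9, "fairness": 0.4},
    "rebalance_load":   {"throughput": 0.6, "power_saving": 0.4, "fairness": 0.9},
}

def choose(weights):
    def score(action):
        return sum(weights[g] * ACTIONS[action][g] for g in weights)
    return max(ACTIONS, key=score)

print(choose(GOALS))  # -> reroute_traffic (throughput dominates)
print(choose({"throughput": 0.1, "power_saving": 0.8, "fairness": 0.1}))
# -> sleep_idle_links once the weights are retuned
```

Multiply this by hundreds of migrating agents talking to each other and you get coordination nobody planned, which is both the strength and the sales problem.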
 
gurrier said:
"No, I don't have a clue why they're doing that stuff and I can't tell them not to, but it works 99.99% of the time".

Network managers don't seem to like such statements.

:D
 