
How close is AI?

AI is a strange and complex thing

There have been programs that use natural language processing to make you believe you're having a conversation with a person, but in my opinion that misses the point of the Turing test.

To me, Turing was showing that there is no physical test for sentience, and that one of the few ways to test a computer is to explore its intellectual capacity through debate, and via that debate decide whether the information it returns is just stored information or some form of sentient thought process.
 
At the moment I think AI will become practical once we can make a synthetic copy of the human mind. Beyond that, maybe something radically different will appear, in the same way birds are different from planes: planes might not be as efficient, but they're a lot faster.

On the semiconductor side: Intel's latest chip has 582 million transistors, and Wikipedia suggests there are 100,000 million neurons in the human brain.
The number of transistors doubles every 18 months.
So.... ;)
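A back-of-the-envelope version of that "So...", taking the figures above at face value (and ignoring that a transistor is nothing like a neuron):

```python
import math

transistors = 582e6      # Intel's latest chip, as quoted above
neurons = 100_000e6      # ~100,000 million neurons in the human brain

# How many 18-month doublings until the transistor count matches the neuron count?
doublings = math.log2(neurons / transistors)
years = doublings * 1.5

print(f"{doublings:.1f} doublings, roughly {years:.0f} years")
```

That comes out at roughly seven and a half doublings, or a little over a decade on these numbers — which says nothing about whether transistor count is the right thing to be counting.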
 
All the talk about "AI" reminded me of this:
[attached image: Demon Seed (1977) poster]

:D
 
AI is miles off, miles and miles and miles.

Right now the best we can do is something called an expert system. Or, in English, a program or series of programs that pretend to be an expert on a particular topic (like: which method of bracing should I use to reinforce this oil rig, considering the location, sea states and age of the design?). On the down side, they are only good at what you teach them to do, and the cost to develop them goes up exponentially when you want to expand their abilities.
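In code, an expert system of that sort is ultimately a hand-written rule base. A toy sketch of the oil-rig example — the rules, methods and thresholds here are entirely invented for illustration; a real system would encode hundreds of expert-derived rules:

```python
# Toy "expert system": a hand-coded rule base mapping facts about a structure
# to a recommendation. All rules and numbers below are made up for illustration.

def recommend_bracing(sea_state: int, design_age_years: int) -> str:
    """Pick a (hypothetical) bracing method from hard-coded rules."""
    if sea_state >= 7 and design_age_years > 20:
        return "X-bracing with full joint reinforcement"
    if sea_state >= 7:
        return "X-bracing"
    if design_age_years > 20:
        return "K-bracing with inspection schedule"
    return "standard diagonal bracing"

print(recommend_bracing(sea_state=8, design_age_years=25))
```

It "knows" exactly what it was told and nothing else — which is the point being made about why extending one costs so much.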


There's stuff like linguistic representation, fuzzy logic, neural networks, learning algorithms and all sorts of other very cool-sounding words. However, they all run up against the same wall: we can't yet make an artificial idiot, let alone something that could be described as intelligent.
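For a sense of what a "learning algorithm" is at its simplest, here is a perceptron — a few weights nudged by plain arithmetic until the outputs match the targets. This sketch learns the logical AND function; nothing inside it resembles thought:

```python
import random

# A minimal perceptron learning the AND function. "Learning" here is just
# nudging two weights and a bias whenever the output is wrong.
random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for _ in range(20):                          # a few passes over the data
    for (x1, x2), target in data:
        out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
        err = target - out
        w[0] += 0.1 * err * x1               # nudge weights toward the answer
        w[1] += 0.1 * err * x2
        b += 0.1 * err

preds = [1 if w[0]*x1 + w[1]*x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)  # → [0, 0, 0, 1], i.e. it has "learned" AND
```

Scaling that up to anything resembling an idiot, never mind an intelligence, is the unsolved part.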
 
We will be able to build an artificial brain once we know how the real thing works. Which we have only just started. Ages yet. But possible, nay inevitable. (as long as we don't blow ourselves up etc)
 
Crispy said:
We will be able to build an artificial brain once we know how the real thing works. Which we have only just started. Ages yet. But possible, nay inevitable. (as long as we don't blow ourselves up etc)
But isn't it theoretically possible that someone will build or programme something that starts behaving in a 'brain-like' or 'intelligent' way, but which has a very different structure to the human brain? If so then AI might arise suddenly and before we have fully understood the human brain. It might even arise without us understanding the new machine or programme that is producing the 'intelligent' output or behaviour.
 
TeeJay said:
But isn't it theoretically possible that someone will build or programme something that starts behaving in a 'brain-like' or 'intelligent' way, but which has a very different structure to the human brain? If so then AI might arise suddenly and before we have fully understood the human brain. It might even arise without us understanding the new machine or programme that is producing the 'intelligent' output or behaviour.
No.

Well, yes it is theoretically possible. But it's really not in the cards today, sort of like expecting transatlantic powered flight from 19th century steam power.

As for "suddenly arise", that's not going to happen. AI is something we're working at, and not making much headway either. A new method of predicting the optimum time to toast your bread is not going to evolve into Skynet. The scale you're talking about is beyond huge, nor is it likely that a learning algorithm is going to suddenly start going "I think therefore I am". They don't work that way.
 
Bob_the_lost said:
No.

Well, yes it is theoretically possible.
So that's a yes then? ;)
But it's really not in the cards today, sort of like expecting transatlantic powered flight from 19th century steam power.
I didn't put a time on it - just pointing out that it is not necessarily going to follow after the human brain has been 'worked out'.
As for "suddenly arise" that's not going to happen, AI is something we're working at, and not making much headway either.
A lot of scientific discoveries and breakthroughs 'suddenly arise'. There is no way you can say that it is 'not going to happen': it might be that some new system, machine, biological construction or software starts showing strange and unexpected new results. It could even arise completely accidentally, outside of an AI-research-related context. We really don't know what will happen in the future.
 
Aren't they building rovers for Mars which can work out which bit is broken and what they should do in terms of movement to continue progressing across the surface? I.e. which bit is broken, why is it broken, how can I move from here...

It can question itself and work things out, which is a form of AI, in that it is self-aware enough to assess and re-evaluate its status... it was in New Scientist a few weeks ago, I think...
 
Yep, they're doing some clever things. However, the problem of genuine AI is so different from the ones they're solving now that such things do not actually get us any closer.

TeeJay, something as complex as the human brain would be noticeable from the outset. There's nothing we're trying to do on anything near that order of complexity, so there really is nothing we're doing that might 'accidentally' 'wake up'.
 
Crispy said:
Yep, they're doing some clever things. However, the problem of genuine AI is so different from the ones they're solving now that such things do not actually get us any closer.

TeeJay, something as complex as the human brain would be noticeable from the outset. There's nothing we're trying to do on anything near that order of complexity, so there really is nothing we're doing that might 'accidentally' 'wake up'.
Surely the ability to question one's existence and then quantify it, however limited that is at present, is a good step forward to establishing an intelligence which will be able to reason for itself and therefore make decisions...

It depends, surely, on what functions we wish AI to emulate?

Is it perfect 20/20 vision and the ability to run, jump and climb trees, or is it to decipher the Heisenberg theorem and calculate pi to its infinite point... or do we want AI which is as limited as human intelligence? Or as flawed? Or are we seeking to create a better version of HI?
 
TeeJay said:
But isn't it theoretically possible that someone will build or programme something that starts behaving in a 'brain-like' or 'intelligent' way, but which has a very different structure to the human brain? If so then AI might arise suddenly and before we have fully understood the human brain. It might even arise without us understanding the new machine or programme that is producing the 'intelligent' output or behaviour.
Well, this seems to stem from the concept that all intelligence resides within the brain. There is a building level of evidence which purports to have found intelligence centres in other areas, namely the heart, which also stores other information...

There was something on Channel 4 about this recently (again, I'm trying to find sources which aren't hat-stand...)
 
GarfieldLeChat said:
Surely the ability to question one's existence and then quantify it, however limited that is at present, is a good step forward to establishing an intelligence which will be able to reason for itself and therefore make decisions...
Oh, that would be, but no one's managed anything like that. What you describe is an expert system: it will eliminate possibilities based upon a list of possible options and results from sensors, and that's not self-awareness. It'll learn, but again, I can code that in 5 minutes without even resorting to fuzzy logic. It won't evolve, it won't question, it'll just do.

It's nothing more than a big "if then" program, or at best a fuzzy logic implementation of the same.
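A "fuzzy logic implementation of the same" really is just graded arithmetic over if/then rules: inputs get degrees of membership between 0 and 1 instead of hard true/false. A minimal sketch, with membership breakpoints invented purely for illustration:

```python
# Minimal fuzzy-logic sketch: inputs get graded truth values in [0, 1],
# and rules combine them with min/max instead of hard booleans.
# The breakpoints below are invented for illustration.

def high_sea_state(s: float) -> float:
    # Degree to which a sea state counts as "high": ramps from 0 at 4 to 1 at 8.
    return min(max((s - 4) / 4, 0.0), 1.0)

def old_design(age: float) -> float:
    # Degree to which a design counts as "old": ramps from 0 at 10 to 1 at 30.
    return min(max((age - 10) / 20, 0.0), 1.0)

def needs_reinforcement(s: float, age: float) -> float:
    # Rule: IF sea state is high AND design is old THEN reinforce.
    # Fuzzy AND is taken as min() here.
    return min(high_sea_state(s), old_design(age))

print(needs_reinforcement(8, 30))   # fully high, fully old
print(needs_reinforcement(6, 20))   # partially both
```

Smoother than a bare if/then, but it is still entirely "do what you're told" — no questioning going on.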

It can only do what it's told to do, and fault finding and path finding are both rather small money in this realm.

TeeJay, you're not getting it. This ain't science, baby, this is engineering. As for spontaneously generating: no. This won't happen in someone's garage. Even little expert systems can chew through CPU time at a rate you wouldn't believe.
 
Bob_the_lost said:
Oh, that would be, but no one's managed anything like that. What you describe is an expert system: it will eliminate possibilities based upon a list of possible options and results from sensors, and that's not self-awareness. It'll learn, but again, I can code that in 5 minutes without even resorting to fuzzy logic. It won't evolve, it won't question, it'll just do.

It's nothing more than a big "if then" program, or at best a fuzzy logic implementation of the same.

It can only do what it's told to do, and fault finding and path finding are both rather small money in this realm.

TeeJay, you're not getting it. This ain't science, baby, this is engineering. As for spontaneously generating: no. This won't happen in someone's garage. Even little expert systems can chew through CPU time at a rate you wouldn't believe.

Sure, but it's the working out how it will then travel, and continue to travel, with in effect a broken leg which impressed me...
 
Bob_the_lost said:
TeeJay, you're not getting it. This ain't science, baby, this is engineering. As for spontaneously generating: no. This won't happen in someone's garage. Even little expert systems can chew through CPU time at a rate you wouldn't believe.
I ain't getting *what* exactly? What "isn't science"? Did I ever mention "spontaneous generation"? The kinds of things I am thinking of are people messing around with (designing?) biological systems, new kinds of computing, new kinds of software: It might be that people pursuing other types of research come across systems (be they biological, mechanical or software based) that exhibit strange types of behaviour and they might also find that they are able to influence this behaviour - in effect 'programme' the system to some degree.

This topic isn't just about "engineering" at all, because fundamentally "AI" is a philosophical issue as much as a nuts-and-bolts issue: the meaning of "intelligence" (or "artificial", for that matter) is not clear cut, nor is any way of 'measuring' or proving it.

I am not trying to claim that any of these things will happen in the near future, I am merely pointing out it isn't necessary to totally understand the human brain before 're-designing' a mechanical equivalent - or even a bio-engineered, software or other more exotic kind of system that displays the relevant behaviour or properties that people would be happy to label "AI". It is also possible that systems like this - even on a very small scale - will emerge from all kinds of branches of science, not just finally emerge from an engineering lab in the form of a standard 'computer'.
 
Biological computing is still computing. New software is still software. If you're not pursuing AI or trying to use it for an application then you won't be using learning subroutines or self-modifying code. How exactly intelligence can be coded I do not know; how it could be done by accident is beyond my imagination.

You may not need to understand the human brain, but understanding its complexity does help. If it could be done more simply, then why do we have these dangerously large heads?

New approaches? Almost certainly. Neural networking is as close as we've come to simulating the thought process, and it's very slow and very clunky.
 
Bob_the_lost said:
How exactly intelligence can be coded I do not know; how it could be done by accident is beyond my imagination.

You don't seem to be considering the possibility of stumbling onto a 'black-box' solution to AI.

It may be possible, especially experimenting with biological systems, that you could get something that gives consistent intelligent behaviour in a controlled manner without understanding how the internal processing works in detail.
 
AI won't mimic human thinking unless we can insert nanotechnology into living cells and perform the same function its nucleus does.
Someone once suggested that consciousness is the brain's electrical impulses creating an EM field, which all mental activity encounters. Intelligence must have some form of self-awareness.
Human intelligence is rooted in our exteroceptive senses, which no computer system has ever been programmed to face.
Why can't we work backwards, from human to computer? I was going to start a thread on 'human (mental) ability without emotions'. For those that have read Dune or the Galactic Milieu series, think briefly of the Mentats and the CE rigs respectively. Could they be made possible without tampering with the inner workings of the brain? The first appears to be simply meditative practice, while the latter requires that the brain 'disconnect' from all neural ties it has with the body.
The latter is intriguing (IMHO).
 
Sudden breakthroughs do happen. It wouldn't be stumbling across a complete AI system, but it could possibly be a sudden insight that could allow research to make more progress in the next five years than it has in the last fifty years.

It is also possible that most research done before computers reach a certain level of power may be a complete write-off, forced to ignore the important in favour of the practical. The interesting stuff may be just about to start, or may be able to start in ten years.

And remember: the NSA, aka the National Security Agency, Never Say Anything, or Nefarious Singularity Agency, is the largest employer of mathematicians (in the USA at least) and has access to the world's most powerful supercomputers, so real AI may already exist and be somewhere between learning how to tie its shoelaces and planning how to crush carbon-based life :eek:
 