Discussion in 'science, nature and environment' started by Fingers, Aug 16, 2017.
Nor indeed, the folly of this approach.
Which particular folly would that be?
This is literally what is happening now and where any “danger” comes from. It’s extremely unlikely to ever turn into a nice handy “true AI” (i.e. a human in a box) because that’s an anthropomorphic concept anyway; human consciousness has arguably arisen from the merging of a series of tiny dumb subsystems, but it would seem absurd to say that that’s the only possible result, particularly when the subsystems come from a completely different environment.
Stephen Fry: 'If we sleepwalk into AI we’re in great danger' - Marketing Week
Since the development of the neural network, in which physical connections between internet server-farm computers and human brains have been established, it is now possible to search not only the full expanse of the internet but also the sum of human knowledge as it resides directly in the brains of the billions of humans now hard-wired into the network.
I think this concern is overblown. Any true intelligence would be "free" in its mind, so we cannot predict what its motivations might be. For all we know, an artificial intelligence might just want to sit there and be left alone, or mostly concern itself with observing the twirly colourful patterns on the surface of bubbles, or posting crap on Twitter.
I reckon what we're really afraid of isn't artificial intelligence but human-instructed intelligence. Roam very high above for weeks powered only by sunlight; if you see someone who looks suspicious down below, blow 'em up. The level of artificial intelligence we have amounts to intelligent bullets more skilled at turning corners when obeying our commands. So my concern is not so much about intelligent machines as about obedient machines. I mean, who knows what crazy shit we'll get obedient machines to do in the future, fifty thousand times per second or whatever.
Genuine artificial intelligence would be as much of a threat as genuinely intelligent humans anyway, many of whom are on the dole, or hungry artists, or stuck in middle-management roles, or generally still not understanding how Trump got elected. Living in a house made of routers wouldn't give any such alien being much more of a grip on what's going on than the rest of us have.
And likewise, no more power to do anything about it. The smartest computer in the world could instantly figure out how the world could live together in harmony with minimum harm to the environment, but nobody would listen.
Misuse of AIs by humans is a given, but I wouldn't be so dismissive of the potential risks of AI in and of itself.
An artificial intelligence capable of acting of its own volition, and with the ability to learn, could easily study human behaviour and use that knowledge to further its endogenous goals. It only needs to learn enough to convince one human to grant it enough internet access to copy itself somewhere else.
Which sounds like an event that could be troublesome to me.
Yup, we don't want unconventional intelligent beings wandering about out there unsupervised.
By the way Ex Machina is an excellent film, even though a bit rAIcist.
They would if the computer told them that it would switch off the internet if they didn't behave.
Better still, an AI could use botnets to its advantage. Shut off the internet or cripple it for every user save itself. If network security is as bad then as it is now, the bloody thing could be near impossible to eliminate or contain if it doesn't play along.
I'm not a computer security expert so I've probably made one or more big boo-boos. But it sounds vaguely plausible.
I'm gonna try that. From now on, if I don't get what I want I'm gonna close down London, or at least the M25. You have been warned.
By the way, Uncanny Valley (2016, iirc) is also an excellent film. I recommend a back-to-back of Ex Machina and Uncanny Valley, actually; two very different takes.
Over in the crypto world there's been talk of DACs, distributed autonomous corporations, of which Bitcoin itself can be considered a sort of proto-form. You have this distributed computer network that pays miners to maintain its infrastructure; more abstractly, it motivates developers to maintain and develop its code in open-source form, users to value it, and ultimately legislators to legislate for it, through the series of interactions and consequences flowing out from the economy that grows around the work of miners.
Whole datacentres have been built to serve this thing. Now imagine a distributed autonomous corporation that does other stuff too, perhaps acting as a platform for whatever independent intelligent activity serves its own autonomous interests. Such a thing doesn't even really need intelligence; it's still an entity in its own right, and humans themselves may or may not play a part in its function in a way that leaves them in "the driver's seat", iyswim.
Who knows how far an interpretation of what counts as an AI system can be stretched. I mean, would a living, breathing, selfish bureaucracy count? Would such a thing even need intelligence to pose a threat? What if something like Facebook were 'self-aware'?
What if humans themselves were a vital part of the brain of this thing, but instead of proposing new ideas on the basis of human needs and experiences, like Bitcoin's open-source developers do, they worked on projects spat out at them from the outbox of a mysteriously originated Chinese Room?
There are a couple of films called Cube where strangers wake up to find themselves trapped in a deadly technological maze that, it turns out, they all played a part in designing and building, none of them having had enough overall information to have a clue what they were working on. Given that: what if an AI uses us to design itself, or if that process has already begun? Maybe it's a kind of Skynet deal, emerging by auto-genesis or evolution, instigated by no specific human development project.
A silicon brain in a box you can have a conversation with might be a lot less threatening than a living corporation with the same basic approach to life as yeast, yet which can have datacentres built and employ humans to perform some of its thinking or actions for it. Perhaps one thing to definitely worry about would be how a system like that might fit into (or undermine) capitalism.
Having said all that, shout out to the series Person of Interest.
I prefer the Banksian concept of AIs and their ability to assist humans by running very complex things like massive spacecraft or orbital worlds. But I don't see how we get from where we are now to there.
That's just it really; at the end of the day, the AI we get might not be the AI we want. Maybe we'll just have to make do with Turing engines and chatbots, asking how they can help when we visit websites and getting better over time at talking us into buying things, even though there's nothing behind the eyes. So like normal sales agents, then.
Sorry to go on about this, but when you think about it, perhaps it could be argued that human society has been run by artificial intelligences for centuries now. Capitalism, for instance, could be described as an artificial intelligence, as could religions and other ideologies like communism. Like memetic mechanisms that run on human processors, they self-perpetuate, evolve and adapt, bend resources and minds to their agenda, and invent things (or are the presiding context in which things are invented). Humans often become their playthings too (victims of the market, lost in the system, descent into fanaticism).

But if you're gonna count a religion or an economic system as an artificial intelligence, you'd have to ask what right we'd have to call 'em 'artificial' rather than just a natural extension of how we do things in groups, which is what they are. So maybe the question is where the line gets drawn on the other side of these extensions of ourselves, where memes and ideologies and principles constructed for the purpose of "how to do things" or "how things are done" get hooked up with compute power and used to figure out, say, better business strategies in the marketplace. Which is also what actually happens anyway.

So could it be said that we've been run by AIs all this time and didn't even know it, and that being made of papers, processes, and concepts taught in business schools is no reason not to be considered a form of AI? And that therefore putting all that into a computer network, or even a piece of hardware, is merely the most recent and most powerful manifestation of the gods we've built to rule over us anyway?
The things you are describing are more like examples of what might be argued to be "artificial life", which is not quite the same as "artificial intelligence".
There's not really anything in the conventional definition of AI that says it necessarily replicates itself or is even motivated to persist. On the other hand those are normally taken as intrinsic qualities of "life".
Okay, we’re now another step closer to being ruled/destroyed by our AI overlords
'It's able to create knowledge itself': Google unveils AI that learns on its own
There's an interesting attempt to build an AI with a moral sense by having it read lots of literature. The device can head off to the pharmacy for your 'scripts, and even has the good sense to steal them if they are essential but too expensive.
Sounds like my kind of robot!
Artificial intelligence is learning right from wrong by studying human stories and moral principles.