Friday, 31 March 2017
Avoiding the Robot Apocalypse
As apocalypse scenarios go, this one is quite familiar: somebody builds a robot that can actually 'think' for itself; then, in flagrant disregard of Asimov's Three Laws of Robotics (the first of which forbids doing harm to humans), the robot runs amok and proceeds to wipe out the entire human race.
Luckily, we've watched that movie, come out of the movie theater (alive) and managed to put our Terminator and Skynet fears to bed.
Not so Elon Musk and other AI futurists, who envision bad things happening as a result of the rapid advancement, and unintended consequences, of developments in artificial intelligence. This is according to Maureen Dowd, writing in the March issue of Vanity Fair magazine: Elon Musk's Billion-Dollar Crusade to Stop the A.I. Apocalypse.
The premise of the idea that humans will invent robot overlords is that human intelligence and ability are static; that we'll be able to incubate a super-intelligence in machines that will, to our detriment, surpass our own.
The best guard we have against rogue A.I., according to some of the Silicon Valley luminaries interviewed in the piece, is a 'kill switch' to shut it down. Or, in a strange case of 'if you can't beat them, join them', it's suggested that we turn ourselves into cyborgs: half human, half machine.
If A.I. is going to be this much of a threat to the survival of the species then, as in the movie Jaws, we're clearly going to need a bigger boat. That bigger boat may be provided by our own unexplored human potential.
For all the human ingenuity that has brought us to this stage of technological development, where we're capable of creating our own likeness in machine form, no human invention so far surpasses the prototypes found in Nature. Is the camera a better piece of design engineering than the human eye, for example?
Nature hides the complexity of her design genius in the cloak of simplicity, efficiency and economy. By comparison, our most vaunted inventions have a primitive crudeness that we cannot see.
On that basis, the human being, the current pinnacle of Nature's iterative design-improvement process (evolution), is more marvellous and wondrous than any robot or 'super' intelligence humans themselves could hope to create.
We were not given the user manual for how we work, so, since the beginning of the biological sciences, we've been trying to piece it together with reverse engineering. Neuroscience can't fully explain how the brain works, or where consciousness and the sense of personal identity come from.
If we could somehow get hold of Nature's user manual for humans, might there not be a chapter at the back entitled 'Advanced Functions'? There's enough scope in our ignorance, for us not to rule out the possibility that there's more to us than meets the eye.
After all, only 2% of our DNA actually codes for the proteins of which our bodies are made. Is the remaining 98% really 'junk' DNA? In recent years, geneticists have discovered that the term 'junk' DNA is a misnomer, a fig leaf for our ignorance, and that there is in fact hitherto unknown function in this non-coding DNA.
Some of this function is regulatory: the equivalent of a traffic cop directing the traffic of gene expression that makes us who we are, and which could determine illness, ageing* and a range of different abilities.
Here's how the New York Times bestselling author Yuval Noah Harari summarises the idea of unrealised human potential in his book Homo Deus:
"Biological engineering starts with the insight that we are far from realising the full potential of organic bodies. For 4 billion years natural selection has been tweaking and tinkering with these bodies, so that we have gone from amoeba to reptiles to mammals to Sapiens. Yet there is no reason to think that Sapiens is the last station. Relatively small changes in genes, hormones and neurons were enough to transform Homo erectus - who could produce nothing more impressive than flint knives - into Homo sapiens who produces spaceships and computers. Who knows what might be the outcome of a few more changes to our DNA..."
So how might Superhumans avoid the Robot Apocalypse?
The answer may relate more to survival. When a calculus of human existential risk is done, catastrophic climate change, a pandemic resulting from anti-microbial resistance, or a nuclear mishap born of geopolitics are all more proximate dangers than being done in by the robots.
So if we can survive or avoid the first three risks, then outsmarting the smartest robots will be a piece of cake. We must have been rehearsing for it for some time now, in the popular imagination, through all the Marvel movies we've been watching.
* Superhumans are really coming - unpacking junk DNA